Feed aggregator

Reducing manual effort in coverage closure using CCF commands

EDN Network - 3 hours 12 min ago

Ensuring the reliability and performance of complex digital systems has two fundamental aspects: functional verification and digital design. Digital design predominantly focuses on the architecture of the system: logic blocks, control-flow units, and data-flow units. However, design alone is not enough.

Functional verification plays a critical role in confirming that the design (digital system) behaves as intended under all expected conditions. It involves writing testbenches and running simulations that exercise the functionality of the design and catch bugs as early as possible. Without proper verification, even the most well-designed system can fail in real-world use.

Coverage is a set of metrics that determines how thoroughly a design has been exercised during simulation. It checks whether all required input combinations have been exercised in the design.

There are several types of coverage used in modern verification flows, the first one being code coverage, which analyzes the actual executed code and its branches in the design. Functional coverage, on the other hand, is user-defined and tests the functionality of the design based on the specification and the test plan.

Coverage closure is a crucial step in the verification cycle. This step ensures that the design is robust and has been tested thoroughly. With an increase in scale and complexity of modern SoC/IP architectures, the processes required to achieve coverage closure become significantly difficult, time-consuming, and resource intensive.

Traditional verification involves a high degree of manual intervention, especially if the design is constantly evolving. This makes the verification cycle recursive, inefficient, and prone to human errors. Manual intervention in coverage closure remains a persistent challenge when dealing with complex subsystems and large SoCs.

Automation is not just a way to speed up the verification cycle; it gives us the bandwidth to focus on solving strategic design problems rather than repeating the same tasks over and over. This research is based on the same idea; it turns coverage closure from a tedious task to a focused, strategic part of the verification cycle.

This paper focuses on leveraging automation provided by the Cadence Incisive Metrics Center (IMC) tool to minimize the need for manual effort in the coverage closure process. With the help of configurable commands in the Coverage Configuration File (CCF), we can exercise fine control over coverage analysis, reducing the need for manual adjustments and making the flow dynamic.

Overview of Cadence IMC tool

IMC stands for Incisive Metrics Center, which is a coverage analysis tool designed by Cadence to help design and verification engineers evaluate the completeness of verification efforts. It works across the design and testbench during simulation to collect coverage data stored in a database. This database is later analyzed to identify the areas of design that have been tested and those which have not met the desired coverage goals.

IMC uses well-defined metrics and commands for both code and functional coverage, which provide a detailed view of coverage results and identify gaps to improve testing. The flow includes a user-defined file called the CCF, which contains commands to control the type of coverage data to be collected, excluded, or refined.

This paper covers several commands—“select_coverage”, “deselect_coverage”, “set_com”, “set_fsm_arc_scoring”, and “set_fsm_reset_scoring”—which handle different aspects of coverage. The “select_coverage” and “deselect_coverage” commands automate the inclusion and exclusion activity by selecting specific sections of code as required, thus eliminating the manual exclusion process.

The “set_com” command avoids manual effort by automatically excluding coverage for constant variables. Meanwhile, the “set_fsm_arc_scoring” and “set_fsm_reset_scoring” commands focus on enhancing finite state machine (FSM) coverage by scoring state and reset transitions for the FSMs present in the design.

By using this precise, command-driven approach, the techniques discussed in this paper improve productivity and coverage accuracy, which plays a crucial role in today’s fast-paced, complex chip development cycles.

Selecting/deselecting modules and covergroups for coverage analysis

The RTL design is a hierarchical structure which consists of various design units like modules, packages, instances, interfaces, and program blocks. It can be a daunting exercise to exclude a specific code coverage section (block, expr, toggle, fsm) for the various design units in the IMC tool.

The exercise to select/deselect any design units for code coverage can be implemented in a clean manner by using the commands mentioned below. These commands also provide support to select/deselect any specific covergroups (inside classes).

  • select_coverage

The command can enable the code coverage type (block, expr, toggle, fsm) for the given design unit and can also enable covergroups which are present in the given class.

Syntax:

select_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 1 The above snapshot shows an example of the select_coverage command. Source: eInfochips

This command is passed in the CCF with the appropriate set of switches: <-metrics> defines the type of coverage metric (block, expr, toggle, fsm, or covergroup). Depending on the coverage metric, -module, -instance, or -class is passed, followed by the list of modules/instances/classes.
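For instance, a CCF entry of this form would enable block and expression coverage for one module, and a covergroup inside a testbench class (the names ‘alu_core’ and ‘tb_pkg::alu_cov’ are invented for illustration and are not from the design discussed here):

```tcl
select_coverage -block -expr -module alu_core
select_coverage -covergroup -class tb_pkg::alu_cov
```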

  • deselect_coverage

The command can disable the code coverage type (block, expr, toggle, fsm) for the given design unit or can disable covergroups which are present in the given class.

Syntax:

deselect_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 2 This snapshot highlights how deselect_coverage command works. Source: eInfochips

The combination of these two commands can be used to control/manage several types of code coverage metrics scoring throughout the design hierarchy, as shown in Figure 4, and functional coverage (covergroup) scoring throughout the testbench environment, as shown in Figure 7.

The design has a hierarchical structure of modules, sub-modules, and instances (Figure 3). Here, no commands are provided in the CCF, and code coverage scoring for all the design units is enabled, as shown in the figure below.

Figure 3 Code coverage scoring is shown without CCF Commands. Source: eInfochips

For example, assume code coverage (block, expr, toggle) scoring is not required in the ‘ctrl_handler’ module, and block coverage scoring is also not required in the ‘memory_2’ instance; the deselect_coverage commands mentioned in Figure 4 are then used in the CCF. To deselect all the code coverage metrics (block, expr, fsm, toggle), the ‘-all’ option is used. Figure 4 also depicts the outcome of the commands used for disabling the assumed coverage.
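In CCF form, these exclusions would look roughly like this (a sketch; the instance path ‘design_top.memory_2’ is assumed from the example hierarchy and should be verified against your own design tree):

```tcl
deselect_coverage -block -expr -toggle -module ctrl_handler
deselect_coverage -block -instance design_top.memory_2
```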

Figure 4 Code coverage scoring is shown with deselect_coverage CCF commands. Source: eInfochips

In another scenario, the code coverage scoring is required for the ‘design_top’ module, and the toggle coverage scoring is required for the ‘memory_3’ instance. Code coverage for the rest of the design units is not required. So, the whole design hierarchy will be de-selected and only the two design units in which the code coverage scoring is required are selected, as shown in Figure 5. The code coverage scoring generated as per the CCF commands is also shown in Figure 5.

Figure 5 Code coverage scoring is shown with deselect_coverage/select_coverage CCF commands. Source: eInfochips

The two covergroups (cg1, cg2) in class ‘tb_func_class’ are scored when no commands are mentioned in the CCF, as shown in Figure 6. If functional coverage scoring of the ‘cg2’ covergroup is not required, the CCF command mentioned in Figure 7 is used. For de-selecting a specific covergroup in a class, the ‘-cg_name <covergroup_name>’ option is used.
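Based on the syntax above, the command in Figure 7 would take roughly this form (a sketch using the class and covergroup names from the example):

```tcl
deselect_coverage -covergroup -class tb_func_class -cg_name cg2
```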

Figure 6 Functional verification is conducted without CCF command. Source: eInfochips

Figure 7 Functional verification is conducted with CCF command. Source: eInfochips

It’s important to note that the ‘select_coverage/deselect_coverage’ commands have a cumulative effect on the coverage analysis. In the <-metrics> sub-option, ‘-all’ includes all the code coverage metrics (block, expr, toggle, fsm) but does not include the -covergroup metric.

In the final analysis, by using the ‘select_coverage/deselect_coverage’ commands, code/functional coverage in the design hierarchy and from the testbench environment can be enabled and disabled from the CCF directly, which makes the coverage flow neat. If these commands are not used, to obtain a similar effect, manual exclusions from design hierarchy and testbench environment need to be performed in the IMC tool.

Smart exclusions of constants in a design

In many projects, some signals or code sections of a design are never exercised during simulation. Such objects create unnecessary gaps in the coverage database, and manually adding exclusions for these constant objects across all the modules/instances of a design is an exhausting job.

Cadence IMC provides a command which smartly identifies the constant objects in the design and ignores them from the coverage database. It’s described below.

set_com

When the set_com command is enabled in the CCF, it identifies coverage items that remain unexercised throughout the simulation (inactive blocks, constant signals, and constant expressions), omits them from coverage analysis, and marks them as IGN in the generated output file.

Syntax:

set_com [-on|-off] [<coverages>] [-log | -logreuse] [-nounconnect] [-module | -instance]

To enable Constant Object Marking (COM) analysis, provide the [-on] option with the set_com command. When the COM analysis is done, IMC generates an output file named “icc.com”, which captures all the objects marked as constant.

Providing the [-log] option creates the icc.com file and ensures that it is updated on every simulation. The icc.com file is created at the path “cov_work/scope/test/icc.com”. COM analysis for a specific module/instance is enabled by providing the [-module | -instance] option with the set_com command.
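Putting these options together, a CCF line such as the following would enable COM analysis with logging for one instance (the instance name is taken from the example that follows; treat the exact option order as a sketch):

```tcl
set_com -on -log -instance addr_handler_instance1
```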

Figure 8 The above image depicts the design hierarchy. Source: eInfochips

Figure 9 The COM analysis command is shown as mentioned in CCF. Source: eInfochips

Consider that the “chip_da” variable of the design remains constant throughout the simulation. By enabling the set_com command as shown in Figure 9, the variable chip_da will be ignored from the coverage database, which is shown in Figure 10 and Figure 11.

Figure 10 The icc.com output file is shown in the coverage database. Source: eInfochips

Figure 11 Constant variable chip_da is ignored with set_com command enabled. Source: eInfochips

COM analysis

In the CCF, the set_com command is enabled for the addr_handler_instance1 instance.

  • Here, as the set_com command is enabled, the “chip_da” signal, which remains constant throughout the simulation, is ignored from coverage analysis for the defined instances. As shown in Figure 10, the chip_da signal gets ignored in every submodule where it is passed, since chip_da is a port signal and the COM analysis is done based on connectivity (top-down/bottom-up).
  • Along with the port signals, internal signals which remain constant are also ignored from the coverage database. In Figure 10, the “wr” signal is an internal signal, and it’s ignored from the coverage database (also reflected in Figure 11).
  • The chip_da signal is constant (marked IGN) for this simulation. If chip_da toggles (covered/uncovered) in some other simulation and the two simulations are merged, then chip_da is treated as a variable (covered/uncovered) and not as an ignored constant.

It’s worth noting that when the set_com command is enabled for a module/instance, and a port signal is marked as IGN, then the port signals of other sub-modules directly connected to that signal are also marked IGN, irrespective of whether the command is enabled for those modules/instances.

Finally, to avoid the unnecessary coverage that is captured for constant objects and to save time in adding exclusion for such constant objects, the set_com command is extremely useful.

Detailed analysis of FSM coverage

A coverage-driven verification approach gives assurance that the design is exercised thoroughly. For FSM-based designs, several types of coverage analysis are available. FSM state and transition coverage are two ways to perform coverage-driven verification of FSM designs, but on their own they do not constitute complete verification.

FSM arc coverage provides a comprehensive analysis to ensure that the design is exercised thoroughly. To do that, Cadence IMC provides some CCF commands, which are described below.

set_fsm_arc_scoring

FSM arc coverage is disabled by default in ICC. It can be enabled by using the set_fsm_arc_scoring command in the CCF. The set_fsm_arc_scoring command enables scoring of FSM arcs, which are all the possible input conditions for which transitions take place between two FSM states.

Syntax:

set_fsm_arc_scoring [-on|-off] [ -module <modules> | -tag <tags>] [-no_delay_check]

To enable FSM arc coverage, provide the [-on] option with set_fsm_arc_scoring. FSM arc coverage can be enabled for all the FSMs defined in a module by providing the [-module <module_name>] option.

If FSM arc coverage needs to be captured for a specific FSM in the module, assign a tag name to that FSM using the set_fsm_attribute command in the CCF. By providing the [-tag <tag_name>] option with set_fsm_arc_scoring, FSM arc coverage can be captured for that FSM only.
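As a sketch, tagging one FSM and then scoring its arcs might look like this (the tag name ‘fsm2_tag’ is invented, the module name comes from the later example, and the exact set_fsm_attribute argument order should be checked against the tool documentation):

```tcl
set_fsm_attribute -module fsm_design_two -tag fsm2_tag
set_fsm_arc_scoring -on -tag fsm2_tag
```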

set_fsm_reset_scoring

A state is considered a reset state if the transition to that state is not dependent on the current state of the FSM; for example, in the code shown below.

Figure 12 Here is an example of a reset state. Source: eInfochips

State “Zero” is a reset state because the transition to this state is independent of the current state (ongoing_state). By default, the FSM reset state and transition coverage are disabled in ICC, as shown in Figure 13. They can be enabled using the set_fsm_reset_scoring command in the CCF. This command enables scoring for all the FSM reset states and transitions leading to reset states that are defined within the design module.
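A minimal SystemVerilog sketch of such a reset state (written to mirror the description; not the exact code of Figure 12):

```systemverilog
typedef enum logic [1:0] {Zero, One, Two} state_t;
state_t ongoing_state;

always_ff @(posedge clk or negedge rst_n) begin
  if (!rst_n)
    ongoing_state <= Zero;  // reached regardless of ongoing_state: a reset state
  else
    case (ongoing_state)
      Zero:    ongoing_state <= One;
      One:     ongoing_state <= Two;
      default: ongoing_state <= Zero;
    endcase
end
```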

Figure 13 FSM coverage is shown without set_fsm_arc_scoring command. Source: eInfochips

Syntax:

set_fsm_reset_scoring

In the design, there are two FSMs defined—fsm_design_one and fsm_design_two—and we are enabling the FSM arc and reset state and transition coverage for fsm_design_two only. If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are not provided in the CCF, the FSM arc, FSM reset state and transition coverage are not enabled, as shown in Figure 13.

If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in the CCF, as shown in Figure 14, then the FSM arc, the FSM reset state, and the transition coverage are enabled as shown in Figure 15.

Figure 14 The set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in CCF. Source: eInfochips

Figure 15 FSM coverage is shown with set_fsm_arc_scoring and set_fsm_reset_scoring commands. Source: eInfochips

If the design contains FSMs, enable the set_fsm_arc_scoring and set_fsm_reset_scoring commands in the CCF to ensure that the FSMs are exercised thoroughly and verified with a coverage-driven approach.

Efficient coverage closure

Efficient coverage closure is essential for ensuring thorough verification of complex SoC/IP designs. This paper builds on prior work by introducing Cadence IMC commands that automate key aspects of coverage management, significantly reducing manual effort.

The use of select_coverage and deselect_coverage enables precise control over module and covergroup coverage, while set_com intelligently excludes constant objects, improving the coverage accuracy. Furthermore, set_fsm_arc_scoring and set_fsm_reset_scoring enhance the FSM verification, ensuring that all state transitions and reset conditions are thoroughly exercised.

By adopting these automation-driven techniques, verification teams can streamline the coverage closure process, enhance efficiency, and maintain high verification quality, improving productivity in modern SoC/IP development.

Rohan Zala, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.

Khushbu Nakum, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and sub-system verification for NoC design.

Jaini Patel, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and SoC verification for signal processing design.

Dhruvesh Bhingradia, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.


Pragmatic appoints John Quigley as executive VP of Engineering

Semiconductor today - Mon, 06/30/2025 - 21:35
Flexible integrated circuit (FlexIC) designer and manufacturer Pragmatic Semiconductor Ltd of Cambridge, UK has appointed John Quigley as executive VP (EVP) of engineering with responsibility for technology development, IC design and applications engineering...

CSconnected names first recipients for £1m Supply Chain Development Programme

Semiconductor today - Mon, 06/30/2025 - 21:24
The South Wales-based compound semiconductor cluster CSconnected, in partnership with Cardiff Capital Region (CCR), has announced the first four successful applicants to its £1m Supply Chain Development Programme, aimed at strengthening and scaling the compound semiconductor supply chain in South Wales...

Meeting at the DPM with TV presenter Anastasia Krasnytska

Новини - Mon, 06/30/2025 - 16:43

A truism of life: in childhood, meetings with well-known creative personalities are especially memorable. The nearly two-hour lesson-dialogue held on June 7 by Anastasia Krasnytska, presenter on the "Київ24" TV channel, lecturer at the Kyiv National University of Culture and Arts (KNUKiM), and journalist, for students of the media school from Vasylkiv (Kyiv region) and members of the information and creative agency "Юн-прес" (Kyiv Palace of Children and Youth), was dedicated to celebrating Journalist's Day. Ms. Krasnytska has 15 years of professional experience in television (more than four thousand hours of live TV broadcasts).

Power Tips #142: A comparison study on a floating voltage tracking power supply for ATE

EDN Network - Mon, 06/30/2025 - 15:05

In order to test multiple ICs simultaneously with different test voltages and currents, semiconductor automatic test equipment (ATE) uses multiple source measurement units (SMUs). Each SMU requires its own independent floating voltage tracking power supply to ensure clean measurements.

Figure 1 shows the basic structure of the SMU power supply. The voltage tracking power supplies need to supply the power amplifiers with a wide voltage range (±15 V to ±50 V) and a constant power capability.

Figure 1 A simplified power-supply block diagram in an ATE. Source: Texas Instruments

Figure 2 illustrates the maximum steady-state voltage and current that the SMU requires in red and the pulsed maximums in blue.

Figure 2 An example voltage-current profile for a voltage tracking power supply. Source: Texas Instruments

The ICs under test require a low-noise power supply with minimal power loss. In order to manage the power dissipation in a linear power device and deliver constant power under the conditions shown in Figure 2, it is required that the power supply be able to generate a pulsating output with high instantaneous power.

In addition to power dissipation considerations, it is essential that the power supply has a sufficient efficiency and thermal management to accommodate as many test channels as possible.

Four topologies are studied and compared to see which one best meets the voltage tracking power supply requirements. Table 1 lists the electrical and mechanical specifications for the power supply. The four topologies under consideration are: hard-switching full bridge (HSFB), full-bridge inductor-inductor-capacitor (FB-LLC) resonant converter, dual active half bridge (DAHB), and a two-stage approach composed of a four-switch buck-boost (4sw-BB) plus half-bridge LLC resonant converter (HB-LLC).

Parameter     Minimum   Maximum
Vin           15V       45V
Vout          ±15V      ±45V
Iout          0A        ±2.0A
Pout,pulse    N/A       150W
Height        N/A       4mm
Width         N/A       14mm
Length        N/A       45mm
PCB layers    N/A       18

Table 1 Electrical and mechanical SMU requirements. Source: Texas Instruments

Topology comparison

Figure 3 shows the schematic for each of the four power supplies.

Figure 3 The four topologies evaluated to see which one best meets the voltage tracking power supply requirements listed in Table 1. Source: Texas Instruments

Each topology was evaluated on two essential requirements: small size and a minimal thermal footprint. Efficiency is only important insofar as heat management is concerned.

Table 2 summarizes the potential benefits and challenges of each topology. In addition to size, the maximum height constraint necessitates a printed circuit board (PCB)-based transformer design.

Topology: HSFB
Benefits:
  • Single power conversion stage.
  • Simple, well-known control.
Challenges:
  • Hard switching will limit the operating frequency.
  • A wide input and output range will be difficult for a single stage.

Topology: FB-LLC
Benefits:
  • Single power conversion stage.
  • Capable of high switching frequency because of zero voltage switching (ZVS).
Challenges:
  • A wide input and output range will be difficult for a single stage.
  • Low Lm may result in low efficiency because of high root-mean-square (RMS) currents.

Topology: DAHB
Benefits:
  • Single power conversion stage.
  • Capable of high switching frequency because of ZVS.
Challenges:
  • A wide input and output range will be difficult for a single stage.
  • Complex control is required to deliver the required power and maintain ZVS.

Topology: Two-stage
Benefits:
  • Optimized preregulator for power delivery over a wide range.
  • Optimizing the LLC for a single operating frequency makes the resonant tank design straightforward.
  • Heat is spread out over a larger area.
Challenges:
  • Two stages will increase the required space.
Table 2 The benefits and challenges of the four different SMU power supply topologies. Source: Texas Instruments

In order to understand the size implications for the HSFB, it is necessary to start out by examining the structure of the transformer. Equation 1 calculates the turns ratio for the HSFB as:
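The equation image did not survive conversion; a standard full-bridge turns-ratio relation, offered here as an assumption about the intended Equation 1 rather than a reconstruction of it, is:

```latex
\frac{N_s}{N_p} = \frac{V_{out,max}}{V_{in,min} \cdot D_{max}}
```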

Using the requirements listed in Table 1 gives a result of . Because a practical design will require a PCB with no more than 18 layers, the maximum required primary turns on a center-tapped design is 2:8:8. With this information, you can use Equation 2 to estimate the center leg core diameter:

Hard switching losses in the FETs will keep the frequency no higher than 500 kHz, resulting in a 12 mm diameter of the center leg. The resulting core will be at least twice this size. The end result is that the HSFB solution is just too large for any serious practical consideration.

The single-stage FB-LLC enables a higher switching frequency by solving the hard-switching concerns found in the HSFB. However, the broad input and output voltage range will require a small magnetizing inductance. The best design identified used a turns ratio of 4:5, Lm = 2 µH, Lr = 1 µH, and fr = 800 kHz. This design addresses the issues with the HSFB by incorporating more primary turns, achieving a high operating frequency for minimal size, and requiring only 14 layers. However, the design suffers from several operating points that result in ZVS loss and an inability to generate the necessary output voltage under pulsed load conditions.

Figure 4 shows the equation and plots of the maximum gain of the system. Supporting the requirements outlined in Table 1 requires a gain of at least 3. Figure 4 shows that this is only possible by drastically decreasing one or more of Lr, Lm, or fr. Decreasing Lr will result in a loss of ZVS from the rapid change in the inductor current. Reducing fr will drive up the size of the transformer and the required primary turns. Decreasing Lm will significantly increase losses from additional circulating current. Given these factors, the single-stage FB-LLC is also not an option.

Figure 4 Maximum fundamental harmonic approximation (FHA) gain plots. Source: Texas Instruments

DAHB

The DAHB [1] is an interesting option that also attempts to solve the hard-switching concerns. One area of concern is the requirement to have active control of the secondary FETs. This kind of control will require additional circuitry to translate the control across the isolation boundary. Equation 3 predicts the resulting power delivery capability of the DAHB:

Table 3 lists the results for the full requirements outlined in Table 1. Notice that there are several problematic conditions, most notably one condition where the required peak current is 80 A. The FETs used in the design cannot accommodate this current.

Table 3 DAHB operating points with several problematic conditions that cannot be designed. Source: Texas Instruments

The two-stage approach pushes the voltage regulation problem to the 4sw-BB and operates the HB-LLC at a fixed frequency at resonance, which allows the HB-LLC to run at high frequency and more easily achieve ZVS under all conditions. The obvious downside of this approach is that it uses two power stages instead of one. However, the reduced currents in the HB-LLC and its ability to run at higher frequencies enable you to minimize the size of the transformer.

Table 4 summarizes the comparison between the four topologies, highlighting the reasons for selecting the two-stage approach. References [2] and [3] describe some essential control parameters used for the buck-boost and LLC.

Topology: HSFB
  • Hard switching losses keep the switching frequency low.
  • Large secondary turns and low operating frequency result in a large magnetic core.

Topology: FB-LLC
  • Parasitic capacitance requires a larger resonant inductor.
  • Gain requires a smaller resonant inductor.
  • The design cannot provide the required voltages.

Topology: DAHB
  • Complex multimode control.
  • Active secondary-side FET control.
  • Large RMS currents.

Topology: Two-stage
  • Including a preregulator optimizes power delivery over the wide range.
  • The LLC can be optimized for a single operating frequency.
  • Heat is spread out over a larger thermal footprint.

Table 4 Comparison between the four different topologies, highlighting the reasons for selecting the two-stage approach. Source: Texas Instruments

Test results

Based on the comparison results, I built a high-power-density (14 mm by 45 mm) 4sw-BB plus HB-LLC prototype. Figure 5 shows an image of the hardware prototype of the final design that fits in the space outlined by Table 1.

Figure 5 The top-side layout of the high-power density 4sw-BB + HB-LLC test board. Source: Texas Instruments

Figure 6 shows both efficiency and thermal performance of the LLC converter.

Figure 6 The LLC efficiency curve and a thermal scan of the LLC converter. Source: Texas Instruments

Two-stage approach

After considering four topologies to meet ATE SMU requirements, the two-stage approach with the four-switch buck boost and fixed-frequency LLC was the smallest overall solution capable of meeting the system requirements.

Brent McDonald works as a system engineer for the Texas Instruments Power Supply Design Services team, where he creates reference designs for a variety of high-power applications. Brent received a bachelor’s degree in electrical engineering from the University of Wisconsin-Milwaukee, and a master’s degree, also in electrical engineering, from the University of Colorado Boulder.


References

  1. Laturkar, N. Deshmukh, and S. Anand, “Dual Active Half Bridge Converter with Integrated Active Power Decoupling for On-Board EV Charger,” 2022 IEEE International Conference on Power Electronics, Smart Grid, and Renewable Energy (PESGRE), Trivandrum, India, 2022, pp. 1-6, doi: 10.1109/PESGRE52268.2022.9715900.
  2. B. McDonald and F. Wang, “LLC performance enhancements with frequency and phase shift modulation control,” 2014 IEEE Applied Power Electronics Conference and Exposition (APEC), Fort Worth, TX, USA, 2014, pp. 2036-2040, doi: 10.1109/APEC.2014.6803586.
  3. B. Sun, “Multimode control for a four-switch buck-boost converter,” Texas Instruments Analog Design Journal, literature No. SLYT765, 1Q 2019.


US-based GlobalFoundries investing extra $3bn for R&D on silicon photonics, advanced packaging and GaN

Semiconductor today - Mon, 06/30/2025 - 15:02
GlobalFoundries of Malta, NY, (GF, the only US-based pure-play foundry with a global manufacturing footprint including facilities in the USA, Europe and Singapore) plans to invest another $3bn in its expansion of semiconductor manufacturing and advanced packaging capabilities across its facilities in New York and Vermont...

III-V Epi brings independent, epi manufacturing expertise to Glasgow’s Critical Technologies Accelerator program

Semiconductor today - Mon, 06/30/2025 - 13:07
III–V Epi Ltd of Glasgow, Scotland, UK — which provides a molecular beam epitaxy (MBE) and metal-organic chemical vapor deposition (MOCVD) service for custom compound semiconductor wafer design, manufacturing, test and characterization — says that it is bringing crucial, independent, epitaxial manufacturing expertise to the University of Glasgow’s Critical Technologies Accelerator (CTA) program...

UIUC reveals ‘efficiency cliff’ when LEDs are scaled to submicron dimensions

Semiconductor today - Mon, 06/30/2025 - 11:15
Researchers at the University of Illinois Urbana-Champaign (UIUC) in the USA have fabricated blue light-emitting diodes (LEDs) down to an unprecedented 250nm in size, a critical step for next-generation technologies like ultra-high-resolution displays and advanced optical communication. However, their study reveals a significant challenge: a sharp ‘efficiency cliff’ when these LEDs are scaled to submicron dimensions...

Accelerating time-to-market as cars become software defined

EDN Network - Mon, 06/30/2025 - 08:41

Automakers have always raced to get the latest models to market. The shift to software-defined vehicles (SDVs) has turned that race into a sprint. It’s not a simple shift, however.

Building cars that can evolve constantly demands an overhaul of development practices, tools, and even team culture. From globally distributed engineering teams and cloud-based workflows to virtual testing and continuous integration pipelines, automakers are adopting new approaches to shrink development timelines without compromising safety or quality. These shifts are enabling the industry to move faster.

In older vehicles, after a car leaves the factory, code is rarely changed over its lifetime. In contrast, SDVs are designed for continuous improvement. Manufacturers can push over-the-air (OTA) updates to add features, fix bugs, or enhance performance throughout a car’s life.

However, delivering continuous upgrades requires development cycles to speed up dramatically. Instead of a process measured in years for the next model refresh, software updates often need to be developed, tested, and rolled out in a matter of months—sometimes less. The cadence of innovation in automotive is shifting, and time-to-market for each new enhancement has become paramount.

Figure 1 Software-defined vehicles (SDVs) are designed for continuous improvement. Source: NXP

This new pace is a profound change for automakers, and calls for a far more agile, software-centric mindset. Companies that successfully shrink their cycle times can deliver constant improvements; those that cannot risk their vehicles quickly becoming outdated.

Distributed teams, unified development

Managing the massive, distributed development teams behind SDVs is another challenge when it comes to speeding up software delivery. Where a car’s software was previously handled by small in-house teams, today it takes hundreds or thousands of engineers spread around the globe.

This international talent pool enables 24-hour development, but it also introduces fragmentation. Different groups may use different tools or processes, and not everyone can access the same test hardware. Without a coordinated approach, a large, distributed team can prove a bottleneck rather than a benefit.

Automotive manufacturers are tackling the issue by uniting teams in cloud-based development environments. Instead of each engineer working in isolation, everyone accesses a standardized virtual workspace in the cloud pre-configured with every necessary tool. This ensures code runs the same for each developer, eliminating the “works on my machine” syndrome.

It also means updates to the toolchain or libraries can be rolled out to all engineers at once. Onboarding new team members becomes much faster as well—rather than spending days installing software, a new hire can start coding within hours by logging into the cloud environment. With a shared codebase and common infrastructure, a dispersed team can collaborate as one, keeping productivity high and projects on schedule.

Virtual testing: From months to minutes

Rethinking how and when software testing happens is critical to the acceleration of SDV development. In the past, software testing depended heavily on physical prototypes—electronic control units (ECUs) or test vehicles that developers needed to use in person, often creating idle time and long delays that are unacceptable in a fast-moving SDV project. The solution is to virtualize as much of the testing as possible.

Virtual prototypes of automotive hardware enable software development to begin long before physical parts are available. If new hardware won’t arrive until next year, engineers can work with a digital twin today. By the time actual prototypes come in, much of the software will already be validated in simulation, potentially accelerating time to market by months.

Figure 2 Virtual prototypes can be developed in parallel to hardware development. Source: NXP

Even when real hardware testing is required, remote access is speeding things up. Many companies now host “hardware-in-the-cloud” labs—racks of ECUs and other devices accessible online. Instead of waiting their turn or traveling to a test site, developers anywhere can deploy code to these remote rigs and see the results in real time. This approach compresses the validation cycle, catching issues earlier and proving out new features in weeks rather than months.

Embracing CI/CD for rapid releases

Accelerating time-to-market also requires the software release process itself to be reengineered. Modern development teams are increasingly adopting continuous integration and continuous delivery (CI/CD) pipelines to keep code flowing smoothly from development to deployment. In a CI/CD approach, contributions from all developers are merged and tested continuously rather than in big infrequent batches.

Automated build and test systems catch integration bugs or regressions much sooner in the development process, making fixes far easier to handle. This reduces the last-minute scrambles that often plagued traditional, slower development cycles. With a robust CI/CD pipeline, software is always in a deployable state.
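The gating idea behind such a pipeline can be sketched in a few lines. The stage names and `make` commands below are hypothetical placeholders, not any automaker's actual toolchain: a change is only considered deployable when every stage passes.

```python
import subprocess

# Hypothetical pipeline stages; a real project would define these in
# its CI configuration rather than in code.
STAGES = [
    ("build", ["make", "all"]),
    ("unit-tests", ["make", "test"]),
    ("static-analysis", ["make", "lint"]),
]

def run_pipeline(stages, runner=subprocess.run):
    """Run stages in order, stopping at the first failure.

    Returns (deployable, failed_stage): the change is deployable
    only when every stage exits with status 0.
    """
    for name, cmd in stages:
        if runner(cmd).returncode != 0:
            return False, name
    return True, None
```

Injecting `runner` keeps the gate logic testable without invoking real build tools, mirroring how CI systems separate orchestration from execution.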

Of course, moving at such speed in a safety-critical industry requires care. CI/CD’s built-in rigor ensures each change passes all quality and safety checks before it ever reaches a car.

Driving into the future, faster

The push to accelerate vehicle software development is reshaping automotive engineering. Building cars that are defined by software forces automakers to adopt the tools, practices, and culture of software companies. Investments in cloud-based development environments, virtual testing frameworks, and CI/CD pipelines are quickly becoming the norm for any automaker that wants to stay competitive.

Ultimately, as cars increasingly resemble computers on wheels, time-to-market for software-driven features has become a make-or-break factor. The race is on for automakers to deliver new capabilities faster than ever, without hitting the brakes on safety or quality.

Those who successfully integrate distributed teams with cloud-first workflows, leverage virtual testing, and adopt continuous delivery practices will be perfectly placed to win over customers with vehicles that keep improving over time.

Curt Hillier is technical director for automotive solutions at NXP Semiconductors.

Razvan Ionescu is automotive software and tools architect at NXP Semiconductors.

Related Content

The post Accelerating time-to-market as cars become software defined appeared first on EDN.

KPI PhD program alumna Oksana Hryhorieva is gender affairs advisor to the command of the Armed Forces of Ukraine

News - Mon, 06/30/2025 - 02:03
KPI PhD program alumna Oksana Hryhorieva is gender affairs advisor to the command of the Armed Forces of Ukraine

Recently, word spread on the Internet that the Armed Forces of Ukraine now have the position of gender affairs advisor, and that Oksana Volodymyrivna Hryhorieva has been appointed to it.

It ain't dumb if it works...

Reddit:Electronics - Mon, 06/30/2025 - 01:42
It ain't dumb if it works...

Added a "slightly" bigger capacitor (the red thing) because the old one was ripped off. The radio works again now.

submitted by /u/Darcy_Wu_NR1

I made my first pair of Bluetooth speakers.

Reddit:Electronics - Sun, 06/29/2025 - 10:42
I made my first pair of Bluetooth speakers.

You can’t hear it, but it sounds beautiful 😍 AI helped with some issues. Learned A LOT. Gemini told me to add a 1000 µF cap to the Bluetooth module because it kept disconnecting at high power; it worked, and I feel like it sounds better now. I’m going to 3D print a housing and mount them under my desk as conduction speakers. Total project cost was $9: a $1 Bluetooth board, a $2 amp, and $6 for two 3 W, 4 Ω speaker drivers repurposed from a random speaker off eBay.
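The fix described here can be sanity-checked with the ideal-capacitor droop formula dV = I·dt/C: a bigger bulk capacitor rides through a current transient with less supply sag, which is plausibly why the module stopped resetting. The transient current and duration below are assumed illustration numbers, not measurements from this build.

```python
def droop(current_a, duration_s, cap_f):
    """Voltage droop across a bulk capacitor supplying a load
    transient: dV = I * dt / C (ideal capacitor, no ESR)."""
    return current_a * duration_s / cap_f

# Assumed numbers for illustration: a 0.5 A transient lasting 1 ms.
transient_a, transient_s = 0.5, 1e-3

small = droop(transient_a, transient_s, 100e-6)   # 100 uF  -> 5.0 V of sag
big   = droop(transient_a, transient_s, 1000e-6)  # 1000 uF -> 0.5 V of sag
```

A ten-fold larger capacitor gives a ten-fold smaller droop for the same transient, keeping the module above its brown-out threshold.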

submitted by /u/Fit_Antelope_1045

You've heard of a clap switch, but what about a whistle switch!?

Reddit:Electronics - Sun, 06/29/2025 - 09:14
You've heard of a clap switch, but what about a whistle switch!?

Powered by a $0.10 RISC-V MCU, we can do surprisingly accurate whistle detection! Using a timer to make sure whistle sequences are completed within a set time frame, we can do simple whistle-pattern recognition for a switch. Great quick project!
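The timer-based check described here can be sketched as follows. The gap threshold and required event count are assumed values for illustration, not the project's actual parameters:

```python
def match_pattern(event_times, max_gap_s=0.6, required=3):
    """Return True once `required` whistle detections occur with at
    most `max_gap_s` seconds between consecutive detections.

    `event_times` are increasing timestamps, standing in for the MCU
    timer readings that bound each whistle sequence.
    """
    if not event_times:
        return False
    run = 1
    for prev, cur in zip(event_times, event_times[1:]):
        # Extend the run if the gap fits the window, else start over.
        run = run + 1 if (cur - prev) <= max_gap_s else 1
        if run >= required:
            return True  # pattern completed -> toggle the switch
    return run >= required
```

Resetting the run counter on any over-long gap is what rejects stray single whistles while still accepting a deliberate rapid sequence.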

submitted by /u/Separate-Choice

DIY USB to FM Transmitter board

Reddit:Electronics - Sun, 06/29/2025 - 03:31
DIY USB to FM Transmitter board

I designed a simple board that lets you transmit audio directly from your computer onto the commercial FM band. No code, no drivers, just plug and play.

This was a fun personal project and not meant to be an actual product (you can find similar boards on AliExpress for around $5). It’s also my first ever SMD assembly, and it was pretty fun working with SMD components (SSOP was a bit difficult).

The board uses a TI PCM2704 chip to stream audio over USB from the host device. That audio is then passed to a KT0803 FM transmitter chip, which broadcasts it over FM radio. I added I²C breakout pins, which can be used to reprogram the KT0803's settings, such as transmit frequency, mode, and calibration parameters.
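As a sketch of how such an I²C frequency setting maps onto registers: the KT0803 family tunes in 50 kHz channel steps, so the channel word is just the frequency divided by 0.05 MHz. The register split below follows commonly published KT0803L application notes and is an assumption to verify against the datasheet before use.

```python
def kt0803_freq_regs(freq_mhz):
    """Compute the channel word and (assumed) register bytes for a
    KT0803-style FM transmitter tuning in 50 kHz steps.

    Assumed layout, per public KT0803L notes: CHSEL[8:1] in reg 0x00,
    CHSEL[11:9] in reg 0x01 bits [2:0], CHSEL[0] in reg 0x02 bit 7.
    """
    chsel = round(freq_mhz / 0.05)   # 50 kHz channel steps
    reg00 = (chsel >> 1) & 0xFF      # CHSEL[8:1]
    reg01 = (chsel >> 9) & 0x07      # CHSEL[11:9]
    reg02 = (chsel & 0x01) << 7      # CHSEL[0]
    return chsel, reg00, reg01, reg02
```

Writing these three bytes over the breakout's I²C pins (with the chip's other bits preserved) would retune the carrier, e.g. 98.0 MHz gives channel word 1960.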

Github page for the project (Includes the demo with sound) - https://github.com/Outdatedcandy92/FM-Transmitter

submitted by /u/FirefighterDull7183

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 06/28/2025 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Found this in my old electronics trinket box.

Reddit:Electronics - Sat, 06/28/2025 - 06:41
Found this in my old electronics trinket box.

I think I salvaged it from an old VCD player. Pretty cool.

submitted by /u/BobBolzac

HP 412A Photoconductive Chopper

Reddit:Electronics - Sat, 06/28/2025 - 06:35
HP 412A Photoconductive Chopper

Some background here https://antiqueradios.com/forums/viewtopic.php?t=306396

"Prior to the introduction of integrated op amps, it was extremely difficult to build stable DC amplifiers. By passing the signal through a chopper, the DC voltage can be passed through a feedback stabilized AC amplifier and then converted back to DC afterward. Chopper stabilized DC amplifiers--using electromechanical devices--have been around since the late 1940s at least."

"HP's photoconductive choppers eliminated the inevitable problems with contact adjustment and wear in the electromechanical ones, but they required higher input voltages to overcome the "on" resistance of the photocells."

Enjoy!
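The principle quoted above is easy to demonstrate numerically: chop a tiny DC input into a square wave, remove the mean (AC coupling), amplify with a stable AC gain stage, then synchronously demodulate and average. The gain and sample count below are illustrative, not HP 412A specifications.

```python
def chopper_amplifier(dc_in, gain=1000.0, samples=200):
    """Simulate a chopper-stabilized DC amplifier.

    A square-wave chopper alternately passes the input and ground,
    AC coupling removes the mean, a stable AC stage applies `gain`,
    and synchronous demodulation plus averaging recovers a DC output
    of gain * dc_in / 2.
    """
    sq = [1 if n % 2 == 0 else -1 for n in range(samples)]  # chop reference
    chopped = [dc_in * (s + 1) / 2 for s in sq]             # dc_in, 0, dc_in, 0, ...
    mean = sum(chopped) / samples                           # removed by AC coupling
    amplified = [gain * (c - mean) for c in chopped]        # AC gain stage
    demod = [v * s for v, s in zip(amplified, sq)]          # synchronous detector
    return sum(demod) / samples                             # low-pass / averaging
```

A 1 mV DC input comes out as a steady 0.5 V after a gain of 1000 (gain/2 overall), with the DC offset and drift of the amplifier itself never entering the signal path — exactly the benefit the chopper topology was built for.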

submitted by /u/99posse
