Microelectronics world news

Silicon Austria Labs and TU Graz launch joint Power Electronics Research Laboratory

Semiconductor today - Fri, 03/21/2025 - 19:04
Silicon Austria Labs (SAL) and Graz University of Technology (TU Graz) have opened their new joint Power Electronics Research Laboratory (PERL), led by Roberto Petrella, staff scientist in Power Electronics at SAL and professor at the University of Udine in Italy, and Michael Hartmann, professor and head of the Electric Drives and Power Electronic Systems Institute (EALS) at TU Graz. Researchers from both SAL and TU Graz will work together, including four dedicated PhD students...

USPTO gives ruling on EPC patent disputed by Innoscience

Semiconductor today - Fri, 03/21/2025 - 16:34
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — says that the United States Patent and Trademark Office (USPTO) has strengthened its US Patent No. 8,350,294 by adding two new patent claims that are fundamental to commercial enhancement-mode GaN devices. However, the USPTO has also cancelled two claims that were the basis for the US International Trade Commission (ITC) decision that China-based Innoscience (Suzhou) Technology Holding Co Ltd infringed the patent. EPC will appeal the cancellation of these two claims...

Breaking Boundaries with Photonic Chips and Optical Computing

ELE Times - Fri, 03/21/2025 - 13:50
Introduction: The Shift from Electronics to Photonics

As traditional semiconductor-based computing approaches its physical and energy efficiency limits, photonic chips and optical computing have emerged as transformative solutions. By harnessing the speed and parallelism of light, these technologies offer significant advantages over conventional electronics in high-performance computing (HPC), artificial intelligence (AI), and data centers. Optical computing has the potential to revolutionize the way information is processed, enabling faster, more energy-efficient computation with lower latency.

The Fundamentals of Photonic Chips

Photonic chips leverage integrated photonics to manipulate light for computing, communication, and sensing applications. Unlike traditional chips that use electrons as the primary carriers of information, photonic chips use photons, which can travel at the speed of light with minimal energy loss. Key components of photonic chips include:

  • Waveguides: Optical channels that guide light through a photonic circuit, analogous to electrical traces in traditional chips.
  • Modulators: Convert electrical signals into optical signals by modulating light properties such as intensity or phase.
  • Detectors: Convert optical signals back into electrical signals for further processing.
  • Resonators and Interferometers: Facilitate advanced signal processing functions such as filtering, multiplexing, and logic operations.
  • Photonic Crystals: Control the flow of light by creating periodic dielectric structures, enhancing optical confinement and manipulation.

Optical Computing: A Seismic Change in Processing

Optical computing aims to replace or supplement electronic computation with light-based logic operations. This transition offers several key advantages:

  1. Unparalleled Speed: Photons travel at the speed of light, reducing signal delay and increasing processing throughput.
  2. Low Energy Consumption: Unlike electrical circuits that suffer from resistive heating, photonic systems dissipate minimal heat, enhancing energy efficiency.
  3. Massive Parallelism: Optical systems can process multiple data streams simultaneously, significantly improving computational throughput.
  4. Reduced Signal Crosstalk: Optical signals do not experience the same interference as electrical signals, reducing errors and noise in computation.

Core Technologies Enabling Photonic Computing

1. Silicon Photonics: Bridging Electronics and Photonics

Silicon photonics integrates optical components onto a silicon platform, enabling compatibility with existing semiconductor fabrication techniques. Key innovations in silicon photonics include:

  • On-chip Optical Interconnects: Replace traditional copper interconnects with optical waveguides to reduce power consumption and signal delay.
  • Optical RAM and Memory: Photonic memory elements store and retrieve data using light, enhancing data transfer speeds.
  • Electro-Optical Modulators: Convert electronic signals to optical signals efficiently, allowing seamless integration into existing computing architectures.

2. Optical Logic Gates and Boolean Computation

Optical computing relies on photonic logic gates to perform fundamental computations. These gates operate using:

  • Nonlinear Optical Effects: Enable all-optical switching without electronic intermediaries.
  • Mach-Zehnder Interferometers (MZI): Implement XOR, AND, and OR logic functions using light phase interference.
  • Optical Bistability: Maintains state information in optical latches, paving the way for optical flip-flops and memory elements.
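As a rough illustration of how an MZI realizes logic through phase interference, here is a minimal, idealized Python sketch: a lossless two-port model (not any real device) in which the two output ports split the input power as cos² and sin² of half the arm phase difference, so encoding bits as 0/π phase shifts yields XOR on the cross port.

```python
import math

def mzi_ports(delta_phi):
    """Ideal, lossless MZI: power split between the bar and cross ports
    as a function of the phase difference between its two arms."""
    bar = math.cos(delta_phi / 2) ** 2
    cross = math.sin(delta_phi / 2) ** 2
    return bar, cross  # the two port powers sum to 1

def optical_xor(a, b):
    """Encode logical 0/1 as 0/pi phase shifts; the cross-port
    intensity then realizes XOR (dark port when inputs match)."""
    delta_phi = (a - b) * math.pi
    _, cross = mzi_ports(delta_phi)
    return round(cross)

print([optical_xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The same phase-interference mechanism, with different port choices and bias phases, underlies the AND/OR implementations mentioned above.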

3. Neuromorphic Optical Computing for AI Acceleration

With the growing demand for AI processing, photonic neural networks offer an alternative to traditional GPUs and TPUs. Optical deep learning accelerators employ:

  • Matrix Multiplication with Light: Perform multiply-accumulate operations at light speed using photonic interference.
  • Optical Tensor Processing Units (TPUs): Enhance AI inference by leveraging photonic components for ultra-fast computation.
  • Wavelength-Division Multiplexing (WDM): Enables parallel processing by encoding multiple data streams onto different wavelengths of light.
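The matrix-multiplication-with-light idea can be sketched as incoherent power summation: each wavelength carries one input value as optical power, each weight is a transmission factor in [0, 1], and a photodetector sums the attenuated powers into one output element. This is a conceptual model only; real accelerators typically handle signed weights with differential detection, which is omitted here.

```python
def photonic_matvec(transmissions, input_powers):
    """Conceptual photonic multiply-accumulate: one output element per
    detector, formed by summing per-wavelength power * transmission."""
    out = []
    for row in transmissions:
        # Transmissions must be physically realizable attenuations.
        assert all(0.0 <= t <= 1.0 for t in row)
        out.append(sum(t * p for t, p in zip(row, input_powers)))
    return out

W = [[0.2, 0.8, 0.5],   # weight matrix as transmission factors
     [1.0, 0.0, 0.3]]
x = [1.0, 2.0, 4.0]     # input vector as per-wavelength powers
print([round(v, 6) for v in photonic_matvec(W, x)])  # [3.8, 2.2]
```

Because every wavelength propagates through the mesh simultaneously, the multiply-accumulate happens in one optical pass, which is the parallelism WDM provides.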

4. Quantum Photonics: The Future of Secure Computation

Quantum computing benefits immensely from photonics due to the inherent properties of quantum light. Advancements in quantum photonic processors include:

  • Single-Photon Sources and Detectors: Essential for quantum information processing and cryptographic applications.
  • Quantum Key Distribution (QKD): Enables ultra-secure communication leveraging the principles of quantum entanglement.
  • Optical Quantum Logic Gates: Facilitate complex quantum computations with minimal decoherence.

Industrial Applications and Use Cases

1. Data Centers and High-Performance Computing

Modern data centers face thermal constraints and power limitations due to electronic interconnects. Photonic interconnects dramatically reduce power consumption and increase bandwidth, making them an ideal solution for high-speed data transmission between servers and storage units.

2. Artificial Intelligence and Machine Learning Acceleration

AI workloads rely on extensive matrix operations, which photonic computing can execute orders of magnitude faster than traditional electronics. Companies like Lightmatter and Lightelligence are pioneering photonic AI accelerators to enhance deep learning performance while reducing energy costs.

3. Telecommunications and Optical Networks

Fiber-optic networks already leverage photonics for data transmission, but photonic computing extends these advantages to real-time processing. Photonic switches enable ultra-fast data routing, improving the efficiency of 5G and future 6G networks.

4. Healthcare and Biophotonics

Optical computing is revolutionizing biomedical imaging and diagnostics. Photonic chips enable high-resolution imaging techniques such as optical coherence tomography (OCT) and bio-sensing applications, enhancing early disease detection.

5. Defense and Aerospace

The military and aerospace industries require ultra-fast, secure processing for signal intelligence, radar systems, and cryptographic applications. Optical computing’s speed and resistance to electromagnetic interference make it a critical enabler for next-generation defense systems.


Challenges and Future Roadmap

1. Fabrication Complexity and Scalability

While photonic chips leverage semiconductor manufacturing techniques, integrating large-scale optical circuits remains a challenge. Standardizing fabrication methods and developing CMOS-compatible photonic components are essential for commercial scalability.

2. Hybrid Photonic-Electronic Architectures

Despite the advantages of photonic computing, hybrid architectures that integrate both electronic and optical components are likely to dominate in the near term. Developing efficient electro-optic interfaces remains a key research focus.

3. Software and Algorithm Development

Current software is optimized for electronic computation, requiring a shift in programming paradigms for photonic systems. Developing photonic-aware compilers and simulation tools will accelerate adoption.

4. Energy Efficiency and Power Consumption

While photonic computing reduces heat dissipation, the challenge lies in optimizing light generation and detection components to minimize power consumption further.

Conclusion: The Dawn of the Photonic Computing Era

Photonic chips and optical computing represent a paradigm shift in computation, offering unparalleled speed, efficiency, and scalability. As silicon photonics, quantum optics, and neuromorphic photonic computing continue to advance, the technology is poised to revolutionize AI, data centers, telecommunications, and beyond. Overcoming fabrication, software, and integration challenges will be crucial for realizing the full potential of photonic computing, marking the beginning of a new era in information processing.

The post Breaking Boundaries with Photonic Chips and Optical Computing appeared first on ELE Times.

Design a feedback loop compensator for a flyback converter in four steps

EDN Network - Fri, 03/21/2025 - 02:51

Due to their versatility, ease of design, and low cost, flyback converters have become one of the most widely used topologies in power electronics. The flyback's structure derives from one of the three basic topologies, specifically the buck-boost topology. However, unlike buck-boost converters, flyback topologies allow the output voltage to be electrically isolated from the input power supply. This feature is vital for industrial and consumer applications.

Among the different control methods used to stabilize power converters, the most widely used is peak current mode control, which continuously senses the primary current to provide important protection for the power supply.

Additionally, to obtain a higher design performance, it’s common to regulate the converter with the output that has the highest load using a technique called cross-regulation.

This article aims to show engineers how to correctly design the control loop that stabilizes the flyback converter in order to provide optimal functionality. This process includes minimizing the stationary error, increasing/decreasing the bandwidth as required, and increasing the phase/gain margin as much as possible.

Closed-loop flyback converter block diagram

Before making the necessary calculations for the controller to stabilize the peak current control mode flyback, it’s important to understand the components of the entire closed-loop system: the converter averaged model and the control loop (Figure 1).

Figure 1 Here is how the components look in the entire closed-loop system. Source: Monolithic Power Systems

The design engineer’s main interest is to study the behavior of the converter under load changes. Considering a fixed input voltage (VIN), the open-loop transfer function can be modeled under small perturbations produced in the duty cycle to study the power supply’s dynamic response.

The summarized open-loop system can be modeled with Equation 1:

(1)

Where G is the current-sense gain transformed to voltage, and GC(s) and GCI(s) are the transfer functions of the flyback converter in terms of output voltage and magnetizing current response (respectively) under small perturbations in the duty cycle. GαTS models the ramp compensation that avoids the double-pole oscillation at half the switching frequency.

Flyback converter control design

There are many decisions and tradeoffs involved in designing the flyback converter’s control loop. The following sections of the article will explain the design process step by step. Figure 2 shows the design flow.

Figure 2 The design flow highlights control loop creation step by step. Source: Monolithic Power Systems

Control loop design process and calculations

Step 1: Design inputs

Once the converter’s main parameters have been designed according to the relevant specifications, it’s time to define the parameters as inputs for the control loop design. These parameters include the input and output voltages (VIN and VOUT, respectively), operation mode, switching frequency (fSW), duty cycle, magnetizing inductance (LM), turns ratio (NP:NS), shunt resistor (RSHUNT), and output capacitance (COUT). Table 1 shows a summary of the design inputs for the circuit discussed in this article.

Table 1 Here is a summary of the design inputs required for creating the control loop. Source: Monolithic Power Systems

To design a flyback converter compensator, it's necessary to first obtain all of the main components that make up the converter. Here, the HF500-40 flyback regulator is used to demonstrate the design of a compensator using optocoupler feedback. This device is a fixed-frequency, current-mode regulator with built-in slope compensation. Because the converter works in continuous conduction mode (CCM) at low line input, a double-pole oscillation at half of the switching frequency is produced; the built-in slope compensation dampens this oscillation, making its effect almost null.
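
The CCM statement above can be checked numerically. This is a generic flyback sketch with hypothetical numbers (not the Table 1 design): the converter stays in CCM while half the peak-to-peak magnetizing ripple current is below the average magnetizing current.

```python
def is_ccm(vin, pout, d, lm, fsw, eta=0.9):
    """Flyback CCM check: the magnetizing current stays above zero when
    half its ripple is less than its cycle-average value (eta = assumed
    efficiency used to refer output power to the input)."""
    ripple = vin * d / (lm * fsw)      # peak-to-peak magnetizing ripple (A)
    i_avg = pout / (eta * vin * d)     # average magnetizing current (A)
    return ripple / 2 < i_avg

# Illustrative numbers only (hypothetical, not the article's design):
print(is_ccm(vin=100.0, pout=60.0, d=0.45, lm=500e-6, fsw=65e3))  # True
```

At light load the same check returns False, i.e., the converter slides into discontinuous conduction mode.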

Step 2: Calculate parameters of the open-loop transfer function

It’s vital to calculate the parameters of the open-loop transfer function, as well as the values of all the compensator’s parameters, in order to optimize the converter’s dynamic behavior.

The open-loop transfer function of the peak current control flyback converter (also including the compensation ramp factor) can be estimated with Equation 2:

      (2)

Where D’ is defined by the percentage of time that the secondary diode (or synchronous FET) is active during a switching cycle.

The basic canonical model can be defined with Equation 3:

(3)

Note that the equivalent series resistance (ESR) effect on the output capacitors has been included in the transfer function, as it’s the most significant parasitic effect.

By using Equation 2 and Equation 3, it’s possible to calculate the vital parameters.

The resonant frequency (fO) can be calculated with Equation 4:

              (4)

After inputting the relevant values, fO can be calculated with Equation 5:

(5)

The right-half-plane zero (fRHP) can be estimated with Equation 6:

(6)

The Q factor (Q) can be calculated with Equation 7:

(7)

After inputting the relevant values, Q can be estimated with Equation 8:

(8)

The DC gain (K) can be calculated with Equation 9:

(9)

After inputting the relevant values, K can be estimated with Equation 10:

(10)

The high-frequency zero (fHF) can be calculated with Equation 11:

              (11)

It’s important to note that with current mode control, it’s common to obtain values well below 0.5 for Q. As a result, the second-degree polynomial in the denominator of the transfer function yields two real, negative poles. This differs from voltage-mode control (or from designs with a very large compensation ramp), which results in two complex conjugate poles.

The two real and negative poles can be estimated with Equation 12:

(12)

The new open-loop transfer function can be calculated with Equation 13:

(13)

The cutoff frequency (fC) can be estimated with Equation 14:

(14)
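
The pole-splitting point above (Q well below 0.5 giving two real poles) can be sketched numerically, assuming the standard canonical denominator 1 + s/(Q·ω0) + (s/ω0)². The f0 and Q values below are illustrative, not the results of Equations 5 and 8.

```python
import math

def real_pole_split(f0, q):
    """Factor the canonical denominator 1 + s/(Q*w0) + (s/w0)^2 into two
    real poles; valid when Q < 0.5 (positive discriminant)."""
    assert q < 0.5
    disc = math.sqrt(1.0 - 4.0 * q * q)
    fp1 = f0 / (2.0 * q) * (1.0 - disc)   # low-frequency pole, ~ Q*f0
    fp2 = f0 / (2.0 * q) * (1.0 + disc)   # high-frequency pole, ~ f0/Q
    return fp1, fp2

fp1, fp2 = real_pole_split(f0=1000.0, q=0.1)   # illustrative values
print(round(fp1, 1), round(fp2, 1))  # 101.0 9899.0
```

Note the product of the two poles always equals f0², and for small Q they approach Q·f0 and f0/Q, which is why the open-loop response shows one low-frequency and one high-frequency pole.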

The following sections will explain how the frequency compensator design achieves power supply stability and excellent performance.

Step 3: Frequency compensator design

Once the open-loop transfer function is modeled, it’s necessary to design the frequency compensator such that it achieves the best performance possible. Because the frequency response of the above transfer function has two separate poles—one at a low frequency and one at a high frequency—a simple Type II compensator can be designed. This compensator does not need an additional zero, which is not the case in voltage-control mode because there is a double pole that produces a resonance.

To minimize the steady-state error, it’s necessary to design an inverted-zero (or a pole at the origin) because it produces higher gains at low frequencies. To ensure that the system’s stability is not impacted, the frequency must be at least 10 times lower than the first pole, calculated with Equation 15:

           (15)

Due to the ESR parasitic effect at high frequencies, it’s necessary to design a high-frequency pole to compensate for and remove this effect. The pole can be estimated with Equation 16:

(16)

On the other hand, it's common to modify the cutoff frequency to achieve a higher or lower bandwidth and produce faster or slower dynamic responses, respectively. Once the cutoff frequency is selected (in this case, fC is increased up to 6.5 kHz, or 10% of fSW), the compensator's middle-frequency gain can be calculated with Equation 17:

(17)
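
The Type II shape described in this step can be sketched generically in Python. The mid-band gain, zero, and pole placements below are hypothetical stand-ins (not the Equation 15–17 results): the inverted zero gives integrator-like gain at low frequency, and the high-frequency pole rolls off the ESR region.

```python
import cmath, math

def type2_compensator(f, g_mid, f_zero, f_pole):
    """Generic Type II response: mid-band gain g_mid, inverted zero at
    f_zero (integrator behavior below it), high-frequency pole at f_pole."""
    s = 2j * math.pi * f
    wz = 2.0 * math.pi * f_zero
    wp = 2.0 * math.pi * f_pole
    return g_mid * (1.0 + wz / s) / (1.0 + s / wp)

# Illustrative placement per this step's rules: zero a decade below the
# first plant pole, pole at the ESR zero (hypothetical numbers):
f_p1, f_esr = 200.0, 50e3
g = type2_compensator(1.0, g_mid=2.0, f_zero=f_p1 / 10.0, f_pole=f_esr)
print(abs(g) > 10 * 2.0)   # True: strong boost well below the zero
```

Between the zero and the pole the magnitude flattens to g_mid, which is the middle-frequency gain that Equation 17 sets to hit the target crossover.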

Once the compensator has been designed within the frequency range, calculate the values of the passive components.

Step 4: Design the compensator’s passive components

The most common Type II compensator used for stabilization in current control mode flyback converters with cross-regulation is made up of an optocoupler feedback (Figure 3).

Figure 3 The Type II compensator is built around optocoupler feedback. Source: Monolithic Power Systems

The compensator transfer function based on optocoupler feedback can be estimated with Equation 18:

(18)

The middle-frequency gain is formed in two stages: the optocoupler gain and the adjustable voltage reference compensator gain, calculated with Equation 19:

(19)

It’s important to calculate the maximum resistance to correctly bias the optocoupler. This resistance can be estimated with Equation 20:

(20)

The parameters necessary to calculate RD can be found in the optocoupler and the adjustable voltage reference datasheets. Table 2 shows the typical values for these parameters from the optocoupler.

Table 2 Here are the main optocoupler parameters. Source: Monolithic Power Systems

Table 3 shows the typical values for these parameters from the adjustable voltage reference.

Table 3 The above data shows adjustable voltage reference parameters. Source: Monolithic Power Systems

Once the above parameters have been obtained, RD can be calculated with Equation 21:

(21)

Once the value of R3 is obtained (in this case, R3 is internal to the HF500-40 controller, with a minimum value of 12 kΩ), as well as the values for R1, R2, and RD (where RD = 2 kΩ), RF can be estimated with Equation 22:

(22)

Where GCOMP is the compensator’s middle frequency gain, calculated with Equation (17). GCOMP is used to adjust the power supply’s bandwidth.

Because the inverted zero and high-frequency pole were already calculated, CF and CFB can be calculated with Equation 23 and Equation 24, respectively:

(23)

(24)
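
With the zero and pole frequencies chosen, capacitor values follow from the usual RC relationships. This is a generic sketch with hypothetical values: the assumption that each frequency is set by an RC product with RF is an illustration, since the exact Equations 23 and 24 depend on the Figure 3 topology.

```python
import math

def comp_caps(rf, f_zero, f_pole):
    """Map the designed inverted zero and high-frequency pole to capacitor
    values, assuming each is set by an RC product with RF (an assumption,
    not the article's exact network)."""
    cf = 1.0 / (2.0 * math.pi * rf * f_zero)    # sets the inverted zero
    cfb = 1.0 / (2.0 * math.pi * rf * f_pole)   # sets the HF pole
    return cf, cfb

cf, cfb = comp_caps(rf=10e3, f_zero=20.0, f_pole=50e3)  # hypothetical values
print(f"CF = {cf*1e9:.0f} nF, CFB = {cfb*1e12:.0f} pF")  # CF = 796 nF, CFB = 318 pF
```

In practice the computed values are then snapped to the nearest standard capacitor values, which slightly shifts the realized zero and pole.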

Once the open-loop system and compensator have been designed, the loop gain transfer function can be estimated with Equation 25:

(25)

Equation 25 is based on Equation 13 and Equation 18.

It’s important to calculate the phase and gain margins to ensure the stability of the power supply.

The phase margin can be calculated with Equation 26:

(26)

After inputting the relevant values, the phase margin can be calculated with Equation 27:

(27)

A phase margin exceeding 50° is an important requirement for complying with certain standards.

At the same time, the gain margin can be approximated with Equation 28:

(28)

Equation 29 is derived from Equation 25 at the specified frequency:

(29)

In this scenario, the gain margin is below -10 dB, which is another important parameter to consider, particularly regarding compliance with regulation specifications. If the result is close to 0 dB, some iteration is necessary to decrease it; otherwise, the performance is suboptimal. This iteration should start by decreasing the cutoff frequency.
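
The margin check in Equations 26 through 29 can also be done numerically by sweeping the loop gain. The loop below is a hypothetical stand-in (mid-band gain, one inverted zero, two real poles), not the article's Equation 25.

```python
import cmath, math

def loop_gain(f):
    """Hypothetical loop: mid-band gain 300, inverted zero at 50 Hz,
    real poles at 200 Hz and 20 kHz (illustrative stand-in only)."""
    s = 2j * math.pi * f
    t = 300.0 * (1.0 + 2.0 * math.pi * 50.0 / s)
    t /= (1.0 + s / (2.0 * math.pi * 200.0)) * (1.0 + s / (2.0 * math.pi * 20e3))
    return t

def phase_margin(T, f_lo=1.0, f_hi=1e6, n=20000):
    """Scan log-spaced frequencies for the unity-gain crossover and
    return (crossover frequency in Hz, phase margin in degrees)."""
    for i in range(n + 1):
        f = f_lo * (f_hi / f_lo) ** (i / n)
        if abs(T(f)) <= 1.0:
            return f, 180.0 + math.degrees(cmath.phase(T(f)))
    return None

fc, pm = phase_margin(loop_gain)
print(round(fc), round(pm))
```

A similar scan at the frequency where the phase reaches -180° would give the gain margin; note this two-pole stand-in only approaches -180° asymptotically, whereas the real Equation 25 loop has additional phase lag and therefore a finite gain-margin frequency.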

This complete transfer function provides stability to the power supply and the best possible performance by:

  • Minimizing steady-state error
  • Minimizing ESR parasitic effect
  • Increasing bandwidth of power supply up to 6.5 kHz

Final design

After calculating all the passive component values for the feedback loop compensator and determining the converter’s main parameters, the entire flyback can be designed using the flyback regulator. Figure 4 shows the circuit’s final design using all calculated parameters.

Figure 4 Here is how the final design circuit schematic looks. Source: Monolithic Power Systems

Figure 5 shows the bode plot of the complete loop gain frequency response.

Figure 5 Bode plot is shown for the complete loop gain frequency response. Source: Monolithic Power Systems

Obtaining the flyback averaged model via small-signal analysis is a complex process, but it yields the most accurate approximation of the converter’s transfer functions. In addition, the cross-regulation technique involves secondary-side regulation through optocoupler feedback and an adjustable voltage reference, which complicates calculations.

However, by following the four steps explained in this article, a good approximation can be obtained to improve the power supply’s performance, as the output with the heaviest load is directly regulated. This means that the output can react quickly to load changes.

Joan Mampel is application engineer at Monolithic Power Systems (MPS).

Related Content

The post Design a feedback loop compensator for a flyback converter in four steps appeared first on EDN.

Hot-swap controller protects AI servers

EDN Network - Fri, 03/21/2025 - 00:55

The XDP711-001 48-V digital hot-swap controller from Infineon offers programmable SOA current control for high-power AI servers. It provides I/O voltage monitoring with an accuracy of ≤0.4% and system input current monitoring with an accuracy of ≤0.75% across the full ADC range, enhancing fault detection and reporting.

Built on a three-block architecture, the XDP711-001 integrates high-precision telemetry, digital SOA control, and high-current gate drivers capable of driving up to eight N-channel power MOSFETs. It is designed to drive multiple MOSFETs in parallel, supporting the development of power delivery boards for 4-kW, 6-kW, and 8-kW applications.

The controller operates within an input voltage range of 7 V to 80 V and can withstand transients up to 100 V for 500 ms. It provides input power monitoring with reporting accuracy of ≤1.15% and features a high-speed PMBus interface for active monitoring.

Programmable gate shutdown for severe overcurrent protection ensures shutdown within 1 µs. With options for external FET selection, one-time programming, and customizable fault detection, warning programming, and de-glitch timers, the XDP711-001 offers flexibility for various use cases. Additionally, its analog-assisted digital mode maintains backward compatibility with legacy analog hot swap controllers.

The XDP711-001 will be available for order in mid-2025. For more information on the XPD series of protection and monitoring ICs, click here.

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

The post Hot-swap controller protects AI servers appeared first on EDN.

Snapdragon G chips drive next-gen handheld gaming

EDN Network - Fri, 03/21/2025 - 00:55

Qualcomm unveiled the Snapdragon G series, a lineup of three chips for advanced handheld, dedicated gaming devices. The G3 Gen 3, G2 Gen 2, and G1 Gen 2 SoCs support various play styles and form factors, enabling gamers to play cloud, console, Android, or PC games.

Snapdragon G3 Gen 3 is the first in the G Series to support Lumen, Unreal Engine 5’s dynamic global illumination and reflections technology, for Android handheld gaming. G3 Gen 3 offers 30% faster CPU performance, 28% faster graphics, and improved power efficiency over the previous generation. Wi-Fi 7 support reduces latency and boosts bandwidth.

Snapdragon G2 Gen 2 is optimized for gaming and cloud gaming at 144 frames/s, delivering 2.3x faster CPU performance and 3.8x faster GPU capabilities compared to G2 Gen 1. It also supports Wi-Fi 7 for faster, more reliable connections.

Snapdragon G1 Gen 2 targets a wider audience, supporting 1080p at 120 frames/s over Wi-Fi. Designed for cloud gaming on handheld Android devices, it boosts CPU performance by 80% and GPU performance by 25% for smooth gameplay.

Starting this quarter, OEMs like AYANEO, ONEXSUGAR, and Retroid Pocket will release devices powered by the Snapdragon G series. For more details on all three platforms, click here.

Qualcomm Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

The post Snapdragon G chips drive next-gen handheld gaming appeared first on EDN.

MCUs support ASIL C/SIL 2 safety

EDN Network - Fri, 03/21/2025 - 00:53

Microchip’s AVR SD entry-level MCUs feature built-in functional safety mechanisms and a dedicated safety software framework. Intended for applications requiring rigorous safety assurance, they meet ASIL C and SIL 2 requirements and are developed under a TÜV Rheinland-certified functional safety management system.

Hardware safety features include a dual-core lockstep CPU, dual ADCs, ECC on all memory, an error controller, error injection, and voltage and clock monitors. These features reduce fault detection time and software complexity. The AVR SD family detects internal faults quickly and deterministically, meeting Fault Detection Time Interval (FDTI) targets as low as 1 ms to enhance reliability and prevent hazards.

Microchip’s safety framework software integrates with MCU hardware features to manage diagnostics, enabling the devices to detect errors and initiate a safe state autonomously. The AVR SD microcontrollers serve as main processors for critical tasks such as thermal runaway detection and sensor monitoring while consuming minimal power. They can also function as coprocessors, mirroring or offloading safety functions in systems with safety integrity levels up to ASIL D/SIL 3.

Prices for the AVR SD microcontrollers start at $0.93 each in lots of 5000 units, with lower pricing for higher volumes.

AVR SD product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

The post MCUs support ASIL C/SIL 2 safety appeared first on EDN.

Broad GaN FET lineup eases design headaches

EDN Network - Fri, 03/21/2025 - 00:53

Nexperia has expanded its GaN FET portfolio with 12 new E-mode devices, available in both low- and high-voltage options. The additions address the demand for higher efficiency and compact designs across consumer, industrial, server/computing, and telecommunications markets. Nexperia’s portfolio includes both cascode and E-mode GaN FETs, available in a wide variety of packages, providing flexibility for diverse design needs.

The new offerings include 40-V bidirectional devices (RDS(on) <12 mΩ), designed for overvoltage protection, load switching, and low-voltage applications such as battery management systems in mobile devices and laptop computers. These devices provide critical support for applications requiring efficient and reliable switching.

Also featured are 100-V and 150-V devices (RDS(on) <7 mΩ), useful for synchronous rectification in power supplies for consumer devices, DC/DC converters in datacom and telecom equipment, photovoltaic micro-inverters, Class-D audio amplifiers, and motor control systems in e-bikes, forklifts, and light electric vehicles. The release also includes 700-V devices (RDS(on) >140 mΩ) for LED drivers and power factor correction (PFC) applications, along with 650-V devices (RDS(on) >350 mΩ) suitable for AC/DC converters, where slightly higher on-resistance is acceptable for the specific application.

To learn more about Nexperia’s E-mode GaN FETs, click here.

Nexperia

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

The post Broad GaN FET lineup eases design headaches appeared first on EDN.

NVIDIA switches scale AI with silicon photonics

EDN Network - Fri, 03/21/2025 - 00:52

NVIDIA’s Spectrum-X and Quantum-X silicon photonics-based network switches connect millions of GPUs, scaling AI compute. They achieve up to 1.6 Tbps per port and up to 400 Tbps aggregate bandwidth. NVIDIA reports the switch platforms use 4x fewer lasers for 3.5x better power efficiency, 63x greater signal integrity, 10x higher network resiliency at scale, and 1.3x faster deployment than conventional networks.

Spectrum-X Photonics Ethernet switches support 128 ports of 800 Gbps or 512 ports of 200 Gbps, delivering 100 Tbps of total bandwidth. A high-capacity variant offers 512 ports of 800 Gbps or 2048 ports of 200 Gbps, for a total throughput of 400 Tbps.

Quantum-X Photonics InfiniBand switches provide 144 ports of 800 Gbps, achieved using 200 Gbps SerDes per port. Built-in liquid cooling keeps the onboard silicon photonics from overheating. According to NVIDIA, Quantum-X Photonics switches are 2x faster and offer 5x higher scalability for AI compute fabrics compared to the previous generation.

NVIDIA’s silicon photonics ecosystem includes collaborations with TSMC, Coherent, Corning, Foxconn, Lumentum, and SENKO to develop an integrated silicon-optics process and robust supply chain.

Quantum-X Photonics InfiniBand switches are expected to be available later this year. Spectrum-X Photonics Ethernet switches will be coming in 2026 from leading infrastructure and system vendors. Learn more about NVIDIA’s silicon photonics here.

NVIDIA

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

The post NVIDIA switches scale AI with silicon photonics appeared first on EDN.

Quantum Critical Metals stakes Prophecy Germanium-Gallium-Zinc Project in northern British Columbia

Semiconductor today - Thu, 03/20/2025 - 22:50
Canadian mineral exploration company Quantum Critical Metals Corp has announced the staking of its Prophecy Germanium-Gallium-Zinc Project, a new, highly prospective critical metals property in northern British Columbia. Additionally, the firm has staked a second zinc-focused property in southern British Columbia, further expanding its strategic critical metals portfolio...

TI launches integrated GaN power stages in TOLL packages

Semiconductor today - Thu, 03/20/2025 - 18:10
To simplify data-center design, at the Applied Power Electronics Conference (APEC 2025) in Atlanta, GA, USA (16–20 March), Dallas-based Texas Instruments Inc (TI) introduced a new family of integrated GaN power stages in industry-standard transistor outline leadless (TOLL) packaging, allowing designers to take advantage of the efficiency of TI’s GaN without costly and time-consuming redesigns...

Can a free running LMC555 VCO discharge its timing cap to zero?

EDN Network - Thu, 03/20/2025 - 16:16

Frequent design idea (DI) contributor Nick Cornford recently published a synergistic pair of DIs “A pitch-linear VCO, part 1: Getting it going” and “A pitch-linear VCO, part 2: taking it further.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

The main theme of these articles is design techniques for audio VCOs that have an exponential (a.k.a. linear in pitch) relationship between control voltage and frequency. Great work Nick! I became particularly interested in the topic during a lively discussion (typical of editor Aalyia’s DI kitchen) in the comments section. The debate was about whether such a VCO could be built around the venerable 555 analog timer. Some said nay, others yea. I leaned toward the latter opinion and decided to try to put a schematic where my mouth was. Figure 1 is the result.

Figure 1 555 VCO discharges timing cap C1 completely to the negative rail via a Reset pulse.

The nay-sayers’ case hinged on a perceived inability of the 555 architecture to completely discharge the timing capacitor, C1 in Figure 1. They seemed to have a good argument because, in its usual mode of operation, the discharge of C1 ends when the trigger input level is crossed. This normally happens at one third of the supply rail differential, and one third is a long way from zero! But it turns out the 555, despite being such an old dog, knows a different trick. It involves a very seldom-used feature of this ancient chip: the reset pin 4.

The 555 datasheet says a pulse on reset will override trigger and also force discharge of C1. In Figure 1, R3 and C2 provide such a pulse when the OUT pin goes low at the end of the timing cycle. The R3C2 product ensures the pulse is long enough for the 15 Ω Ron of the Dch pin to accurately evacuate C1. 
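Whether the reset pulse is long enough is simple RC arithmetic: discharging C1 through the Dch pin's 15 Ω Ron down to a residual voltage Vres takes Ron·C1·ln(V0/Vres), and the pulse duration scales with the R3C2 product. A quick Python sanity check, where every component value except the 15 Ω Ron is an assumed illustration value (the schematic's actual values may differ):

```python
import math

# Hypothetical values; the article's text gives only Ron = 15 ohm.
RON = 15.0           # 555 discharge-pin on-resistance (from the article)
C1 = 10e-9           # timing cap, assumed 10 nF
R3, C2 = 10e3, 1e-9  # assumed reset-pulse network

def discharge_time(v0, vres, r=RON, c=C1):
    """Time for C1 to decay exponentially from v0 to vres through r."""
    return r * c * math.log(v0 / vres)

t_needed = discharge_time(v0=5.0, vres=0.005)  # ~7 time constants to 0.1%
t_pulse = R3 * C2                              # reset pulse scale ~ R3*C2

print(t_needed < t_pulse)  # True for these assumed values
```

With these assumed values the pulse scale (~10 µs) comfortably exceeds the roughly 1 µs needed to pull C1 within 0.1% of the negative rail.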

And that’s it. Problem solved as sketched in Figure 2.

Figure 2 The VCO waveforms; reset pulses at the end of each timing cycle, and is triggered when Vc1 = Vcon, to force an adequately complete discharge of C1.

Figure 3 illustrates the satisfactory log conformity (due mostly to my shameless theft of Nick’s clever resistor ratios) of the resulting 555 VCO, showing good exponential (linear in pitch) behavior over the desired two octaves of 250 to 1000 Hz.

Figure 3 Log plot of the frequency versus control voltage for the two-octave linear-in-pitch VCO. [X axis = Vcon volts (inverted), Y axis = Hz / 16 = 250 Hz to 1 kHz]
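For reference, the linear-in-pitch target that Figure 3 plots is just an exponential V-to-F law: each fixed control-voltage step multiplies frequency by a fixed ratio. A minimal sketch, with the volts-per-octave scale assumed for illustration (the circuit's actual scale depends on Nick's resistor ratios):

```python
def pitch_linear_freq(vcon, f0=250.0, volts_per_octave=1.0):
    """Exponential (linear-in-pitch) V-to-F law: each additional
    volts_per_octave of control voltage doubles the frequency.
    f0 and volts_per_octave are illustrative, not circuit values."""
    return f0 * 2.0 ** (vcon / volts_per_octave)

# Two octaves above f0 takes exactly 2x volts_per_octave of control voltage:
print(pitch_linear_freq(0.0))  # 250.0
print(pitch_linear_freq(2.0))  # 1000.0
```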

In fact, at the price of an extra resistor, it might be possible to improve linearity enough to pick up another half a volt and half an octave on both ends of the pitch range to span 177 Hz to 1410 Hz. See Figure 4 and Figure 5.

Figure 4 R4 sums ~6% of Vcon with the C1 timing ramp to get the improvement in linearity shown in Figure 5.

Figure 5 The effect of the R4 modification showing a linearity improvement. [X axis = Vcon volts (inverted), Y axis = Hz / 16]

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Can a free running LMC555 VCO discharge its timing cap to zero? appeared first on EDN.

Enhancing Wireless Communication with AI-Optimized RF Systems

ELE Times - Thu, 03/20/2025 - 14:23
Introduction: The Convergence of AI and RF Engineering

The integration of Artificial Intelligence (AI) into Radio Frequency (RF) systems marks a paradigm shift in wireless communications. Traditional RF design relies on static, rule-based optimization, whereas AI enables dynamic, data-driven adaptation. With the rise of 5G, mmWave, satellite communications, and radar technologies, AI-driven RF solutions are crucial for maximizing spectral efficiency, improving signal integrity, and reducing energy consumption.

The Urgency for AI in RF Systems: Industry Challenges & Market Trends

The RF industry is under immense pressure to meet growing demands for higher data rates, better spectral utilization, and reduced latency. One of the key challenges is Dynamic Spectrum Management, where the increasing scarcity of available spectrum forces telecom providers to adopt intelligent allocation mechanisms. AI-powered systems can predict and allocate spectrum dynamically, ensuring optimal utilization and minimizing congestion.

Another significant challenge is Electromagnetic Interference (EMI) Mitigation. As the density of wireless devices grows, the likelihood of interference between different RF signals increases. AI can analyze vast amounts of data in real-time to predict and mitigate EMI, thus improving overall signal integrity.

Power Efficiency is another major concern, especially in battery-operated and energy-constrained applications. AI-driven power control mechanisms in RF front-ends enable systems to dynamically adjust transmission power based on network conditions, leading to significant energy savings. Additionally, Edge Processing Demands are increasing with the advent of autonomous systems that require real-time, AI-driven RF adaptation for high-speed decision-making and low-latency communications.

Advanced AI Techniques in RF System Optimization

Industry leaders like Qualcomm, Ericsson, and NVIDIA are investing heavily in AI-driven RF innovations. The following AI methodologies are transforming RF architectures:

Reinforcement Learning for Adaptive Spectrum Allocation

AI-driven Cognitive Radio Networks (CRNs) leverage Deep Reinforcement Learning (DRL) to optimize spectrum usage dynamically. By continuously learning from environmental conditions and past allocations, DRL can predict interference patterns and proactively assign spectrum in a way that maximizes efficiency. This allows for the intelligent utilization of both sub-6 GHz and mmWave bands, ensuring high data throughput while minimizing collisions and latency.
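Full DRL agents are well beyond a news page, but the explore-versus-exploit behavior they build on can be sketched as a simple epsilon-greedy bandit picking the least-congested channel. Everything below (channel success rates, learning loop) is a toy illustration, not any vendor's algorithm:

```python
import random

def pick_channel(q, eps=0.1):
    """Epsilon-greedy choice over per-channel quality estimates q."""
    if random.random() < eps:
        return random.randrange(len(q))           # explore
    return max(range(len(q)), key=q.__getitem__)  # exploit best estimate

def update(q, counts, ch, reward):
    """Incremental-mean update of the chosen channel's estimate."""
    counts[ch] += 1
    q[ch] += (reward - q[ch]) / counts[ch]

# Toy environment: channel 2 succeeds most often (assumed probabilities).
random.seed(0)
success = [0.2, 0.5, 0.9]
q, counts = [0.0] * 3, [0] * 3
for _ in range(2000):
    ch = pick_channel(q)
    update(q, counts, ch, 1.0 if random.random() < success[ch] else 0.0)

print(max(range(3), key=q.__getitem__))  # index of the best-estimated channel
```

A DRL-based CRN replaces the lookup table with a neural network and adds state (occupancy history, location, time), but the feedback loop is the same.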

Deep Neural Networks for RF Signal Classification & Modulation Recognition

Traditional RF signal classification methods struggle in complex, noisy environments. AI-based techniques such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTMs) networks enhance modulation recognition accuracy, even in fading channels. These deep learning models can also be used for RF fingerprinting, which improves security by uniquely identifying signal sources. Furthermore, AI-based anomaly detection helps identify and counteract jamming or spoofing attempts in critical communication systems.

AI-Driven Beamforming for Massive MIMO Systems

Massive Multiple-Input Multiple-Output (MIMO) is a cornerstone technology for 5G and 6G networks. AI-driven beamforming techniques use deep reinforcement learning to dynamically adjust transmission beams, improving directional accuracy and link reliability. Additionally, unsupervised clustering methods help optimize beam selection by analyzing traffic load variations, ensuring that the best possible configuration is applied in real-time.
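What the learned beam weights ultimately do can be seen with a plain (non-AI) conjugate beamformer on a uniform linear array; the array size, element spacing, and angles below are arbitrary illustration values:

```python
import cmath
import math

def steering(n, spacing, theta_deg):
    """Array response of an n-element uniform linear array
    (element spacing in wavelengths) toward angle theta."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * spacing * k * s) for k in range(n)]

def gain_db(weights, theta_deg, spacing=0.5):
    """Power gain of the weighted array toward a given angle."""
    v = steering(len(weights), spacing, theta_deg)
    resp = sum(w.conjugate() * x for w, x in zip(weights, v))
    return 10 * math.log10(abs(resp) ** 2)

n = 8
w = steering(n, 0.5, 30.0)  # matched (conjugate) beamformer aimed at 30 deg
print(round(gain_db(w, 30.0), 1))            # peak: 10*log10(64) ~ 18.1 dB
print(gain_db(w, -40.0) < gain_db(w, 30.0))  # off-axis response is lower
```

An AI-driven scheme would choose or refine such weights from traffic and channel measurements rather than from a known steering angle.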

Generative Adversarial Networks (GANs) for RF Signal Synthesis

GANs are being explored for RF waveform synthesis, where they generate realistic signal patterns that adapt to changing environmental conditions. This capability is particularly beneficial in electronic warfare (EW) applications, where adaptive waveform generation can enhance jamming resilience. GANs are also useful for RF data augmentation, allowing AI models to be trained on synthetic RF datasets when real-world data is scarce.

AI-Enabled Digital Predistortion (DPD) for Power Amplifiers

Power amplifiers (PAs) suffer from nonlinearities that introduce spectral regrowth, degrading signal quality. AI-driven Digital Predistortion (DPD) techniques leverage neural network-based PA modeling to compensate for these distortions in real-time. Bayesian optimization is used to fine-tune DPD parameters dynamically, ensuring optimal performance under varying transmission conditions. Additionally, adaptive biasing techniques help improve PA efficiency by adjusting power consumption based on the input signal’s requirements.
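The principle behind DPD, neural-network-based or otherwise, is to pre-apply an approximate inverse of the PA's nonlinearity. A deliberately tiny memoryless example, with an assumed cubic compression coefficient standing in for a real PA model:

```python
def pa(x, a3=-0.1):
    """Toy memoryless PA: linear gain plus a cubic compression term
    (coefficient assumed for illustration only)."""
    return x + a3 * x ** 3

def dpd(x, a3=-0.1):
    """First-order polynomial predistorter: pre-apply the inverse of
    the cubic term so that pa(dpd(x)) ~ x for small signals."""
    return x - a3 * x ** 3

x = 0.5
raw = pa(x)
linearized = pa(dpd(x))
print(abs(raw - x) > abs(linearized - x))  # True: predistortion shrinks error
```

Production DPD replaces the fixed cubic inverse with an adaptively fitted model (a memory polynomial or a neural network) that is updated from feedback samples of the PA output.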

Industry-Specific Applications of AI-Optimized RF Systems

The impact of AI-driven RF innovation extends across multiple high-tech industries:

Telecommunications: AI-Powered 5G & 6G Networks

AI plays a crucial role in optimizing adaptive coding and modulation (ACM) techniques, allowing for dynamic throughput adjustments based on network conditions. Additionally, AI-enhanced network slicing enables operators to allocate bandwidth efficiently, ensuring quality-of-service (QoS) for diverse applications. AI-based predictive analytics also assist in proactive interference management, allowing networks to mitigate potential disruptions before they occur.

Defense & Aerospace: Cognitive RF for Military Applications

In military communications, AI is revolutionizing RF situational awareness, enabling autonomous systems to detect and analyze threats in real-time. AI-driven electronic countermeasures (ECMs) help counteract enemy jamming techniques, ensuring robust and secure battlefield communications. Machine learning algorithms are also being deployed for predictive maintenance of radar and RF systems, reducing operational downtime and enhancing mission readiness.

Automotive & IoT: AI-Driven RF Optimization for V2X Communication

Vehicle-to-everything (V2X) communication requires reliable, low-latency RF links for applications such as autonomous driving and smart traffic management. AI-powered spectrum sharing ensures that vehicular networks can coexist efficiently with other wireless systems. Predictive congestion control algorithms allow urban IoT deployments to adapt to traffic variations dynamically, improving efficiency. Additionally, AI-driven adaptive RF front-end tuning enhances communication reliability in connected vehicles by automatically adjusting antenna parameters based on driving conditions.

Satellite Communications: AI-Enabled Adaptive Link Optimization

Satellite communication systems benefit from AI-driven link adaptation, where AI models adjust signal parameters based on atmospheric conditions such as rain fade and ionospheric disturbances. Machine learning algorithms are also being used for RF interference classification, helping satellite networks distinguish between different types of interference sources. Predictive beam hopping strategies optimize resource allocation in non-geostationary satellite constellations, improving coverage and efficiency.

The Future of AI-Optimized RF: Key Challenges and Technological Roadmap

While AI is revolutionizing RF systems, several roadblocks must be addressed. One major challenge is computational overhead, as implementing AI at the edge requires energy-efficient neuromorphic computing solutions. The lack of standardization in AI-driven RF methodologies also hinders widespread adoption, necessitating global collaboration to establish common frameworks. Furthermore, security vulnerabilities pose risks, as adversarial attacks on AI models can compromise RF system integrity.

Future Innovations

One promising area is Quantum Machine Learning for RF Signal Processing, which could enable ultra-low-latency decision-making in complex RF environments. Another key advancement is Federated Learning for Secure Distributed RF Intelligence, allowing multiple RF systems to share AI models while preserving data privacy. Additionally, AI-Optimized RF ASICs & Chipsets are expected to revolutionize real-time signal processing by embedding AI functionalities directly into hardware.

Conclusion

AI-driven RF optimization is at the forefront of wireless communication evolution, offering unparalleled efficiency, adaptability, and intelligence. Industry pioneers are integrating AI into RF design to enhance spectrum utilization, interference mitigation, and power efficiency. As AI algorithms and RF hardware continue to co-evolve, the fusion of these technologies will redefine the future of telecommunications, defense, IoT, and satellite communications.

The post Enhancing Wireless Communication with AI-Optimized RF Systems appeared first on ELE Times.

OSRAM’s and Nichia’s micro-LED solutions boost resolution 100-fold over traditional matrix LEDs

Semiconductor today - Thu, 03/20/2025 - 14:12
In its report ‘Automotive MicroLED Comparison 2025’ focusing on the new micro-LED-based technology emerging in the automotive sector, market research and strategy consulting company Yole Group notes that two leading LED companies ams OSRAM and Nichia have developed dedicated micro-LED solutions, enabling more than a 100-fold increase in resolution compared to existing matrix LED systems based on discrete LEDs...

Tiger and GESemi selling thin-film GaAs flexible PV production equipment

Semiconductor today - Thu, 03/20/2025 - 10:46
Tiger Group and GESemi are now accepting offers for equipment used to produce high-efficiency gallium arsenide (GaAs)-based thin-film photovoltaic (PV) cells. The fully decommissioned, ready-to-ship manufacturing assets from Ubiquity Solar of Endicott, NY, USA — including nearly 600 crates stored in South Central New York — feature brands such as Aixtron Group, Attolight, GigaMat, SCHMID, Hercules and KLA Corp (KLA-Tencor)...

Data center solutions take center stage at APEC 2025

EDN Network - Thu, 03/20/2025 - 09:11

This year during APEC, much of the focus on the show floor revolved around data center tech, with companies showcasing high-density power supply units (PSU), battery backup units (BBU), intermediate bus converters (IBC), and GPU solutions (Figure 1). 

Figure 1: Up to 12 kW Infineon PSU technology leverages a mixture of the CoolSiC, CoolMOS, and CoolGaN technologies.

The motivation comes from the massive power demand increase that generative AI, and LLMs in particular, have brought on, pushing data centers from 2% of global power consumption to a projected 7% by 2030. That demand is driving a shift in power distribution: from single-phase 120 V AC stepped down to 48 V, to three-phase 250-350 V AC stepped down to 400 V DC rails attached to the rack and distributed from there (to switches, PSUs, compute trays, switch trays, BBUs, and GPUs).

Infineon’s booth presented a comprehensive suite of solutions from the “power grid to the core.” The BBU technology (Figure 2) utilizes the partial power converter (PPC) topology to enable high power densities (> 12 kW) using scalable 4 kW power converter cards.

Figure 2: Infineon BBU roadmap, using both Si and GaN to scale up the power density of the converters with high efficiencies. Source: Infineon

The technology boasts an efficiency of 99.5%, using lower-voltage (40 V and 80 V) switches to improve the figure of merit (FOM) and yield efficiency gains. The solutions aim to meet the space restrictions of modern BBUs, which are outfitted with more and more batteries and hence leave less space for the embedded DC/DC converter.

Their latest generation of vertical power delivery modules feature a leap in GPU/AI card power delivery, offering up to 2 A/mm2. These improvements create massive space-savings on the already space-constrained AI cards that often require 2000 A to 3000 A for power-hungry chips such as the Nvidia Blackwell GPU.

Instead of being mounted laterally, alongside the chip, these devices deliver power on the underside of the card to massively reduce power delivery losses. The backside mounting does come with profile constraints: there is a max height of 5 mm to facilitate heatsink mounting on the other side of the board, so these modules must maintain their 4-mm height.

The first generation of the dual-phase module placed the silicon device on top of the substrate with integrated inductors and capacitors to achieve 1 A/mm2, or 140 A max, in a 10 x 9 mm package. This was followed by a dual-phase module that improved to 1.5 A/mm2, or 160 A max, within 8 x 8 mm dimensions. Embedding the silicon into the substrate, so that only one PCB is needed, is what contributed to the major space savings in this iteration (Figure 4).

Figure 4: The second generation of Infineon vertical power delivery modules mounted on the backside of GPU PCB deliver a total of 2000 A. An Infineon controller IC can also be seen providing the necessary voltage/current through coordination with the vertical power delivery modules and chip.

The just-released third generation adds two more power stages for a quad-phase module delivering 2 A/mm2, or 280 A max, in the 10 x 9 mm space, doubling the current density of the first generation in the same footprint (Figure 5).

Figure 5: Third generation of Infineon vertical power delivery modules are mounted on the backside of GPU PCB delivering a total of 2,000 A. 
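Back-of-the-envelope arithmetic with the figures quoted above shows why the density matters: feeding a 2000-3000 A GPU card from 280 A modules takes a bank of them, and board area scales directly with the count:

```python
import math

# Figures quoted above: a quad-phase module delivers 280 A max from a
# 10 x 9 mm footprint, while AI cards can draw 2000 A to 3000 A.
area_mm2 = 10 * 9
module_max_a = 280

for card_current in (2000, 3000):
    n = math.ceil(card_current / module_max_a)
    print(f"{card_current} A -> {n} modules, {n * area_mm2} mm^2 of board area")
```

For these quoted ratings, 2000 A needs 8 modules (720 mm² of backside area) and 3000 A needs 11 (990 mm²), which is why the custom multi-substrate and higher-density options mentioned below are attractive.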

Custom solutions can go beyond this, integrating more power stages in a single substrate. Other enhancements include bypassing the motherboard and direct-attaching to the substrate in the GPU since PCB substrate materials are lossy for signals with high current densities.

However, this calls for closer collaboration with SoC vendors that are willing to implement system-level solutions. Higher-current-density solutions are in the works at Infineon, potentially doubling the current density with another multi-phase module.

The Navitas booth also showed two form factors of PSUs: a common redundant power supply (CRPS) form factor and a longer PSU that meets open compute project (OCP) guidelines and complies with the ORv3 base specification (Figure 6). The CRPS solution delivers 4.5 kW in two stages, a SiC PFC front end followed by a GaN LLC, and offers titanium-level efficiency.

Figure 6: Typical rack is shown with RAM, GPU, PSUs, and airflow outlet with barrel fans. The PSUs conform to the CRPS and provide redundancy to encourage zero downtime in the event of transient faults, brownouts, and blackouts.

Hyperscalers and high-performance compute (HPC) applications that utilize the OCP architecture can install PSUs in a row to centralize power in the rack. The Navitas PSU for this data-center topology delivers up to 8.5 kW at up to 98% efficiency using a three-phase interleaved CCM totem-pole SiC PFC and a three-phase GaN LLC (Figure 7).

Figure 7: Navitas 8.5 kW PSU is geared toward hyperscalers using both Gen-3 Fast SiC and GaNSafe devices.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content

The post Data center solutions take center stage at APEC 2025 appeared first on EDN.

STM32CubeProgrammer 2.18: Improving the “flow” in “workflow”

ELE Times - Thu, 03/20/2025 - 08:19

Author: STMicroelectronics

STM32CubeProgrammer 2.18 brings new features to improve our developers’ experience, making flashing and debugging STM32 microcontrollers more straightforward and intuitive as we close 2024. For instance, the new software leverages STM32 security firmware update (root security system extension binaries), helps change multiple option bytes more efficiently through a synthetic view, and makes porting user configuration settings easier. It is, therefore, the most user-friendly version yet, as it aims to make development on STM32 feel less like work and more like flow.

What’s new in STM32CubeProgrammer 2.18? New MCU Support

While nearly every version of STM32CubeProgrammer comes with new MCU support, 2.18 is particularly noteworthy for the number of added devices. Users can now work with the STM32WL3 announced just a few weeks ago, the STM32N6 launched a few days ago, and the new STM32C0 devices with 64 KB and 256 KB of flash.

STM32CubeProgrammer also brings additional feature support for the STM32H7R3/7S3/7R7/7S7, all STM32 MPUs, and the STM32U5. For instance, the STM32H7R/S MCUs can now perform Secure Firmware Installation, while the STM32MP25 gets a GUI to manage PMIC registers and export settings to a binary file, which makes porting them to another project a breeze. And the STM32U5 can now restore its option byte configuration to factory settings if developers make an error that gets them stuck.

New improvements to the user experience

ST also continues to increase the number of supported features when using the SEGGER J-Link probe and flasher. In version 2.18, STM32CubeProgrammer adds the ability to securely install the Bluetooth stack on an STM32WB via a J-Link probe. Hence, developers can use their SEGGER tool for more use cases, making these features more widespread.

We are also introducing new improvements to the user experience, such as a project mode that allows users to save and restore configuration and connection settings, option byte values, firmware lists, external flash loaders, security firmware updates (root security system extension binaries), stack install settings for the STM32WB, and automatic mode parameters. In a nutshell, we want developers to collaborate more efficiently by importing and exporting major project elements so they can focus on their code rather than ticking boxes and applying the same settings repeatedly.

STM32CubeProgrammer 2.18 also adds a new synthetic option byte view to see and edit multiple option bytes on a single row instead of having to scroll through detailed lists. For expert users who know exactly what they want to do, this synthetic view makes changing an option byte a lot quicker. Finally, to facilitate updates to RSSe binaries, STM32HSM-V2 personalization files, and option bytes templates, these elements are now delivered separately in the X-CUBE-RSSe expansion package supported by both STM32CubeProgrammer and Trusted Package Creator tools. Consequently, these elements are no longer part of the latest version of STM32CubeProgrammer and should be downloaded separately.

What is STM32CubeProgrammer? An STM32 flasher and debugger

At its core, STM32CubeProgrammer helps debug and flash STM32 microcontrollers. As a result, it includes features that optimize these two processes. For instance, version 2.6 introduced the ability to dump the entire register map and edit any register on the fly. Previously, changing a register’s value meant changing the source code, recompiling it, and flashing the firmware. Testing new parameters or determining if a value is causing a bug is much simpler today. Similarly, engineers can use STM32CubeProgrammer to flash all external memories simultaneously. Traditionally, flashing the external embedded storage and an SD card demanded developers launch each process separately. STM32CubeProgrammer can do it in one step.

Another challenge for developers is parsing the massive amount of information passing through STM32CubeProgrammer. Anyone who flashes firmware knows how difficult it is to track all logs. Hence, we brought custom traces that allow developers to assign a color to a particular function. It ensures developers can rapidly distinguish a specific output from the rest of the log. Debugging thus becomes a lot more straightforward and intuitive. Additionally, it can help developers coordinate their color scheme with STM32CubeIDE, another member of our unique ecosystem designed to empower creators.

What are some of its key features? New MCU support

Most new versions of STM32CubeProgrammer support a slew of new MCUs. For instance, version 2.16 brought compatibility with the 256 KB version of the STM32U0s. The device was the new ultra-low power flagship model for entry-level applications thanks to a static power consumption of only 16 nA in standby. STM32CubeProgrammer 2.16 also brought support for the 512 KB version of the STM32H5, and the STM32H7R and STM32H7S, which come with less Flash so integrators that must use external memory anyway can reduce their costs. Put simply, ST strives to update STM32CubeProgrammer as rapidly as possible to ensure our community can take advantage of our newest platforms rapidly and efficiently.

SEGGER J-Link probe support

To help developers optimize workflow, we’ve worked with SEGGER to support the J-Link probe fully. This means that the hardware flasher has access to features that were previously only available on an ST-LINK module. For instance, the SEGGER system can program internal and external memory or tweak the read protection level (RDP). Furthermore, using the J-Link with STM32CubeProgrammer means developers can view and modify registers. And since version 2.17, we added the ability to generate serial numbers and automatically increment them within STM32CubeProgrammer, thus hastening the process of flashing multiple STM32s in one batch.

We know that many STM32 customers use the SEGGER probe because it enables them to work with more MCUs, it is fast, or they’ve adopted software by SEGGER. Hence, STM32CubeProgrammer made the J-Link vastly more useful, so developers can do more without leaving the ST software.

Exporting option bytes and editing memory fields

Other quality-of-life improvements aim to make STM32CubeProgrammer more intuitive. For instance, it is now possible to export an STM32’s option bytes. Very simply, they are a way to store configuration options, such as read-out protection levels, watchdog settings, power modes, and more. The MCU loads them early in the boot process, and they are stored in a specific part of the memory that’s only accessible by debugging tools or the bootloader. By offering the ability to export and import option bytes, STM32CubeProgrammer enables developers to configure MCUs much more easily. Similarly, version 2.17 can now edit memory fields in ASCII to make certain sections a lot more readable.

Automating the installation of a Bluetooth LE stack

Until now, developers updating their Bluetooth LE wireless stack had to figure out the address of the first memory block to use, which varied based on the STM32WB and the type of stack used. For instance, installing the basic stack on the STM32WB5x would start at address 0x080D1000, whereas a full stack on the same device would start at 0x080C7000, and the same package starts at 0x0805A000 on the STM32WB3x with 512 KB of memory. Developers often had to find the start address in STM32CubeWB/Projects/STM32WB_Copro_Wireless_Binaries. The new version of STM32CubeProgrammer comes with an algorithm that determines the right start address based on the current wireless stack version, the device, and the stack to install.
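The lookup the tool now automates can be pictured as a table keyed by device and stack type. The code below is a hypothetical illustration built only from the three addresses quoted above, not ST's actual implementation; real projects should rely on STM32CubeProgrammer's own lookup:

```python
# Start addresses quoted in the article (illustrative subset only).
STACK_START = {
    ("STM32WB5x", "basic"): 0x080D1000,
    ("STM32WB5x", "full"): 0x080C7000,
    ("STM32WB3x_512K", "full"): 0x0805A000,
}

def stack_start(device, stack):
    """Hypothetical helper mirroring the lookup the tool now automates."""
    try:
        return STACK_START[(device, stack)]
    except KeyError:
        raise ValueError(f"unknown device/stack pair: {device}/{stack}")

print(hex(stack_start("STM32WB5x", "basic")))  # 0x80d1000
```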

A portal to security on STM32

Readers of the ST Blog know STM32CubeProgrammer as a central piece of the security solutions present in the STM32Cube Ecosystem. The utility comes with Trusted Package Creator, which enables developers to upload an OEM key to a hardware secure module and to encrypt their firmware using this same key. OEMs then use STM32CubeProgrammer to securely install the firmware onto the STM32 SFI microcontroller. Developers can even use an I2C or SPI interface, which gives them greater flexibility. Additionally, the STM32H735, STM32H7B, STM32L5, STM32U5, and STM32H5 also support external secure firmware install (SFIx), meaning that OEMs can flash the encrypted binary on memory modules outside the microcontroller.

Secure Manager

Secure Manager is officially supported since STM32CubeProgrammer 2.14 and STM32CubeMX 1.13. Currently, the feature is exclusive to our new high-performance MCU, the STM32H573, which supports a secure ST firmware installation (SSFI) without requiring a hardware secure module (HSM). In a nutshell, it provides a straightforward way to manage the entire security ecosystem on an STM32 MCU thanks to binaries, libraries, code implementations, documentation, and more. Consequently, developers enjoy turnkey solutions in STM32CubeMX while flashing and debugging them with STM32CubeProgrammer. It is thus an example of how STM32H5 hardware and Secure Manager software come together to create something greater than the sum of its parts.

Other security features for the STM32H5

STM32CubeProgrammer enables many other security features on the STM32H5. For instance, the MCU now supports secure firmware installation on internal memory (SFI) and an external memory module (SFIx), which allows OEMs to flash encrypted firmware with the help of a hardware secure module (HSM). Similarly, it supports certificate generation on the new MCU when using Trusted Package Creator and an HSM. Finally, the utility adds SFI and SFIx support on STM32U5s with 2 MB and 4 MB of flash.

Making SFI more accessible

Since version 2.11, STM32CubeProgrammer has received significant improvements to its secure firmware install (SFI) capabilities. For instance, in version 2.15, ST added support for the STM32WBA5. Additionally, we added a graphical user interface highlighting addresses and HSM information. The GUI for Trusted Package Creator also received a new layout under the SFI and SFIx tabs to expose the information needed when setting up a secure firmware install. The Trusted package creator also got a graphical representation of the various option bytes to facilitate their configuration.

Secure secret provisioning for STM32MPx

Since 2.12, STM32CubeProgrammer has a new graphical user interface to help developers set up parameters for the secure secret provisioning available on STM32MPx microprocessors. The mechanism has similarities with the secure firmware install available on STM32 microcontrollers. It uses a hardware secure module to store encryption keys and uses secure communication between the flasher and the device. However, the nature of a microprocessor means more parameters to configure. STM32CubeProgrammer’s GUI now exposes those settings previously available in the CLI version of the utility to expedite workflows.

Double authentication

Since version 2.9, the STM32CubeProgrammer supports a double authentication system when provisioning encryption keys via JTAG or a Boot Loader for the Bluetooth stack on the STM32WB. Put simply, the feature enables makers to protect their Bluetooth stack against updates from end-users. Indeed, developers can update the Bluetooth stack with ST’s secure firmware if they know what they are doing. However, a manufacturer may offer a particular environment and, therefore, may wish to protect it. As a result, the double authentication system prevents access to the update mechanism by the end user. ST published the application note AN5185 to offer more details.

PKCS#11 support

Since version 2.9, STM32CubeProgrammer supports PKCS#11 when encrypting firmware for the STM32MP1. The Public-Key Cryptography Standards (PKCS) 11, also called Cryptoki, is a standard that governs cryptographic processes at a low level. It is gaining popularity as APIs help embedded system developers exploit its mechanisms. On an STM32MP1, PKCS#11 allows engineers to segregate the storage of the private key and the encryption process for the secure secret provisioning (SSP).

SSP is the equivalent of a Secure Firmware Install for MPUs. Before sending their code to OEMs, developers encrypt their firmware with a private-public key system with STM32CubeProgrammer. The IP is thus unreadable by third parties. During assembly, OEMs use the provided hardware secure module (HSM) containing a protected encryption key to load the firmware that the MPU will decrypt internally. However, until now, developers encrypting the MPU’s code had access to the private key. The problem is that some organizations must limit access to such critical information. Thanks to the new STM32CubeProgrammer and PKCS#11, the private key remains hidden in an HSM, even during the encryption process by the developers.

Supporting new STM32 MCUs STM32C0, STM32MP25, and STM32WB05/6/7

Since version 2.17, STM32CubeProgrammer supports STM32C0s with 128 KB of flash. It also recognizes the STM32MP25, which includes a 1.35-TOPS NPU, and all the STM32WB0s, including the STM32WB05, STM32WB05xN, STM32WB06, and STM32WB07. In the latter case, we brought support only a few weeks after their launch, thus showing that STM32CubeProgrammer keeps up with the latest releases to ensure developers can flash and debug their code on the newest STM32s as soon as possible.

Access to the STM32MP13’s bare metal

Microcontrollers demand real-time operating systems because their resources are limited and their event-driven paradigms often require a high level of determinism when executing tasks. Conversely, microprocessors have far more resources and can better manage parallel tasks, so they use a multitasking operating system like OpenSTLinux, our Embedded Linux distribution. However, many customers familiar with the STM32 MCU world have been asking for a way to run an RTOS on our MPUs as an alternative. In a nutshell, they want the familiar ecosystem of an RTOS and the optimizations that come from running bare-metal code while enjoying the resources of a microprocessor.

Consequently, we are releasing STM32CubeMP13 today, which comes with the tools to run a real-time operating system on our MPU. We go into more detail about the package in our STM32MP13 blog post. To make this initiative possible, ST also updated its STM32Cube utilities, such as STM32CubeProgrammer. For instance, we had to ensure that developers could flash the NOR memory. Similarly, STM32CubeProgrammer enables the use of an RTOS on the STM32MP13 by supporting a one-time programmable (OTP) partition.

Traditionally, MPUs use a bootloader like U-Boot to load the Linux kernel securely and efficiently; it serves as the final stage of a boot process that starts by reading the OTP partition. Hence, as developers move from a multitasking OS to an RTOS, it was essential that STM32CubeProgrammer let them program the OTP partition so they could load their operating system. The new STM32CubeProgrammer version also demonstrates how the ST ecosystem works together to release new features.

STM32WB and STM32WBA support

Since version 2.12, STM32CubeProgrammer has brought numerous improvements to the STM32WB series, which is increasingly popular in machine learning applications, as we saw at electronica 2022. Specifically, the ST software brings new graphical tools and an updated wireless stack to assist developers. For instance, the tool gives more explicit guidance on errors, such as when developers try to update a wireless stack with anti-rollback activated but forget to load the previous stack. Similarly, new messages ensure users know if a stack version is incompatible with a firmware update. Finally, STM32CubeProgrammer provides links to download STM32WB patches and get new tips and tricks, so developers don’t have to hunt for them.

Similarly, STM32CubeProgrammer supports the new STM32WBA, the first wireless Cortex-M33 MCU. Made official a few months ago, the MCU opens the way to Bluetooth Low Energy 5.3 and SESIP Level 3 certification. It also has a more powerful radio that can reach up to +10 dBm output power for a more robust signal.

STM32H5 and STM32U5

Support for the STM32H5 began with STM32CubeProgrammer 2.13, which added compatibility with the series’ MCUs, from 128 KB up to 2 MB of flash. Initially, the utility brought security features like debug authentication and authentication key provisioning, which are critical when using the new life-cycle management system. The utility also supported key and certificate generation, firmware encryption, and signature. Over time, ST added support for the new STM32U535 and STM32U545 with 512 KB and 4 MB of flash. These MCUs benefit from password-based RDP regression to facilitate development, as well as SFI secure programming.

Additionally, STM32CubeProgrammer includes an interface for read-out protection (RDP) regression with a password on the STM32U5 series. Developers can define a password and move from level 2, which turns off all debug features, to level 1, which protects the flash against certain read or dump operations, or to level 0, which has no protections. This makes prototyping vastly simpler.
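The three levels and the regression rule described above can be summarized in a short sketch. The table and the `can_regress` helper are illustrative only, derived from the description in this article rather than from any series-specific reference manual.

```python
# Illustrative summary of the read-out protection (RDP) levels described
# above; not an exhaustive reference for any specific STM32 series.
RDP_LEVELS = {
    0: {"debug": True,  "flash_readable": True},   # no protections
    1: {"debug": True,  "flash_readable": False},  # flash shielded from reads/dumps
    2: {"debug": False, "flash_readable": False},  # all debug features off
}

def can_regress(from_level: int, to_level: int, has_password: bool) -> bool:
    """Password-based regression lets a device move back to a lower RDP level
    (assumption: the password was provisioned beforehand)."""
    return has_password and to_level < from_level
```

The helper captures the prototyping benefit: with a provisioned password, a locked level-2 part can be brought back to level 1 or level 0 for debugging.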

STLINK-V3PWR

In many instances, developers use an STLINK probe with STM32CubeProgrammer to flash or debug their device. Hence, we quickly added support for our latest probe, the STLINK-V3PWR, the most extensive source measurement unit and programmer/debugger for STM32 devices. Users who want to see energy profiles and visualize current draw must use STM32CubeMonitor-Power, but STM32CubeProgrammer serves as an interface for all debug features. It can also work with all the probe’s interfaces, such as SPI, UART, I2C, and CAN.

Script mode

The software includes a command-line interface (CLI) for creating scripts. Since the script manager is part of the application, it doesn’t depend on the operating system or its shell environment, so scripts are highly shareable. Another advantage is that the script manager can maintain connections to the target: STM32CubeProgrammer CLI can keep a connection live throughout a session without reconnecting after every command. It can also handle local variables and even supports arithmetic and logic operations on them, so developers can create powerful macros to automate complex processes. The script manager also supports loops and conditional statements, making the CLI even more powerful.
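Teams can also drive the CLI from an external script rather than the built-in script manager. Below is a minimal Python sketch: the `STM32_Programmer_CLI` executable name and the `-c port=SWD`, `-w`, and `-v` flags follow ST’s CLI conventions, but the firmware path and address are placeholders, and flags should be double-checked against the STM32CubeProgrammer user manual.

```python
import subprocess

def build_flash_cmd(firmware: str, address: str = "0x08000000") -> list[str]:
    """Assemble a CLI invocation that connects over SWD, writes the
    firmware at the given address, and verifies the write."""
    return ["STM32_Programmer_CLI", "-c", "port=SWD", "-w", firmware, address, "-v"]

def flash(firmware: str) -> int:
    # Requires a connected board and the CLI on PATH; returns the exit code.
    return subprocess.run(build_flash_cmd(firmware)).returncode

cmd = build_flash_cmd("app.bin")
```

Note the trade-off versus the built-in script manager: an external wrapper like this reconnects on every invocation, whereas the script manager keeps the target connection alive across commands.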

A unifying experience

STM32CubeProgrammer aims to unify the user experience. ST brought all the features of utilities like the ST-LINK Utility, the DFU tools, and others into STM32CubeProgrammer, which became a one-stop shop for developers working on embedded systems. We also designed it to work on all major operating systems and even embedded the Liberica OpenJDK 8 runtime to facilitate installation. Consequently, users do not need to install Java themselves and struggle with compatibility issues before experiencing STM32CubeProgrammer.

Qt 6 support

Since STM32CubeProgrammer 2.16, the ST utility uses Qt 6, the framework’s latest version. Consequently, STM32CubeProgrammer no longer runs on Windows 7 and Ubuntu 18.04. However, Qt 6 patches security vulnerabilities, brings bug fixes, and comes with significant quality-of-life improvements.

 

The post STM32CubeProgrammer 2.18: Improving the “flow” in “workflow” appeared first on ELE Times.

PSU exploded

Reddit:Electronics - Thu, 03/20/2025 - 01:36
Took this out of a unit because it wasn’t turning on; flipped it over and multiple resistors and caps were gone. Most likely a power surge. Thought it would be interesting to post because you don’t see this every day.

