News from the world of micro- and nanoelectronics

Can a free running LMC555 VCO discharge its timing cap to zero?

EDN Network - Thu, 03/20/2025 - 16:16

Frequent design idea (DI) contributor Nick Cornford recently published a synergistic pair of DIs “A pitch-linear VCO, part 1: Getting it going” and “A pitch-linear VCO, part 2: taking it further.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

The main theme of these articles is design techniques for audio VCOs that have an exponential (a.k.a. linear in pitch) relationship between control voltage and frequency. Great work Nick! I became particularly interested in the topic during a lively discussion (typical of editor Aalyia’s DI kitchen) in the comments section. The debate was about whether such a VCO could be built around the venerable 555 analog timer. Some said nay, others yea. I leaned toward the latter opinion and decided to try to put a schematic where my mouth was. Figure 1 is the result.

Figure 1 555 VCO discharges timing cap C1 completely to the negative rail via a Reset pulse.

The nay-sayers’ case hinged on a perceived inability of the 555 architecture to completely discharge the timing capacitor, C1 in Figure 1. They seemed to have a good argument because, in its usual mode of operation, the discharge of C1 ends when the trigger input level is crossed. This normally happens at one third of the supply rail differential, and one third is a long way from zero! But it turns out the 555, despite being such an old dog, knows a different trick. It involves a very seldom-used feature of this ancient chip: the reset pin (pin 4).

The 555 datasheet says a pulse on reset will override trigger and also force discharge of C1. In Figure 1, R3 and C2 provide such a pulse when the OUT pin goes low at the end of the timing cycle. The R3C2 product ensures the pulse is long enough for the 15 Ω Ron of the Dch pin to accurately evacuate C1. 
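
To get a feel for the timing margin involved, here is a rough back-of-the-envelope check written as a short Python sketch. Only the 15 Ω Ron figure comes from the text above; the C1, R3, and C2 values are placeholder assumptions, not Figure 1’s actual components.

```python
import math

# Placeholder component values -- Figure 1's actual values are not reproduced
# here; these are assumptions chosen only to illustrate the timing check.
R_ON = 15.0     # ohms, typical on-resistance of the 555's Dch pin (from the text)
C1   = 10e-9    # farads, assumed timing capacitor
R3   = 10e3     # ohms, assumed reset-pulse resistor
C2   = 1e-9     # farads, assumed reset-pulse capacitor

# Time for C1 to decay to 1% of its starting voltage through Ron:
t_discharge = math.log(100) * R_ON * C1

# The reset pulse generated by R3/C2 lasts on the order of one R3*C2 time
# constant (the exact width depends on the reset pin's threshold voltage).
t_reset = R3 * C2

print(f"discharge to 1%: {t_discharge * 1e9:.0f} ns")
print(f"reset pulse    : {t_reset * 1e6:.1f} us")
print("pulse comfortably longer than discharge:", t_reset > 10 * t_discharge)
```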

And that’s it. Problem solved as sketched in Figure 2.

Figure 2 The VCO waveforms; a reset pulse, triggered when Vc1 = Vcon at the end of each timing cycle, forces an adequately complete discharge of C1.

Figure 3 illustrates the satisfactory log conformity (due mostly to my shameless theft of Nick’s clever resistor ratios) of the resulting 555 VCO, showing good exponential (linear-in-pitch) behavior over the desired two octaves of 250 Hz to 1000 Hz.

Figure 3 Log plot of the frequency versus control voltage for the two-octave linear-in-pitch VCO. [X axis = Vcon volts (inverted), Y axis = Hz / 16 = 250 Hz to 1 kHz]

In fact, at the price of an extra resistor, it might be possible to improve linearity enough to pick up another half a volt and half an octave on both ends of the pitch range to span 177 Hz to 1410 Hz. See Figure 4 and Figure 5.
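
For readers less familiar with the jargon, “linear in pitch” simply means that each equal step in control voltage multiplies the frequency by a fixed ratio. A minimal sketch of that mapping, assuming an arbitrary 1 V-per-octave scaling (the article does not spell out the control-voltage span), is shown below.

```python
V_OCTAVE = 1.0  # volts per octave -- assumed scaling, not taken from the article

def pitch_linear_freq(vcon: float) -> float:
    """Each V_OCTAVE step in control voltage doubles the frequency."""
    return 250.0 * 2.0 ** (vcon / V_OCTAVE)

for v in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(f"Vcon = {v:3.1f} V  ->  f = {pitch_linear_freq(v):6.1f} Hz")
# 0 V -> 250 Hz, 1 V -> 500 Hz, 2 V -> 1000 Hz: the two-octave span of Figure 3.
```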

Figure 4 R4 sums ~6% of Vcon with the C1 timing ramp to get the improvement in linearity shown in Figure 5.

Figure 5 The effect of the R4 modification showing a linearity improvement. [X axis = Vcon volts (inverted), Y axis = Hz / 16]

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Can a free running LMC555 VCO discharge its timing cap to zero? appeared first on EDN.

Enhancing Wireless Communication with AI-Optimized RF Systems

ELE Times - Thu, 03/20/2025 - 14:23
Introduction: The Convergence of AI and RF Engineering

The integration of Artificial Intelligence (AI) into Radio Frequency (RF) systems marks a paradigm shift in wireless communications. Traditional RF design relies on static, rule-based optimization, whereas AI enables dynamic, data-driven adaptation. With the rise of 5G, mmWave, satellite communications, and radar technologies, AI-driven RF solutions are crucial for maximizing spectral efficiency, improving signal integrity, and reducing energy consumption.

The Urgency for AI in RF Systems: Industry Challenges & Market Trends

The RF industry is under immense pressure to meet growing demands for higher data rates, better spectral utilization, and reduced latency. One of the key challenges is Dynamic Spectrum Management, where the increasing scarcity of available spectrum forces telecom providers to adopt intelligent allocation mechanisms. AI-powered systems can predict and allocate spectrum dynamically, ensuring optimal utilization and minimizing congestion.

Another significant challenge is Electromagnetic Interference (EMI) Mitigation. As the density of wireless devices grows, the likelihood of interference between different RF signals increases. AI can analyze vast amounts of data in real-time to predict and mitigate EMI, thus improving overall signal integrity.

Power Efficiency is another major concern, especially in battery-operated and energy-constrained applications. AI-driven power control mechanisms in RF front-ends enable systems to dynamically adjust transmission power based on network conditions, leading to significant energy savings. Additionally, Edge Processing Demands are increasing with the advent of autonomous systems that require real-time, AI-driven RF adaptation for high-speed decision-making and low-latency communications.

Advanced AI Techniques in RF System Optimization

Industry leaders like Qualcomm, Ericsson, and NVIDIA are investing heavily in AI-driven RF innovations. The following AI methodologies are transforming RF architectures:

Reinforcement Learning for Adaptive Spectrum Allocation

AI-driven Cognitive Radio Networks (CRNs) leverage Deep Reinforcement Learning (DRL) to optimize spectrum usage dynamically. By continuously learning from environmental conditions and past allocations, DRL can predict interference patterns and proactively assign spectrum in a way that maximizes efficiency. This allows for the intelligent utilization of both sub-6 GHz and mmWave bands, ensuring high data throughput while minimizing collisions and latency.
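
As a toy illustration of the learning loop behind such schemes (and emphatically not the deep reinforcement learning used in production systems), the sketch below lets a tabular epsilon-greedy agent discover which of four hypothetical channels is least congested using nothing but per-transmission success/failure feedback.

```python
import random

# Tabular epsilon-greedy channel selection: estimate each channel's
# probability of a clean (interference-free) transmission from feedback and
# concentrate traffic on the best one.  The "true" probabilities are assumed
# and hidden from the agent.
N_CHANNELS = 4
TRUE_CLEAR_PROB = [0.9, 0.5, 0.7, 0.3]
EPSILON = 0.1

q = [0.0] * N_CHANNELS   # running estimate of each channel's success rate
n = [0] * N_CHANNELS     # number of times each channel has been tried

random.seed(0)
for _ in range(2000):
    if random.random() < EPSILON:
        ch = random.randrange(N_CHANNELS)                  # explore
    else:
        ch = max(range(N_CHANNELS), key=lambda i: q[i])    # exploit
    reward = 1.0 if random.random() < TRUE_CLEAR_PROB[ch] else 0.0
    n[ch] += 1
    q[ch] += (reward - q[ch]) / n[ch]                      # incremental mean

print("estimated clear probabilities:", [round(x, 2) for x in q])
print("preferred channel:", q.index(max(q)))
```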

Deep Neural Networks for RF Signal Classification & Modulation Recognition

Traditional RF signal classification methods struggle in complex, noisy environments. AI-based techniques such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks enhance modulation recognition accuracy, even in fading channels. These deep learning models can also be used for RF fingerprinting, which improves security by uniquely identifying signal sources. Furthermore, AI-based anomaly detection helps identify and counteract jamming or spoofing attempts in critical communication systems.
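
A minimal sketch of such a classifier, assuming PyTorch is available and treating the two input channels as raw I and Q samples; the layer sizes, frame length, and four modulation classes are placeholders rather than any published architecture.

```python
import torch
import torch.nn as nn

# Minimal CNN over raw I/Q frames (2 input channels = I and Q).  The layer
# sizes, 128-sample frame length, and 4 modulation classes are placeholders.
class ModulationCNN(nn.Module):
    def __init__(self, n_classes: int = 4, seq_len: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (seq_len // 4), n_classes)

    def forward(self, x):                     # x: (batch, 2, seq_len)
        return self.classifier(self.features(x).flatten(1))

model = ModulationCNN()
iq_batch = torch.randn(8, 2, 128)             # stand-in for captured I/Q frames
logits = model(iq_batch)                      # (8, 4) class scores
print(logits.shape)
```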

AI-Driven Beamforming for Massive MIMO Systems

Massive Multiple-Input Multiple-Output (MIMO) is a cornerstone technology for 5G and 6G networks. AI-driven beamforming techniques use deep reinforcement learning to dynamically adjust transmission beams, improving directional accuracy and link reliability. Additionally, unsupervised clustering methods help optimize beam selection by analyzing traffic load variations, ensuring that the best possible configuration is applied in real-time.
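
Stripped of the learning components, the core beam-selection step reduces to scoring a codebook of candidate beams against the current channel estimate, as in the sketch below. The 16-element uniform linear array and 32-beam codebook are arbitrary assumptions for illustration.

```python
import numpy as np

# Codebook beam selection for an assumed 16-element uniform linear array:
# score every candidate steering vector against a (noisy) channel estimate
# and keep the beam with the highest array gain.
rng = np.random.default_rng(1)
N_ANT, N_BEAMS = 16, 32

def steering(theta: float) -> np.ndarray:
    """Half-wavelength ULA steering vector toward angle theta (radians)."""
    n = np.arange(N_ANT)
    return np.exp(1j * np.pi * n * np.sin(theta)) / np.sqrt(N_ANT)

codebook = np.stack([steering(t) for t in np.linspace(-np.pi / 2, np.pi / 2, N_BEAMS)])
true_channel = steering(0.35)                          # user roughly 20 degrees off boresight
h_est = true_channel + 0.1 * (rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT))

gains = np.abs(codebook.conj() @ h_est) ** 2           # |w^H h|^2 per beam
best = int(np.argmax(gains))
print(f"selected beam {best} of {N_BEAMS}, array gain {gains[best]:.2f}")
```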

Generative Adversarial Networks (GANs) for RF Signal Synthesis

GANs are being explored for RF waveform synthesis, where they generate realistic signal patterns that adapt to changing environmental conditions. This capability is particularly beneficial in electronic warfare (EW) applications, where adaptive waveform generation can enhance jamming resilience. GANs are also useful for RF data augmentation, allowing AI models to be trained on synthetic RF datasets when real-world data is scarce.

AI-Enabled Digital Predistortion (DPD) for Power Amplifiers

Power amplifiers (PAs) suffer from nonlinearities that introduce spectral regrowth, degrading signal quality. AI-driven Digital Predistortion (DPD) techniques leverage neural network-based PA modeling to compensate for these distortions in real-time. Bayesian optimization is used to fine-tune DPD parameters dynamically, ensuring optimal performance under varying transmission conditions. Additionally, adaptive biasing techniques help improve PA efficiency by adjusting power consumption based on the input signal’s requirements.
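
The sketch below shows the basic DPD idea on a deliberately simple, memoryless cubic PA model: fit an inverse polynomial by least squares (the indirect-learning approach) and then drive the PA with the predistorted signal. The PA coefficient and polynomial order are invented for the demo; production DPD uses memory polynomials or the neural-network models described above.

```python
import numpy as np

# Toy DPD: a memoryless PA with cubic compression is linearized by fitting a
# 3rd-order inverse polynomial with least squares (indirect learning).
rng = np.random.default_rng(0)

def pa(x):
    """Assumed PA model: unity gain with a cubic compression term."""
    return x - 0.15 * x * np.abs(x) ** 2

x = 0.4 * (rng.standard_normal(4000) + 1j * rng.standard_normal(4000))  # drive signal
y = pa(x)

basis = lambda s: np.column_stack([s, s * np.abs(s) ** 2])   # odd-order polynomial terms
coef, *_ = np.linalg.lstsq(basis(y), x, rcond=None)          # fit post-inverse: f(y) ~ x

x_pd = basis(x) @ coef                                       # reuse f(.) as the predistorter
err_raw = np.mean(np.abs(pa(x) - x) ** 2)
err_dpd = np.mean(np.abs(pa(x_pd) - x) ** 2)
print(f"distortion power without DPD: {err_raw:.2e}, with DPD: {err_dpd:.2e}")
```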

Industry-Specific Applications of AI-Optimized RF Systems

The impact of AI-driven RF innovation extends across multiple high-tech industries:

Telecommunications: AI-Powered 5G & 6G Networks

AI plays a crucial role in optimizing adaptive coding and modulation (ACM) techniques, allowing for dynamic throughput adjustments based on network conditions. Additionally, AI-enhanced network slicing enables operators to allocate bandwidth efficiently, ensuring quality-of-service (QoS) for diverse applications. AI-based predictive analytics also assist in proactive interference management, allowing networks to mitigate potential disruptions before they occur.

Defense & Aerospace: Cognitive RF for Military Applications

In military communications, AI is revolutionizing RF situational awareness, enabling autonomous systems to detect and analyze threats in real-time. AI-driven electronic countermeasures (ECMs) help counteract enemy jamming techniques, ensuring robust and secure battlefield communications. Machine learning algorithms are also being deployed for predictive maintenance of radar and RF systems, reducing operational downtime and enhancing mission readiness.

Automotive & IoT: AI-Driven RF Optimization for V2X Communication

Vehicle-to-everything (V2X) communication requires reliable, low-latency RF links for applications such as autonomous driving and smart traffic management. AI-powered spectrum sharing ensures that vehicular networks can coexist efficiently with other wireless systems. Predictive congestion control algorithms allow urban IoT deployments to adapt to traffic variations dynamically, improving efficiency. Additionally, AI-driven adaptive RF front-end tuning enhances communication reliability in connected vehicles by automatically adjusting antenna parameters based on driving conditions.

Satellite Communications: AI-Enabled Adaptive Link Optimization

Satellite communication systems benefit from AI-driven link adaptation, where AI models adjust signal parameters based on atmospheric conditions such as rain fade and ionospheric disturbances. Machine learning algorithms are also being used for RF interference classification, helping satellite networks distinguish between different types of interference sources. Predictive beam hopping strategies optimize resource allocation in non-geostationary satellite constellations, improving coverage and efficiency.

The Future of AI-Optimized RF: Key Challenges and Technological Roadmap

While AI is revolutionizing RF systems, several roadblocks must be addressed. One major challenge is computational overhead, as implementing AI at the edge requires energy-efficient neuromorphic computing solutions. The lack of standardization in AI-driven RF methodologies also hinders widespread adoption, necessitating global collaboration to establish common frameworks. Furthermore, security vulnerabilities pose risks, as adversarial attacks on AI models can compromise RF system integrity.

Future Innovations

One promising area is Quantum Machine Learning for RF Signal Processing, which could enable ultra-low-latency decision-making in complex RF environments. Another key advancement is Federated Learning for Secure Distributed RF Intelligence, allowing multiple RF systems to share AI models while preserving data privacy. Additionally, AI-Optimized RF ASICs & Chipsets are expected to revolutionize real-time signal processing by embedding AI functionalities directly into hardware.

Conclusion

AI-driven RF optimization is at the forefront of wireless communication evolution, offering unparalleled efficiency, adaptability, and intelligence. Industry pioneers are integrating AI into RF design to enhance spectrum utilization, interference mitigation, and power efficiency. As AI algorithms and RF hardware continue to co-evolve, the fusion of these technologies will redefine the future of telecommunications, defense, IoT, and satellite communications.

The post Enhancing Wireless Communication with AI-Optimized RF Systems appeared first on ELE Times.

OSRAM’s and Nichia’s micro-LED solutions boost resolution 100-fold over traditional matrix LEDs

Semiconductor today - Thu, 03/20/2025 - 14:12
In its report ‘Automotive MicroLED Comparison 2025’ focusing on the new micro-LED-based technology emerging in the automotive sector, market research and strategy consulting company Yole Group notes that two leading LED companies, ams OSRAM and Nichia, have developed dedicated micro-LED solutions, enabling more than a 100-fold increase in resolution compared to existing matrix LED systems based on discrete LEDs...

Tiger and GESemi selling thin-film GaAs flexible PV production equipment

Semiconductor today - Thu, 03/20/2025 - 10:46
Tiger Group and GESemi are now accepting offers for equipment used to produce high-efficiency gallium arsenide (GaAs)-based thin-film photovoltaic (PV) cells. The fully decommissioned, ready-to-ship manufacturing assets from Ubiquity Solar of Endicott, NY, USA — including nearly 600 crates stored in South Central New York — feature brands such as Aixtron Group, Attolight, GigaMat, SCHMID, Hercules and KLA Corp (KLA-Tencor)...

Data center solutions take center stage at APEC 2025

EDN Network - Thu, 03/20/2025 - 09:11

This year during APEC, much of the focus on the show floor revolved around data center tech, with companies showcasing high-density power supply units (PSUs), battery backup units (BBUs), intermediate bus converters (IBCs), and GPU solutions (Figure 1).

Figure 1: Up to 12-kW Infineon PSU technology leverages a mixture of the CoolSiC, CoolMOS, and CoolGaN technologies.

The motivation comes from the massive increase in power demand that generative AI, in particular LLMs, has brought on, pushing data centers’ share of global power consumption from 2% to a projected 7% by 2030. This power demand is driving a shift from 120 kV (single-phase AC) stepped down to 48 V, to 250-350 kV (three-phase AC) stepped down to 400-VDC rails attached to the rack and distributed from there (to switches, PSUs, compute trays, switch trays, BBUs, and GPUs).

Infineon’s booth presented a comprehensive suite of solutions from the “power grid to the core.” The BBU technology (Figure 2) utilizes the partial power converter (PPC) topology to enable high power densities (> 12 kW) using scalable 4 kW power converter cards.

Figure 2: Infineon BBU roadmap, using both Si and GaN to scale up the power density of the converters with high efficiencies. Source: Infineon

The technology boasts an efficiency of 99.5%, using lower-voltage (40 V and 80 V) switches to improve the figure of merit (FOM) and yield efficiency gains. The solutions are aimed at meeting the space restrictions of modern BBUs, which are outfitted with more and more batteries and hence have less and less space for the embedded DC/DC converter.

Infineon’s latest generation of vertical power delivery modules features a leap in GPU/AI card power delivery, offering up to 2 A/mm². These improvements create massive space savings on already space-constrained AI cards that often require 2000 A to 3000 A for power-hungry chips such as the Nvidia Blackwell GPU.
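
A quick sanity check of those numbers, using only the figures quoted above:

```python
# Back-of-the-envelope arithmetic from the quoted figures only.
current_density = 2.0              # A per mm^2 (third-generation modules)
for demand_amps in (2000, 3000):   # A, typical AI-card demand per the article
    area = demand_amps / current_density
    print(f"{demand_amps} A at {current_density} A/mm^2 -> ~{area:.0f} mm^2 of module footprint")
```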

Instead of being mounted laterally, alongside the chip, these devices deliver power from the underside of the card to massively reduce power delivery losses. The backside mounting does come with profile constraints: there is a maximum height of 5 mm to facilitate heatsink mounting on the other side of the board, so these modules must maintain their 4-mm height.

The first generation of the dual-phase module featured a silicon device that sat on top of the substrate with integrated inductors and capacitors to achieve 1 A/mm², or 140 A max, in a 10 x 9 mm package. This was followed by a dual-phase module that improved this to 1.5 A/mm², or 160 A max, within an 8 x 8 mm footprint. Embedding the silicon into the substrate, so that only one PCB is needed, is what contributed to the major space savings in this iteration (Figure 4).

Figure 4: The second generation of Infineon vertical power delivery modules, mounted on the backside of the GPU PCB, delivers a total of 2000 A. An Infineon controller IC can also be seen providing the necessary voltage/current through coordination with the vertical power delivery modules and chip.

The just-released third generation adds two more power stages for a quad-phase module delivering 2 A/mm², or 280 A max, in the same 10 x 9 mm space, doubling the current density of the first generation (Figure 5).

Figure 5: The third generation of Infineon vertical power delivery modules, mounted on the backside of the GPU PCB, delivers a total of 2,000 A.

Custom solutions can go beyond this, integrating more power stages in a single substrate. Other enhancements include bypassing the motherboard and direct-attaching to the substrate in the GPU since PCB substrate materials are lossy for signals with high current densities.

However, this calls for closer collaboration with SoC vendors that are willing to implement system-level solutions. High current density solutions are in the works with Infineon, potentially doubling the current density with another multi-phase module.

The Navitas booth also showed two form factors of PSUs: a common redundant power supply (CRPS) form factor and a longer PSU that meets Open Compute Project (OCP) guidelines and complies with the ORv3 base specification (Figure 6). The CRPS solution delivers 4.5 kW using two stages, a SiC PFC front end followed by a GaN LLC stage, and offers titanium-level efficiency.

Figure 6: A typical rack is shown with RAM, GPUs, PSUs, and an airflow outlet with barrel fans. The PSUs conform to the CRPS form factor and provide redundancy to encourage zero downtime in the event of transient faults, brownouts, and blackouts.

Hyperscalers or high-performance computing (HPC) applications that utilize the OCP architecture can install PSUs in a row to centralize power in the rack. The Navitas PSU offered for this data center topology delivers up to 8.5 kW at up to 98% efficiency using a three-phase interleaved CCM totem-pole SiC PFC and a three-phase GaN LLC (Figure 7).

Figure 7: Navitas 8.5 kW PSU is geared toward hyperscalers using both Gen-3 Fast SiC and GaNSafe devices.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content

The post Data center solutions take center stage at APEC 2025 appeared first on EDN.

STM32CubeProgrammer 2.18: Improving the “flow” in “workflow”

ELE Times - Thu, 03/20/2025 - 08:19

Author: STMicroelectronics

STM32CubeProgrammer 2.18 brings new features to improve our developers’ experience: as we close 2024, flashing and debugging STM32 microcontrollers is more straightforward and intuitive. The new software leverages STM32 security firmware update (root security system extension binaries), helps change multiple option bytes more efficiently through a synthetic view, and makes it easier to port user configuration settings. It is, therefore, the most user-friendly version yet, as it aims to make development on STM32 feel less like work and more like flow.

What’s new in STM32CubeProgrammer 2.18? New MCU Support

While nearly every version of STM32CubeProgrammer comes with new MCU support, 2.18 is particularly noteworthy for the number of added devices. Users can now work with the STM32WL3 announced just a few weeks ago, the STM32N6 launched a few days ago, and the new STM32C0 devices with 64 KB and 256 KB of flash.

STM32CubeProgrammer also brings additional feature support for the STM32H7R3/7S3/7R7/7S7, all STM32 MPUs, and the STM32U5. For instance, the STM32H7R/S MCUs can now perform Secure Firmware Installation, while the STM32MP25 gets a GUI to manage PMIC registers and export settings to a binary file, which makes porting them to another project a breeze. And the STM32U5 can now restore its option byte configuration to factory settings if developers make an error that gets them stuck.

New improvements to the user experience

ST also continues to increase the number of supported features when using the SEGGER J-Link probe and flasher. In version 2.18, STM32CubeProgrammer adds the ability to securely install the Bluetooth stack on an STM32WB via a J-Link probe. Hence, developers can use their SEGGER tool for more use cases, making these features more widespread.

We are also introducing new improvements to the user experience, such as a project mode that allows users to save and restore configuration and connection settings, option byte values, firmware lists, external flash loaders, security firmware updates (root security system extension binaries), stack install settings for the STM32WB, and automatic mode parameters. In a nutshell, we want developers to collaborate more efficiently by importing and exporting major project elements so they can focus on their code rather than ticking boxes and applying the same settings repeatedly.

STM32CubeProgrammer 2.18 also adds a new synthetic option byte view to see and edit multiple option bytes on a single row instead of having to scroll through detailed lists. For expert users who know exactly what they want to do, this synthetic view makes changing an option byte a lot quicker. Finally, to facilitate updates to RSSe binaries, STM32HSM-V2 personalization files, and option bytes templates, these elements are now delivered separately in the X-CUBE-RSSe expansion package supported by both the STM32CubeProgrammer and Trusted Package Creator tools. Consequently, these elements are no longer part of the latest version of STM32CubeProgrammer and should be downloaded separately.

What is STM32CubeProgrammer? An STM32 flasher and debugger

At its core, STM32CubeProgrammer helps debug and flash STM32 microcontrollers. As a result, it includes features that optimize these two processes. For instance, version 2.6 introduced the ability to dump the entire register map and edit any register on the fly. Previously, changing a register’s value meant changing the source code, recompiling it, and flashing the firmware. Testing new parameters or determining if a value is causing a bug is much simpler today. Similarly, engineers can use STM32CubeProgrammer to flash all external memories simultaneously. Traditionally, flashing the external embedded storage and an SD card demanded developers launch each process separately. STM32CubeProgrammer can do it in one step.

Another challenge for developers is parsing the massive amount of information passing through STM32CubeProgrammer. Anyone who flashes firmware knows how difficult it is to track all logs. Hence, we brought custom traces that allow developers to assign a color to a particular function. It ensures developers can rapidly distinguish a specific output from the rest of the log. Debugging thus becomes a lot more straightforward and intuitive. Additionally, it can help developers coordinate their color scheme with STM32CubeIDE, another member of our unique ecosystem designed to empower creators.

What are some of its key features? New MCU support

Most new versions of STM32CubeProgrammer support a slew of new MCUs. For instance, version 2.16 brought compatibility with the 256 KB version of the STM32U0s. The device was the new ultra-low power flagship model for entry-level applications thanks to a static power consumption of only 16 nA in standby. STM32CubeProgrammer 2.16 also brought support for the 512 KB version of the STM32H5, and the STM32H7R and STM32H7S, which come with less Flash so integrators that must use external memory anyway can reduce their costs. Put simply, ST strives to update STM32CubeProgrammer as rapidly as possible to ensure our community can take advantage of our newest platforms rapidly and efficiently.

SEGGER J-Link probe support

To help developers optimize workflow, we’ve worked with SEGGER to support the J-Link probe fully. This means that the hardware flasher has access to features that were previously only available on an ST-LINK module. For instance, the SEGGER system can program internal and external memory or tweak the read protection level (RDP). Furthermore, using the J-Link with STM32CubeProgrammer means developers can view and modify registers. And since version 2.17, we added the ability to generate serial numbers and automatically increment them within STM32CubeProgrammer, thus hastening the process of flashing multiple STM32s in one batch.

We know that many STM32 customers use the SEGGER probe because it enables them to work with more MCUs, it is fast, or they’ve adopted software by SEGGER. Hence, STM32CubeProgrammer made the J-Link vastly more useful, so developers can do more without leaving the ST software.

Exporting option bytes and editing memory fields

Other quality-of-life improvements aim to make STM32CubeProgrammer more intuitive. For instance, it is now possible to export an STM32’s option bytes. Very simply, they are a way to store configuration options, such as read-out protection levels, watchdog settings, power modes, and more. The MCU loads them early in the boot process, and they are stored in a specific part of the memory that’s only accessible by debugging tools or the bootloader. By offering the ability to export and import option bytes, STM32CubeProgrammer enables developers to configure MCUs much more easily. Similarly, version 2.17 can now edit memory fields in ASCII to make certain sections a lot more readable.

Automating the installation of a Bluetooth LE stack

Until now, developers updating their Bluetooth LE wireless stack had to figure out the address of the first memory block to use, which varied based on the STM32WB and the type of stack used. For instance, installing the basic stack on the STM32WB5x would start at address 0x080D1000, whereas a full stack on the same device would start at 0x080C7000, and the same package starts at 0x0805A000 on the STM32WB3x with 512 KB of memory. Developers often had to find the start address in STM32CubeWB/Projects/STM32WB_Copro_Wireless_Binaries. The new version of STM32CubeProgrammer comes with an algorithm that determines the right start address based on the current wireless stack version, the device, and the stack to install.
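
The gist of that automation can be pictured as a lookup keyed on device and stack type. The sketch below is purely illustrative: it contains only the three example addresses quoted above, whereas the real tool also accounts for the installed stack version and many more device variants.

```python
# Illustrative only -- these are just the example addresses quoted in the text;
# STM32CubeProgrammer's actual algorithm covers many more devices and also
# factors in the currently installed wireless stack version.
STACK_START = {
    ("STM32WB5x", "basic"): 0x080D1000,
    ("STM32WB5x", "full"):  0x080C7000,
    ("STM32WB3x_512K", "full"): 0x0805A000,
}

def stack_start_address(device: str, stack: str) -> int:
    """Return the flash address where the chosen wireless stack should start."""
    try:
        return STACK_START[(device, stack)]
    except KeyError:
        raise ValueError(f"no known start address for {device}/{stack}") from None

print(hex(stack_start_address("STM32WB5x", "full")))  # 0x80c7000
```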

A portal to security on STM32

Readers of the ST Blog know STM32CubeProgrammer as a central piece of the security solutions present in the STM32Cube ecosystem. The utility comes with Trusted Package Creator, which enables developers to upload an OEM key to a hardware secure module and to encrypt their firmware using this same key. OEMs then use STM32CubeProgrammer to securely install the firmware onto the STM32 microcontroller via secure firmware install (SFI). Developers can even use an I2C or SPI interface, which gives them greater flexibility. Additionally, the STM32H735, STM32H7B, STM32L5, STM32U5, and STM32H5 also support external secure firmware install (SFIx), meaning that OEMs can flash the encrypted binary on memory modules outside the microcontroller.

Secure Manager

Secure Manager is officially supported since STM32CubeProgrammer 2.14 and STM32CubeMX 1.13. Currently, the feature is exclusive to our new high-performance MCU, the STM32H573, which supports a secure ST firmware installation (SSFI) without requiring a hardware secure module (HSM). In a nutshell, it provides a straightforward way to manage the entire security ecosystem on an STM32 MCU thanks to binaries, libraries, code implementations, documentation, and more. Consequently, developers enjoy turnkey solutions in STM32CubeMX while flashing and debugging them with STM32CubeProgrammer. It is thus an example of how STM32H5 hardware and Secure Manager software come together to create something greater than the sum of its parts.

Other security features for the STM32H5

STM32CubeProgrammer enables many other security features on the STM32H5. For instance, the MCU now supports secure firmware installation on internal memory (SFI) and an external memory module (SFIx), which allows OEMs to flash encrypted firmware with the help of a hardware secure module (HSM). Similarly, it supports certificate generation on the new MCU when using Trusted Package Creator and an HSM. Finally, the utility adds SFI and SFIx support on STM32U5s with 2 MB and 4 MB of flash.

Making SFI more accessible

Since version 2.11, STM32CubeProgrammer has received significant improvements to its secure firmware install (SFI) capabilities. For instance, in version 2.15, ST added support for the STM32WBA5. Additionally, we added a graphical user interface highlighting addresses and HSM information. The GUI for Trusted Package Creator also received a new layout under the SFI and SFIx tabs to expose the information needed when setting up a secure firmware install. Trusted Package Creator also got a graphical representation of the various option bytes to facilitate their configuration.

Secure secret provisioning for STM32MPx

Since 2.12, STM32CubeProgrammer has had a new graphical user interface to help developers set up parameters for the secure secret provisioning available on STM32MPx microprocessors. The mechanism has similarities with the secure firmware install available on STM32 microcontrollers. It uses a hardware secure module to store encryption keys and uses secure communication between the flasher and the device. However, the nature of a microprocessor means more parameters to configure. STM32CubeProgrammer’s GUI now exposes those settings, previously available only in the CLI version of the utility, to expedite workflows.

Double authentication

Since version 2.9, the STM32CubeProgrammer supports a double authentication system when provisioning encryption keys via JTAG or a Boot Loader for the Bluetooth stack on the STM32WB. Put simply, the feature enables makers to protect their Bluetooth stack against updates from end-users. Indeed, developers can update the Bluetooth stack with ST’s secure firmware if they know what they are doing. However, a manufacturer may offer a particular environment and, therefore, may wish to protect it. As a result, the double authentication system prevents access to the update mechanism by the end user. ST published the application note AN5185 to offer more details.

PKCS#11 support

Since version 2.9, STM32CubeProgrammer supports PKCS#11 when encrypting firmware for the STM32MP1. The Public-Key Cryptography Standards (PKCS) 11, also called Cryptoki, is a standard that governs cryptographic processes at a low level. It is gaining popularity as APIs help embedded system developers exploit its mechanisms. On an STM32MP1, PKCS#11 allows engineers to segregate the storage of the private key and the encryption process for the secure secret provisioning (SSP).

SSP is the equivalent of a Secure Firmware Install for MPUs. Before sending their code to OEMs, developers encrypt their firmware with a private-public key system with STM32CubeProgrammer. The IP is thus unreadable by third parties. During assembly, OEMs use the provided hardware secure module (HSM) containing a protected encryption key to load the firmware that the MPU will decrypt internally. However, until now, developers encrypting the MPU’s code had access to the private key. The problem is that some organizations must limit access to such critical information. Thanks to the new STM32CubeProgrammer and PKCS#11, the private key remains hidden in an HSM, even during the encryption process by the developers.

Supporting new STM32 MCUs STM32C0, STM32MP25, and STM32WB05/6/7

Since version 2.17, STM32CubeProgrammer supports STM32C0s with 128 KB of flash. It also recognizes the STM32MP25, which includes a 1.35-TOPS NPU, and all the STM32WB0s, including the STM32WB05, STM32WB05xN, STM32WB06, and STM32WB07. In the latter case, we brought support only a few weeks after their launch, showing that STM32CubeProgrammer keeps up with the latest releases to ensure developers can flash and debug their code on the newest STM32s as soon as possible.

Access to the STM32MP13’s bare metal

Microcontrollers demand real-time operating systems because of their limited resources, and event-driven paradigms often require a high level of determinism when executing tasks. Conversely, microprocessors have a lot more resources and can manage parallel tasks better, so they use a multitasking operating system, like OpenSTLinux, our embedded Linux distribution. However, many customers familiar with the STM32 MCU world have been asking for a way to run an RTOS on our MPUs as an alternative. In a nutshell, they want to enjoy the familiar ecosystem of an RTOS and the optimizations that come from running bare metal code while enjoying the resources of a microprocessor.

Consequently, we are releasing today STM32CubeMP13, which comes with the tools to run a real-time operating system on our MPU. We go into more detail about what’s in the package in our STM32MP13 blog post. Additionally, to make this initiative possible, ST updated its STM32Cube utilities, such as STM32CubeProgrammer. For instance, we had to ensure that developers could flash the NOR memory. Similarly, STM32CubeProgrammer enables the use of an RTOS on the STM32MP13 by supporting a one-time programmable (OTP) partition.

Traditionally, MPUs can use a bootloader, like U-Boot, to load the Linux kernel securely and efficiently. It thus serves as the ultimate first step in the boot process, which starts by reading the OTP partition. Hence, as developers move from a multitasking OS to an RTOS, it was essential that STM32CubeProgrammer enable them to program the OTP partition to ensure that they could load their operating system. The new STM32CubeProgrammer version also demonstrates how the ST ecosystem works together to release new features.

STM32WB and STM32WBA support

Since version 2.12, STM32CubeProgrammer has brought numerous improvements to the STM32WB series, which is increasingly popular in machine learning applications, as we saw at electronica 2022. Specifically, the ST software brings new graphical tools and an updated wireless stack to assist developers. For instance, the tool has more explicit guidelines when encountering errors, such as when developers try to update a wireless stack with the anti-rollback activated but forget to load the previous stack. Similarly, new messages will ensure users know if a stack version is incompatible with a firmware update. Finally, STM32CubeProgrammer provides new links to download STM32WB patches and get new tips and tricks so developers don’t have to hunt for them.

Similarly, STM32CubeProgrammer supports the new STM32WBA, the first wireless Cortex-M33. Made official a few months ago, the MCU opens the way for Bluetooth Low Energy 5.3 and SESIP Level 3 certification. The MCU also has a more powerful RF front end that can reach up to +10 dBm of output power to create a more robust signal.

STM32H5 and STM32U5

The support for STM32H5 began with STM32CubeProgrammer 2.13, which added compatibility with MCUs, including anything from 128 KB up to 2 MB of flash. Initially, the utility brought security features like debug authentication and authentication key provisioning, which are critical when using the new life management system. The utility also supported key and certificate generation, firmware encryption, and signature. Over time, ST added support for the new STM32U535 and STM32U545 with 512 KB and 4 MB of flash. The MCUs benefit from RDP regression with a password to facilitate developments and SFI secure programming.

Additionally, STM32CubeProgrammer includes an interface for read-out protection (RDP) regression with a password for STM32U5xx. Developers can define a password and move from level 2, which turns off all debug features, to level 1, which protects the flash against certain reading or dumping operations, or to level 0, which has no protections. It will thus make prototyping vastly simpler.

STLINK-V3PWR

In many instances, developers use an STLINK probe with STM32CubeProgrammer to flash or debug their device. Hence, we quickly added support for our latest STLINK-V3PWR probe, the most extensive source measurement unit and programmer/debugger for STM32 devices. If users want to see energy profiles and visualize the current draw, they must use STM32CubeMonitor-Power. However, STM32CubeProgrammer will serve as an interface for all debug features. It can also work with all the probe’s interfaces, such as SPI, UART, I2C, and CAN.

Script mode

The software includes a command-line interface (CLI) to enable the creation of scripts. Since the script manager is part of the application, it doesn’t depend on the operating system or its shell environment. As a result, scripts are highly sharable. Another advantage is that the script manager can maintain connections to the target. Consequently, STM32CubeProgrammer CLI can keep a connection live throughout a session without reconnecting after every command. It can also handle local variables and even supports arithmetic or logic operations on these variables. Developers can thus create powerful macros to automate complex processes. To make STM32CubeProgrammer CLI even more powerful, the script manager also supports loops and conditional statements.

A unifying experience

STM32CubeProgrammer aims to unify the user experience. ST brought all the features of utilities like the ST-LINK Utility, DFUs, and others to STM32CubeProgrammer, which became a one-stop shop for developers working on embedded systems. We also designed it to work on all major operating systems and even embedded OpenJDK8-Liberica to facilitate its installation. Consequently, users do not need to install Java themselves and struggle with compatibility issues before experiencing STM32CubeProgrammer.

Qt 6 support

Since STM32CubeProgrammer 2.16, the ST utility uses Qt 6, the framework’s latest version. Consequently, STM32CubeProgrammer no longer runs on Windows 7 and Ubuntu 18.04. However, Qt 6 patches security vulnerabilities, brings bug fixes, and comes with significant quality-of-life improvements.

 

The post STM32CubeProgrammer 2.18: Improving the “flow” in “workflow” appeared first on ELE Times.

PSU exploded

Reddit:Electronics - Thu, 03/20/2025 - 01:36
PSU exploded

Took this out of a unit cause it wasn’t turning on, flipped it over and multiple resistors and caps were gone. Most likely a power surge. Thought would be interesting to post cause don’t see this every day

submitted by /u/STUFFY69420
[link] [comments]

NUBURU eliminates long-term indebtedness and makes $5.15m strategic investment in Supply@ME Capital

Semiconductor today - Wed, 03/19/2025 - 21:13
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and develops and manufactures high-power industrial blue lasers — has announced a significant strategic investment in Supply@ME Capital Plc (SYME), a fintech platform focused on Inventory Monetisation solutions for manufacturing and trading companies. The strategic relationship aligns with NUBURU’s transformation plan as it seeks to build on its existing technology, while diversifying its assets in alignment with its announced growth strategy...

ROHM highlights Power Eco Family power semiconductor brand

Semiconductor today - Wed, 03/19/2025 - 19:44
ROHM Co Ltd of Kyoto, Japan has announced its commitment to addressing social challenges through electronics by focusing on the development of power semiconductors, which play a critical role in improving efficiency in high-power applications. Under the ‘Power Eco Family’ brand, ROHM offers comprehensive power semiconductor technologies and solutions that contribute to building a sustainable ecosystem...

Nexperia launches 1200V SiC MOSFETs in top-side-cooled X.PAK packages

Semiconductor today - Wed, 03/19/2025 - 14:57
Discrete device designer and manufacturer Nexperia of Nijmegen, the Netherlands (which operates wafer fabs in Hamburg, Germany, and Hazel Grove Manchester, UK) has introduced a range of highly efficient and robust industrial-grade 1200V silicon carbide (SiC) MOSFETs with what is claimed to be industry-leading temperature stability in innovative X.PAK surface-mount (SMD) top-side-cooled packaging technology. With its compact form factor of 14mm x 18.5mm, the package combines the assembly benefits of SMD with the cooling efficiency of through-hole technology, ensuring optimal heat dissipation...

Disposable vapes: Unnecessary, excessive waste in cylindrical shapes

EDN Network - Wed, 03/19/2025 - 14:32

My recent teardown of a rechargeable vape device was…wow…popular. I suspected upfront that it might cultivate a modicum of incremental traffic from the vape-using (and vape-curious) general public, but…wow. This long-planned follow-up focuses on non-rechargeable (i.e., disposable) vape counterparts, in part fueled by my own curiosity as to their contents but more generally and predominantly driven by my long-standing bleeds-green environmental outrage.

Here’s an example of what I mean, showcased in a recent Slashdot post that highlighted a writeup in The Guardian:

Thirteen vapes are thrown away every second in the UK — more than a million a day — leading to an “environmental nightmare,” according to research.

There has also been a rise in “big puff” vapes which are bigger and can hold up to 6,000 puffs per vape, with single use vapes averaging 600. Three million of these larger vapes are being bought every week according to the research, commissioned by Material Focus, and conducted by Opinium. 8.2 million vapes are now thrown away or recycled incorrectly every week.

From June 2025 it will be illegal to sell single-use vapes, a move designed to combat environmental damage and their widespread use by children. Vapes will only be allowed to be sold if they are rechargeable or contain a refillable cartridge.

But all types of vape contain lithium-ion batteries which are dangerous if crushed or damaged because they can cause fires in bin lorries or waste and recycling centres. These fires are on the rise across the UK, with an increase last year of 71% compared with 2022.

I have (at least) two questions:

  • If there are a million toxic chemical- and metal-leaching vapes headed to landfills (if we’re lucky; many, more likely, are sent directly into the water table via casual, irresponsible discard wherever it’s convenient for the owner to toss ‘em) in the UK alone, what’s that number look like when extrapolated to a worldwide count? Truthfully, from a blood pressure standpoint, I’m not sure I want to know the answer to that one.
  • And why are vapes that are “rechargeable or contain a refillable cartridge” (bolded emphasis mine) excluded from the upcoming UK ban? Why can’t (and shouldn’t) it instead be only those that are “rechargeable and contain a refillable cartridge”?

Rant off. One of the comments I posted as follow-up to last November’s initial entry in this vape-teardown series pointed readers to near-coincident published related coverage in Ars Technica:

Disposable vapes are indefensible. Many, or maybe most, of them contain rechargeable lithium-ion batteries, but manufacturers prefer to sell new ones. More than 260 million vape batteries are estimated to enter the trash stream every year in the UK alone. Vapers and vape makers are simply leaving an e-waste epidemic to the planet’s future residents to sort out.

To make a point about how wasteful this practice is—and to also make a pretty rad project and video—Chris Doel took 130 disposable vape batteries (the bigger “3,500 puff” types with model 20400 cells) found littered at a music festival and converted them into a 48-volt, 1,500-watt e-bike battery, one that powered an e-bike with almost no pedaling more than 20 miles.

The accompanying video is well worth your viewing time, IMHO.

and gave me the confidence to attempt my own teardown of conceptually similar vape devices, since Doel had confidently just ripped off the tip and back ends to get to their insides. Here’s the implement of destruction that I personally used:

And here are today’s victims, extracted from the trash as was the case with their rechargeable predecessor, and as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (not to mention roll-away prevention purposes):

The upper one is actually (supposedly, although there are still loopholes, apparently) no longer available in the US. It’s the “4000 puff” Noms X product variant (and Mojito Mint flavor) of the Esco Bars brand, manufactured by the Chinese company Shenzhen Innokin Technology. And no, I have no idea what “Pastel Cartel” means. The lower vape is Mr Fog’s “2000 puffs” Max Pro model (and Raspberry Grape Black Currant flavor).

Here are their respective tips:

And their bottoms:

The black-color bottom end of the Esco Bars vape is fixed in position; note the two holes for incoming-airflow purposes. You’ll shortly see what secondary function the one in the middle also serves; that said, I’m not sure of the purpose of the smaller, offset second one. The white-color end of the Mr Fog vape, conversely, can be rotated to user-adjust the airflow. The two vents are on the sides of the end piece; here’s how airflow adjustment operates:

and briefly jumping ahead in time mid-teardown, here’s how it’s implemented:

Let’s start the disassembly process with the Esco Bars device, as previously mentioned by wrenching the bottom piece off with my pliers (see what I did there?).

That black rectangular spongy piece went flying when I pulled the bottom piece off, but I’m guessing from the lingering indentations that it normally sits in-between that thing that looks like a microphone (and fits inside the circular middle portion of the bottom piece) and the battery. And about that “thing that looks like a microphone”…I was initially a bit flummoxed when I saw it (no, I never thought it was actually a microphone, although other folks were amusingly-to-me apparently convinced otherwise), until I realized that neither vape has an on-off switch. Instead, what you do to “turn them on” (i.e., power up the heating coil) is to suck on the tip, which vapers refer to as a “draw”.

This “thing that looks like a microphone”, apparently, is a “draw sensor”; it detects the resultant user-generated airflow that’s initiated from the bottom and (as is already obvious even with the battery still in place) passes from there through the gap between the battery and vape body. This Quora thread has all the details, including pictures of a sensor that looks just like the one in the Esco Bars vape (and the Mr Fog one, for that matter, prematurely ruining the surprise…sorry). I’m guessing that the red and black wires route to the sensor from the battery, and the blue one carries a signal sent by the sensor to the heating coil when airflow is detected.

By repeatedly shaking the vape device (with a foam cushion underneath, in case the contents went flying) I got the battery out of the case far enough:

that I was then able to get a grip on it with my fingers and pull it the rest of the way out:

The remainder of the internals remained stubbornly stuck at the rear end of the tube until I started twisting on the tip with the wrench:

At which point the translucent tube fell out the bottom, too. Disgusting (and oily, too), huh?

From my research (I’ve learned more than I ever wanted to about vapes the past 24 hours or so), inside the plastic tube are apparently nicotine salts, soaked in the flavored vape juice. Here’s the entirety of the insides, stretched out:

And here’s what you’ve all been waiting for, the battery specs, 3.7V and 5.55 Wh:

Now for the Mr Fog vape. Again, I started with the white bottom piece, which initially didn’t get me very far (although look; another “microphone”):

So, I switched to the tip, which didn’t get me much further along…and yuck, again:

Back to the bottom for more twisting, this time of the clear plastic piece that as I showed you earlier, the white bottom piece fits around. That’s better:

Once again, a combination of shaking and two-finger pinching-and-pulling got the battery out:

But this time I had to then push from the top to get the rest out:

Greasy, smelly mission completed:

And the battery specs: once again 3.7 V, but this time only 4.07 Wh/1100 mAh, reflective of the Mr Fog vape’s comparative “half the puffs” estimate versus the Esco Bars alternative.
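
As a quick cross-check of the printed ratings (capacity in mAh is simply 1000 × Wh / V):

```python
# Converting both battery labels to a common unit, using the printed figures.
for name, watt_hours, volts in (("Esco Bars", 5.55, 3.7), ("Mr Fog", 4.07, 3.7)):
    print(f"{name}: {1000 * watt_hours / volts:.0f} mAh")
# Esco Bars: 1500 mAh, Mr Fog: 1100 mAh -- consistent with the 1100 mAh marking above.
```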

In closing, what most surprised me, I guess, is that neither of these vapes uses standard 18650 cells found in a diversity of other devices (although from some of my research, their limited spec’d peak output current capabilities might be a coil-heating hindrance or, worse, a thermal safety complication in this particular application), or even the less common 20400 ones showcased in the video at the beginning of this writeup. With that, I’ll wrap up, take a deep draw (of nicotine-free air, mind you) and await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Disposable vapes: Unnecessary, excessive waste in cylindrical shapes appeared first on EDN.

Nimy and Curtin University sign MoU to collaborate on gallium R&D and production

Semiconductor today - Wed, 03/19/2025 - 14:22
Mining exploration company Nimy Resources Ltd of Perth, Western Australia has signed a non-binding memorandum of understanding (MoU) with Perth-based Curtin University to collaborate on advancing gallium-related research, development and production. The strategic partnership aligns with Nimy’s aim to become a key supplier in the rapidly expanding global gallium market...

Advanced Packaging Solutions: Pushing the Limits of Semiconductor Performance

ELE Times - Wed, 03/19/2025 - 13:52
Introduction: The Paradigm Shift in Semiconductor Packaging

As Moore’s Law faces physical limitations, the semiconductor industry is increasingly turning to advanced packaging solutions to sustain performance gains. Traditional monolithic scaling is no longer viable for delivering the power efficiency and computational throughput required by next-generation applications like artificial intelligence (AI), high-performance computing (HPC), 5G, and edge computing. Instead, innovations in heterogeneous integration, 2.5D and 3D packaging, chiplet architectures, and fan-out wafer-level packaging (FOWLP) are redefining performance metrics.

This article provides an in-depth analysis of cutting-edge packaging technologies, their impact on semiconductor performance, and real-world case studies from leading industry players such as Broadcom, Nvidia, and GlobalFoundries.

The Evolution of Advanced Packaging Technologies
  1. 2.5D Integration: The Bridge Between Traditional and 3D Packaging

2.5D integration involves placing multiple semiconductor dies on a silicon interposer, allowing high-speed interconnections. Unlike conventional multi-chip modules (MCMs), 2.5D technology provides lower latency due to short interconnect distances, higher bandwidth through wide bus architectures, and reduced power consumption by eliminating long copper traces. These advantages make it an ideal solution for applications requiring high computational power and data transfer speeds.

Case Study: Broadcom’s 3.5D XDSiP for AI Acceleration

Broadcom recently introduced 3.5D Extended Data Scale in Package (XDSiP) technology, enhancing AI chip interconnectivity using TSMC’s advanced packaging techniques. With production shipments expected by 2026, Broadcom aims to support hyperscale cloud providers in meeting AI’s high bandwidth demands by leveraging this innovative packaging solution.

  2. 3D Stacking: The Revolution in Vertical Integration

Unlike 2.5D, 3D stacking vertically integrates multiple dies using Through-Silicon Vias (TSVs) and wafer-to-wafer bonding. This architecture significantly reduces data transmission delays, lowers power dissipation, and increases computational density. By enabling high-speed data transfer with minimal signal loss, 3D stacking is particularly useful for applications requiring fast processing speeds. Additionally, the smaller form factors allow for more compact semiconductor devices, while improved thermal efficiency is achieved through optimized heat dissipation layers.

Case Study: Nvidia’s CoWoS-L in AI Chips

Nvidia’s latest AI processor, Blackwell, utilizes Chip-on-Wafer-on-Substrate Large (CoWoS-L) technology, moving beyond traditional CoWoS-S to enhance interconnect performance. This advancement is part of Nvidia’s broader strategy to improve AI workload efficiency and silicon utilization, ensuring faster and more efficient data processing capabilities.

  3. Chiplet-Based Architectures: The Future of Modular Semiconductor Design

The industry is transitioning toward chiplet architectures, where small, specialized dies are interconnected within a package to increase performance flexibility and yield efficiency. Unlike monolithic designs, chiplets enable heterogeneous integration, allowing processors, memory, and accelerators to coexist within a single package. This approach reduces manufacturing costs by reusing tested chiplets while improving scalability by mixing process nodes within a package. Additionally, smaller die sizes contribute to better yield efficiency, ultimately enhancing semiconductor performance and reliability.
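
One way to see the yield argument is with a simple Poisson defect model, in which the probability that a die is defect-free falls exponentially with its area. The defect density and die areas in the sketch below are arbitrary assumptions for illustration, not industry figures.

```python
import math

# Poisson yield model: P(die is defect-free) = exp(-area * defect_density).
D0 = 0.001              # defects per mm^2 (assumed)
monolithic_area = 600.0 # mm^2, assumed large SoC
chiplet_area = 150.0    # mm^2, assumed chiplet (four of them replace the SoC)

print(f"monolithic die yield : {math.exp(-monolithic_area * D0):.1%}")  # ~54.9%
print(f"single chiplet yield : {math.exp(-chiplet_area * D0):.1%}")     # ~86.1%
# Because chiplets are tested and binned individually before packaging, a
# defect scraps only ~150 mm^2 of silicon instead of the full 600 mm^2 die.
```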

Case Study: AMD’s EPYC and Intel’s Meteor Lake

AMD and Intel have embraced chiplet designs to improve scalability in their high-performance processors. AMD’s EPYC server CPUs leverage multiple CCD (Core Complex Die) chiplets, while Intel’s Meteor Lake integrates different chiplets for CPU, GPU, and AI acceleration, demonstrating the advantages of modular semiconductor design.

  4. Fan-Out Wafer-Level Packaging (FOWLP): Enhancing Thermal and Electrical Performance

FOWLP extends the package beyond the die’s boundaries, increasing I/O density while maintaining a compact footprint. This method eliminates wire bonding, improving electrical and thermal properties. With higher bandwidth compared to traditional wire-bond packaging, FOWLP enhances signal integrity while providing better heat dissipation for high-power applications. Furthermore, reduced parasitic capacitance ensures minimal signal interference, making this packaging technique essential for next-generation semiconductor devices.

Case Study: Apple’s A-Series Processors

Apple extensively uses FOWLP in its A-series chips, ensuring high-performance computing in iPhones and iPads with minimized power loss and improved thermal control. By integrating this packaging solution, Apple enhances both power efficiency and processing capabilities, delivering seamless user experiences.

Impact of Advanced Packaging on Semiconductor Performance
  1. Performance Gains: Pushing Computational Boundaries

By reducing interconnect lengths and signal latency, advanced packaging significantly enhances processing speeds for AI and HPC applications. Improved memory bandwidth allows for faster data transfer, benefiting workloads such as AI model training and deep learning inference. Additionally, data center efficiency is greatly improved as power-hungry interconnect bottlenecks are minimized, ensuring higher computational throughput.

  2. Power Efficiency: Addressing Thermal Constraints

Advanced packaging solutions lower power consumption by optimizing shorter interconnect paths that reduce energy dissipation. Better thermal management is achieved using advanced cooling layers, preventing overheating issues in high-performance applications. The integration of energy-efficient AI accelerators, such as low-power chiplets, further enhances power efficiency, ensuring sustainable semiconductor performance.

  3. Miniaturization and Integration: The Path to More Compact Devices

With increasing demand for smaller form factors, advanced packaging enables higher transistor densities, improving device functionality. The integration of specialized components, such as RF, memory, and AI accelerators, allows for more efficient processing while maintaining compact device sizes. Heterogeneous system architectures facilitate multi-functional capabilities, paving the way for highly sophisticated semiconductor solutions.

Challenges in Advanced Packaging Adoption
  1. Manufacturing Complexity

The fabrication of interposers and TSVs in advanced packaging incurs high costs due to precision alignment requirements. Yield challenges arise as the complexity of packaging increases, necessitating stringent quality control measures to ensure production efficiency.

  2. Thermal Management Issues

As power density increases, overheating becomes a major challenge in advanced packaging. To counter this, new cooling solutions such as liquid and vapor chamber technologies are being explored to enhance heat dissipation and ensure thermal stability in high-performance devices.

  3. Design & Validation Bottlenecks

With the rise of chiplet-based designs, EDA tools need advancements to model complex architectures accurately. Testing complexity also increases due to heterogeneous integration, requiring innovative validation techniques to streamline semiconductor development.

Future Trends in Semiconductor Packaging
  1. Heterogeneous Integration at Scale

The future of semiconductor packaging lies in combining logic, memory, and RF components within a unified package. This integration will pave the way for neuromorphic and quantum computing applications, unlocking new possibilities in computational efficiency.

  2. Advanced Materials for Packaging

High-performance substrates, such as glass interposers, are gaining traction for improving signal integrity. Additionally, the development of low-k dielectrics is expected to reduce capacitance losses, further enhancing semiconductor performance.

  3. Standardization of Chiplet Interconnects

Industry efforts like UCIe (Universal Chiplet Interconnect Express) aim to create cross-compatible chiplet ecosystems, allowing seamless integration of different semiconductor components.

  4. AI-Driven Automation in Packaging Design

Generative AI algorithms are optimizing power, performance, and area (PPA) trade-offs, accelerating semiconductor design processes. AI-enabled defect detection and yield improvement strategies are also becoming integral to advanced packaging manufacturing.

Conclusion: The Road Ahead for Semiconductor Performance Enhancement

Advanced packaging is reshaping the future of semiconductor design, driving performance improvements across AI, HPC, and mobile computing. As the industry continues to innovate, overcoming challenges in manufacturing, thermal management, and validation will be crucial in sustaining growth. The next decade will witness a convergence of materials science, AI-driven automation, and heterogeneous integration, defining a new era of semiconductor technology.

The post Advanced Packaging Solutions: Pushing the Limits of Semiconductor Performance appeared first on ELE Times.

Data center power meets rising energy demands amid AI boom

EDN Network - Wed, 03/19/2025 - 08:59

Texas Instruments’ APEC-related releases are power management chips centered on supporting the AI-driven power demands in data centers. The releases include the first 48-V integrated hot-swap eFuse with power-path protection (TPS1685) and an integrated GaN power stage (gate driver + FET) in the industry-standard TOLL package.

In a conversation with Priya Thanigai, VP and Business Unit Manager of power switches at Texas Instruments, EDN obtained some insights on meeting the needs of next-generation racks demanding the 48-V architecture.

Spotlight on data centers

Hot topics at APEC have typically encompassed the use of wide-bandgap semiconductors like silicon carbide (SiC) and gallium nitride (GaN) to build higher-efficiency subsystems for the steady electrification of technologies. Electrified end applications have spanned from e-mobility to industrial processes, enabled by battery and smart-grid advancements.

Discussions this year have shifted more toward the power demands that generative AI has created for data centers. While much of the actual power consumption of these data centers remains undisclosed, it’s apparent that LLMs like ChatGPT and DeepSeek have created a substantial increase; U.S. data center electricity usage tripled from 2014 to 2023, according to the U.S. Department of Energy (DOE). That figure is anticipated to double or triple by 2028.

The International Energy Agency (IEA) also reported that data centers consumed ~1.4-1.7% of global electricity in 2022; this figure is also expected to double by 2026. According to the World Economic Forum, “the computational power needed for sustaining AI’s growth is doubling roughly every 100 days.”
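Taken at face value, the figures quoted above imply the following back-of-the-envelope growth rates (a quick arithmetic check only, using no numbers beyond those cited):

# Implied growth rates from the figures quoted above.
cagr_electricity = 3 ** (1 / 9) - 1        # usage tripled over 2014-2023, i.e., 9 years
annual_compute_growth = 2 ** (365 / 100)   # compute doubling roughly every 100 days

print(f"Implied U.S. data-center electricity CAGR: {cagr_electricity:.1%}")   # ~13% per year
print(f"Implied annual growth in AI compute demand: {annual_compute_growth:.1f}x")  # ~12.6x per year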

Going nuclear

Hyperscalers are also making their plans to sustain these energy demands more apparent. In September 2024, plans to recommission the Three Mile Island nuclear plant were made public, with a 20-year contract to help power Microsoft data centers. Other technology companies are following a similar nuclear path, augmenting power capabilities with small modular reactors (SMRs).

And as the semiconductor industry feverishly fabricates chips that can efficiently run these compute-intensive training tasks through software-hardware codesign, power demands continue to soar. Further into the future, these nuclear reactors could be paired with solid-state transformers to support data center processing.

The 48-V bus and beyond

The data center server room consists of a sea of IT racks supported by a sidecar holding hot-swappable power supply units (PSUs), which allow a PSU to be replaced or upgraded without shutting down the server (Figure 1). These PSUs support much higher power densities, moving from 6 kW with the 48-V bus to 100 MW with the 400-V bus.

Figure 1: Sidecar, IT rack, and supporting subsystems shown at the TI booth during APEC 2025. 

“While data centers have been ahead of the curve, cars are only now moving to 48 V,” said Thanigai. “But data centers have probably already been there for about a decade.” It’s just been very slow because earlier systems really didn’t need the compute power until LLMs exploded. Until then, it was only the high-end GPUs that needed that extra power at 48 V.

She mentioned how TI had been keeping a watchful eye on data centers’ relatively slow move from 12-V to 48-V products and how recent pressures have brought on that inflection point. “Now we’re seeing more native 48-V systems ship and we’re talking about 400-V already,” Thanigai said. “So the transition from 12 V to 48 V may have taken a decade to hit the inflection point but 48 V to 400 V will probably be shorter and sharper because of how much energy is needed by data centers.”
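The motivation for ever-higher bus voltages is straightforward conduction physics: for a fixed load power, current scales inversely with voltage, and I²R distribution loss with its square. The numbers in this sketch are illustrative assumptions (a 6-kW load and a 2-mΩ distribution path), not values from TI or from this article.

# Illustrative only: conduction loss in the same distribution path at different bus voltages.
power_w = 6000.0          # assumed 6-kW load
path_resistance = 0.002   # assumed 2-mOhm distribution path

for bus_v in (12.0, 48.0, 400.0):
    current = power_w / bus_v              # I = P / V
    loss = current ** 2 * path_resistance  # conduction loss = I^2 * R
    print(f"{bus_v:5.0f}-V bus: {current:6.1f} A, {loss:7.2f} W lost in the path")

At 12 V the same 6-kW load draws 500 A and burns roughly 500 W in this assumed path; at 48 V it draws 125 A and loses about 31 W; at 400 V, 15 A and well under 1 W.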

Moving from discretes to integrated eFuses

Power path protection is tied directly to PSU reliability and is therefore a critical aspect of ensuring zero downtime deployments. The 48-V eFuse is a successor to the popular 12-V eFuse category; the shift to 48 V allows users to scale power to beyond 6 kW. 

“If you’re looking at the power design transition, generally power architectures will begin with discretes at the start of any design because they want to get a good feel of how to build something,” explained Thanigai. The building blocks of power path protection generally include the power FET, a gate or voltage drive to drive it, and components like a soft-start capacitor to control the inrush, comparators, and current-sense elements.
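As a rough illustration of the soft-start function mentioned above: in a dv/dt-controlled hot-swap or eFuse design, the inrush current is approximately the downstream bulk capacitance times the output slew rate. The component values below are assumptions for illustration, not TPS1685 specifications.

# Minimal sketch of dv/dt-controlled inrush limiting (assumed values, not device specs).
bus_voltage = 48.0          # V
bulk_capacitance = 2.2e-3   # assumed 2.2 mF of downstream bulk capacitance
soft_start_time = 10e-3     # assumed 10-ms output voltage ramp

dv_dt = bus_voltage / soft_start_time      # output slew rate, V/s
inrush_current = bulk_capacitance * dv_dt  # I = C * dV/dt

print(f"Output slew rate: {dv_dt:.0f} V/s")
print(f"Inrush current:   {inrush_current:.1f} A, which must stay below the overcurrent threshold")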

Thanigai described the move toward more integration, where the hot-swap controller integrates the amplifiers, some of the protection features, and some of the smarts. However, an external FET and sensing element still remain.

“The last leg of the integration is eFuse where the FET, the controller, and all the smarts are in a single chip,” she said. “That’s a classic power design evolution, where you go from discrete to semi-integrated to fully integrated.” The TPS1685 eFuse includes protection features like rapid response to fault events with an integrated black box for fault logging. Then there is a user-configurable overcurrent blanking timer that avoids false tripping at peak inrush.

Advanced stacking for loads > 6 kW

Mismatches in on-state resistance (Rdson) due to PCB trace resistance and comparator thresholds can create false tripping (Figure 2). Conventional discrete designs require power architects to hand-calculate the margins to make sure the FETs are matched such that no single FET takes on more thermal stress than the others.

Figure 2: Discrete implementations require individual calculations per sense element and FET to account for mismatches at each node; in the eFuse approach, Rdson is instead actively adjusted via Vgs regulation, and equal steady-state current across all devices is achieved through path-resistance equalization. Source: Texas Instruments

The IP in the TPS1685 eFuse actively measures and monitors the thermal stress at various areas of the FET within each eFuse and balances current among them automatically through a single-wire protocol. The integration designates one eFuse as the primary controller that monitors total system current via the interconnected IMON pins, enabling active Rdson shifting to ensure the devices share current.
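A simple parallel-resistance model shows why unbalanced path resistance skews current sharing, and what active Rdson adjustment buys. The resistance values below are assumed for illustration only.

# Illustrative model of current sharing between two paralleled eFuse paths.
def share(total_current, r_paths):
    # Parallel paths see the same voltage drop, so each current is
    # inversely proportional to its path resistance.
    conductances = [1.0 / r for r in r_paths]
    total_g = sum(conductances)
    return [total_current * g / total_g for g in conductances]

total = 40.0  # A, two devices nominally carrying 20 A each
mismatched = share(total, [0.004, 0.005])  # assumed 4 mOhm vs 5 mOhm (Rdson + trace)
balanced = share(total, [0.005, 0.005])    # after active Rdson adjustment equalizes paths

print("Mismatched:", [f"{i:.1f} A" for i in mismatched])  # ~22.2 A vs ~17.8 A
print("Balanced:  ", [f"{i:.1f} A" for i in balanced])    # 20.0 A each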

“You can basically stack unlimited eFuses,” said Thanigai, “We’ve shown up to 12 operational eFuses on a customer board and each of them can do 1 kW (~ 50 V @ 20 A), so we easily reach the 5-10 kW that you see with systems nowadays. But we can scale higher than that since there’s no upper limit.”
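Using only the figures quoted above (roughly 1 kW, or about 50 V at 20 A, per eFuse), sizing the parallel stack reduces to simple division; the 120-A case matches the six-device board shown in Figure 3 below.

import math

# Quick sizing check using the quoted per-device figures.
per_device_current = 20.0  # A per eFuse, as quoted above

for load_current in (120.0, 200.0):
    n = math.ceil(load_current / per_device_current)
    print(f"{load_current:.0f}-A load -> {n} paralleled eFuses")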

Figure 3: Image of 6 eFuses stacked in parallel on the top and bottom of a PCB to support a maximum load current of 120 A. 

Moving toward 400 V

When asked about the move toward supporting 400-V bus architectures, Thanigai responded, “There’s two aspects in these eFuses.” There’s the pure analog power domain, which is the FET architectures, and then there’s the digital domain which embodies smarts around the FET, she added.

All of the digital IP TI has developed scales from 12 V to 48 V to 400 V; while this particular device includes 48-V power FETs, TI is preparing to scale it up to 400 V.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content

The post Data center power meets rising energy demands amid AI boom appeared first on EDN.

Lumentum chosen as NVIDIA silicon photonics ecosystem partner to advance AI networking at scale

Semiconductor today - Wed, 03/19/2025 - 00:15
Lumentum Holdings Inc of San Jose, CA, USA (which designs and makes optical and photonic products for optical networks and lasers for industrial and consumer markets) has been selected as a key contributor in NVIDIA’s silicon photonics ecosystem. Lumentum’s high-power, high-efficiency lasers have a crucial role in the development and deployment of new NVIDIA Spectrum-X Photonics networking switches...

Navitas exceeds new 80 PLUS ‘Ruby’ certification for highest level of efficiency in AI data-center power supplies

Semiconductor today - Tue, 03/18/2025 - 23:08
Gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA says that its portfolio of 3.2kW, 4.5kW and 8.5kW AI data-center power supply unit (PSU) designs exceed the new 80 PLUS ‘Ruby’ certification, focused on the highest level of efficiency for redundant server data-center PSUs...
