Microelectronics world news
AOS devices power 800-VDC AI racks

GaN and SiC power semiconductors from AOS support NVIDIA’s 800-VDC power architecture for next-gen AI infrastructure, enabling data centers to deploy megawatt-scale racks for rapidly growing workloads. Moving from conventional 54-V distribution to 800 VDC reduces conversion steps, boosting efficiency, cutting copper use, and improving reliability.

The company’s wide-bandgap semiconductors are well-suited for the power conversion stages in AI factory 800‑VDC architectures. Key device roles include:
- High-Voltage Conversion: SiC devices (Gen3 AOM020V120X3, topside-cooled AOGT020V120X2Q) handle high voltages with low losses, supporting power sidecars or single-step conversion from 13.8 kV AC to 800 VDC. This simplifies the power chain and improves efficiency.
- High-Density DC/DC Conversion: 650-V GaN FETs (AOGT035V65GA1) and 100-V GaN FETs (AOFG018V10GA1) convert 800 VDC to GPU voltages at high frequency. Smaller, lighter converters free rack space for compute resources and enhance cooling.
- Packaging Flexibility: 80-V and 100-V stacked-die MOSFETs (AOPL68801) and 100-V GaN FETs share a common footprint, letting designers balance cost and efficiency in secondary LLC stages and 54-V to 12-V bus converters. Stacked-die packages boost secondary-side power density.
AOS power technologies help realize the advantages of 800‑VDC architectures, with up to 5% higher efficiency and 45% less copper. They also reduce maintenance and cooling costs.
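The copper and efficiency claims follow directly from Ohm's law: at a fixed rack power, current scales inversely with bus voltage, and conduction loss with the square of current. A quick sketch with assumed round numbers (a 1-MW rack; these are illustrative figures, not AOS data):

```python
# Illustrative only: why raising the DC bus voltage cuts current,
# copper cross-section, and conduction loss at the same rack power.

def bus_current(power_w: float, voltage_v: float) -> float:
    """Current drawn from the distribution bus at a given power."""
    return power_w / voltage_v

RACK_POWER_W = 1_000_000  # assumed 1-MW rack

i_54 = bus_current(RACK_POWER_W, 54)    # ~18.5 kA
i_800 = bus_current(RACK_POWER_W, 800)  # 1.25 kA

# Conduction loss scales as I^2 * R, so for the same busbar resistance
# the loss ratio is (800/54)^2, roughly 220x. Equivalently, the copper
# cross-section can shrink for the same loss budget.
loss_ratio = (i_54 / i_800) ** 2
print(f"54-V bus current:  {i_54:,.0f} A")
print(f"800-V bus current: {i_800:,.0f} A")
print(f"I^2R loss ratio (54 V vs. 800 V): {loss_ratio:.0f}x")
```

The same scaling is why the industry's quoted copper savings depend on the assumed loss budget rather than being a single fixed number.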
The post AOS devices power 800-VDC AI racks appeared first on EDN.
Optical Tx tests ensure robust in-vehicle networks

Keysight’s AE6980T Optical Automotive Ethernet Transmitter Test Software qualifies optical transmitters in next-gen nGBASE-AU PHYs for IEEE 802.3cz compliance. The standard defines optical automotive Ethernet (2.5–50 Gbps) over multimode fiber, providing low-latency, EMI-resistant links with high bandwidth and lighter cabling. Keysight’s platform helps enable faster, more reliable in-vehicle networks for software-defined and autonomous vehicles.

Paired with Keysight’s DCA-M sampling oscilloscope and FlexDCA software, the AE6980T offers Transmitter Distortion Figure of Merit (TDFOM) and TDFOM-assisted measurements, essential for evaluating optical signal quality. Device debugging is simplified through detailed margin and eye-quality evaluations. The compliance application also automates complex test setups and generates HTML reports showing how devices pass or fail against defined limits.
AE6980T software provides full compliance with IEEE 802.3cz-2023, Amendment 7, and Open Alliance TC7 test house specifications. It currently supports 10-Gbps data rates, with 25 Gbps planned for the future.
For more information about Keysight in-vehicle network test solutions and their automotive use cases, visit Streamline In-Vehicle Networking.
The post Optical Tx tests ensure robust in-vehicle networks appeared first on EDN.
Gate drivers tackle 220-V GaN designs

Two half-bridge GaN gate drivers from ST integrate a bootstrap diode and linear regulators to generate high- and low-side 6-V gate signals. The STDRIVEG210 and STDRIVEG211 target systems powered from industrial or telecom bus voltages, 72-V battery systems, and 110-V AC line-powered equipment.

The high-side driver of each device withstands rail voltages up to 220 V and is easily supplied through the embedded bootstrap diode. Separate gate-drive paths can sink 2.4 A and source 1.0 A, ensuring fast switching transitions and straightforward dV/dt tuning. Both devices provide short propagation delays with 10-ns matching for low dead-time operation.
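The separate source/sink paths are what make dV/dt tuning straightforward: turn-on and turn-off gate resistors can be sized independently. A back-of-the-envelope sketch (illustrative only, not ST's datasheet procedure) that simply limits peak gate current to the driver's rated source and sink capability:

```python
# Illustrative first-pass gate-resistor sizing; ignores driver output
# impedance and the GaN FET's internal gate resistance (assumptions).

V_DRIVE = 6.0   # regulated gate voltage from the driver, per the article
I_SOURCE = 1.0  # max source (turn-on) current, A
I_SINK = 2.4    # max sink (turn-off) current, A

# Minimum external gate resistance keeping peak current within ratings.
r_on_min = V_DRIVE / I_SOURCE   # slower turn-on limits dV/dt
r_off_min = V_DRIVE / I_SINK    # stronger sink resists Miller turn-on
print(f"Min turn-on gate R:  {r_on_min:.1f} ohm")
print(f"Min turn-off gate R: {r_off_min:.1f} ohm")
```

The asymmetry (stronger sink than source) is typical for GaN drivers, since holding the gate low against Miller-coupled dV/dt transients matters more than turn-on speed.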
ST’s gate drivers support a broad range of power-conversion applications, including power supplies, chargers, solar systems, lighting, and USB-C sources. The STDRIVEG210 works with both resonant and hard-switching topologies, offering a 300-ns startup time that minimizes wake-up delays in burst-mode operation. The STDRIVEG211 adds overcurrent detection and smart shutdown functions for motor drives in tools, e-bikes, pumps, servos, and class-D audio systems.
Now in production, the STDRIVEG210 and STDRIVEG211 come in 5×4-mm, 18-pin QFN packages. Prices start at $1.22 each in quantities of 1000 units. Evaluation boards are also available.
The post Gate drivers tackle 220-V GaN designs appeared first on EDN.
ST unveils prototype power delivery system for NVIDIA’s 800VDC power architecture
MIT-spinout Vertical Semiconductor raises $11m in seed funding round led by Playground Global
Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive

A month and a few days ago, Apple dedicated an in-person event (albeit with the usual pre-recorded presentations) to launching its latest mainstream and Pro A19 SoCs and the various iPhone 17s containing them, along with associated smart watch and earbuds upgrades. And at the end of my subsequent coverage of Amazon and Google’s in-person events, I alluded to additional Apple announcements that, judging from both leaks (some even straight from the FCC) and historical precedents, might still be on the way.
Well, earlier today (as I write these words on October 15), at least some of those additional announcements just arrived, in the form of the new baseline M5 SoC and the various upgraded systems containing it. But this time, again following historical precedent, they were delivered only in press release form. Any conclusions you might draw as to the relative importance within Apple of smartphones versus other aspects of the overall product line are…well…

Looking at the historical trends of M-series SoC announcements, you’ll see that the initial >1.5-year latency between the baseline M1 (November 2020) and M2 (June 2022) chips subsequently shrank to a yearly (plus or minus a few months) cadence. To wit, since the M4 came out last May but the M5 hadn’t yet arrived this year, I was assuming we’d see it soon. Otherwise, its lingering absence would likely be reflective of troubles within Apple’s chip design team and/or longstanding foundry partner TSMC. And indeed, the M5 has finally shown up. But my concerns about development and/or production troubles still aren’t completely alleviated.

Let’s parse through the press release.
Built using third-generation 3-nanometer technology…
This marks the third consecutive generation of M-series CPUs manufactured on a 3-nm litho process (at least for the baseline M5…I’ll delve into higher-end variants next). Consider this in light of Wikipedia’s note that TSMC began risk production on its first 2 nm process mid-last year and was originally scheduled to be in mass production on 2 nm in “2H 2025”. Admittedly, there are 2.5 more months to go until 2025 is over, but Apple would have had to make its process-choice decision for the M5 many months (if not several years) in the past.
Consider, too, that the larger die size Pro and Max (and potentially also Ultra) variants of the M5 haven’t yet arrived. This delay isn’t without precedent; there was a nearly six-month latency between the baseline M4 and its Pro and Max variants, for example. That said, the M4 had shown up in early May, with the Pro and Max following in late October, so they all still arrived in 2024. And here’s an even more notable contrast: all three variants of the M3 were launched concurrently in late October 2023. Consider all of this in the light of persistent rumors that M5 Pro- and Max-based systems may not show up until spring-or-later 2026.
M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4.
Note that these Neural Accelerators are presumably different than those in the dedicated 16-core Neural Engine. The latter historically garnered the bulk of the AI-related press release “ink”, but this time it’s limited to terse “improved” and “faster” descriptions. What does this tell me?
- “Neural Accelerator” is likely a generic term reflective of AI-tailored shader and other functional block enhancements, analogous to the increasingly AI-optimized capabilities of NVIDIA’s various GPU generations.
- The Neural Engine, conversely, is (again, I’m guessing) largely unchanged here from the one in the M4 series, instead indirectly benefiting from a performance standpoint due to the boosted overall SoC-to-external memory bandwidth.
M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.
Core count proportions and totals both match those of the M4. Aside from potential “Neural Accelerator” tweaks such as hardware-accelerated instruction set additions (à la Intel’s MMX and SSE), I suspect they’re largely the same as the prior generation, with any performance uplift resulting from overall external memory bandwidth improvements. Speaking of which…
M5 also features…a nearly 30 percent increase in unified memory bandwidth to 153GB/s.
And later…
M5 offers unified memory bandwidth of 153GB/s, providing a nearly 30 percent increase over M4 and more than 2x over M1. The unified memory architecture enables the entire chip to access a large single pool of memory, which allows MacBook Pro, iPad Pro, and Apple Vision Pro to run larger AI models completely on device. It fuels the faster CPU, GPU, and Neural Engine as well, offering higher multithreaded performance in apps, faster graphics performance in creative apps and games, and faster AI performance running models on the Neural Accelerators in the GPU or the Neural Engine.
The enhanced memory controller is, I suspect, the nexus of overall M4-to-M5 advancements, as well as explaining why Apple’s still able to cost-effectively (i.e., without exploding the total transistor count budget) fabricate the new chip on a legacy 3-nm lithography. How did the company achieve this bandwidth boost? While an even wider bus width than that used with the M4 might conceptually provide at least part of the answer, it’d also both balloon the required SoC pin count and complicate the possible total memory capacity increments. I therefore suspect a simpler approach is at play. The M4 used 7500-Mbps LPDDR5X SDRAM, while the M4 Pro and Max leveraged the faster 8533-Mbps LPDDR5X speed bin. But if you look at Samsung’s website (for example), you’ll see an even faster 9600-Mbps speed bin listed. 9600 Mbps is 28% more than 7500 Mbps…voila, there’s your “nearly 30 percent increase”.
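The arithmetic checks out if you assume the bus width is unchanged. The 128-bit figure below is my assumption (consistent with the M4's reported 120 GB/s at 7500 MT/s), not an Apple spec:

```python
# Sanity-checking the bandwidth claim under an assumed 128-bit bus.

def bandwidth_gb_s(rate_mbps_per_pin: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return rate_mbps_per_pin * bus_bits / 8 / 1000

assumed_bus = 128  # bits (assumption: same width as the M4)
print(bandwidth_gb_s(7500, assumed_bus))   # 120.0 -> the M4's figure
print(bandwidth_gb_s(9600, assumed_bus))   # 153.6 -> the M5's "153 GB/s"
print(f"{9600 / 7500 - 1:.0%}")            # 28% -> "nearly 30 percent"
```

Both numbers line up with the press release using nothing but a speed-bin bump, which is why I lean toward this explanation over a wider bus.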
There’s one other specification, this time not found in the SoC press release but instead in the announcement for one of the M5-based systems, that I’d like to highlight:
…up to 2x faster storage read and write speeds…
My guess here is that Apple has done a proprietary (or not)-interface equivalent to the industry-standard PCI Express 4.x-to-5.x and UFS 4.x-to-5.x evolutions, which also tout doubled peak transfer rate speeds.
Speaking of speeds…keep in mind when reading about SoC performance claims that they’re based on the chip running at its peak possible clock cadence, not to mention when outfitted with maximum available core counts. An especially power consumption-sensitive tablet computer, for example, might clock-throttle the processor compared to the SoC equivalent in a mobile or (especially) desktop computer. Yield-maximization (translating into cost-minimization) “binning” aspirations are another reason why the SoC in a particular system configuration may not perform to the same level as a processor-focused press release might otherwise suggest. Such schemes are particularly easy for someone like Apple—who doesn’t publish clock speeds anyway—to accomplish.
And speaking of cost minimization, reducing the guaranteed-functional core counts on a chip can significantly boost usable silicon yield, too. To wit, about those M5-based systems…
11” and 13” iPad Pros
Last May’s M4 unveil marked the first time that an iPad, versus a computer, was the initial system to receive a new M-series processor generation. More generally, the fifth-gen iPad Pro introduced in April 2021 was the first iPad to transition from Apple’s A-series SoCs to the M-series (the M1, to be precise). This was significant because, up to that point, M-series chips had been positioned exclusively for computers, with A-series processors for iPhones and iPads.
This time, both the 11” and 13” iPad Pro get the M5, albeit with inconsistent core counts (and RAM allocations, for that matter) depending on the flash memory storage capacity and resultant price tag. From 9to5Mac’s coverage:
- 256GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 512GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 1TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
- 2TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
It bears noting that the 12 GByte baseline capacity is 4 GBytes above what baseline M4 iPad Pros came with a year-plus ago. Also, the disabled CPU core in the lower-end variants is one of the four performance cores; CPU efficiency core counts are the same across all models, as are—a pleasant surprise given historical precedents and a likely reflection of TSMC’s process maturity—the graphics core counts. And for the first time, a cellular-equipped iPad has switched from a Qualcomm modem to Apple’s own: the newest C1X, to be precise, along with the N1 for wireless communications, both of which we heard about for the first time just a month ago.
A brief aside: speaking of A-series to M-series iPad Pro transitions, mine is a second-generation 11” model (one of the fourth-generation iPad Pros) dating from March 2020 and based on the A12Z Bionic processor. It’s still running great, but I’ll bet Apple will drop software support for it soon (I’m frankly surprised that it survived this year’s iPadOS 26 cut, to be honest). My wife and I have a wedding anniversary next month. Then there’s Christmas. And my 60th birthday next May. So, if you’re reading this, honey…

The 14” MacBook Pro
This one was not-so-subtly foreshadowed by Apple’s marketing VP just yesterday. The big claim here, aside from the inevitable memory bandwidth-induced performance-boost predictions, is “phenomenal battery life of up to 24 hours” (your mileage may vary, of course). And it bears noting that, in today’s tariff-rife era, the $1599 entry-level pricing is unchanged from last year.
The Vision Pro
The underlying rationale for the performance boost is more obvious here; the first-generation model teased in June 2023 with sales commencing the following February was based on the three-generations-older M2 SoC. That said, given the rampant rumors that Apple has redirected its ongoing development efforts to smart glasses, I wonder how long we’ll be stuck with this second-generation evolutionary tweak of the VR platform. A redesigned headband promises a more comfortable wearing experience. Apple will also start selling accessories from Logitech (the Muse pencil, available now) and Sony (the PlayStation VR2 Sense controller, next month).
Anything else?
I should note, by the way, that the Beats Powerbeats Fit earbuds that I mentioned a month back, which had been teased over YouTube and elsewhere but were MIA at Apple’s event, were finally released at the end of September. And on that note, other products (some currently with evaporating inventories at retail, another common tipoff that a next-generation device is en route) are rumored candidates for near-future launch:
- Next-gen Apple TV 4K
- HomePod mini 2
- AirTag 2
- One (or multiple) new Apple Studio Display(s)
- (???)
We shall see. Until next time, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Amazon and Google: Can you AI-upgrade the smart home while being frugal?
- The transition to Apple silicon Arm-based computers
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
- Apple’s Spring 2024: In-person announcements no more?
- Apple’s “October Surprise”: the M3 SoC Family and the A17 Bionic Reprise
- Apple’s 2H 2025 announcements: Tariff-touched but not bound, at least for this round
The post Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive appeared first on EDN.
Stony Brook orders two CVD Equipment PVT150 systems for onsemi Silicon Carbide Crystal Growth Center
ESP32 project
Hello, a little update from my recent post. I tweaked a few things and organized it a bit better. I also added the remote control. If you could please check and review the boards, it would help me a lot. Thank you in advance.
Project Description – 24V DC Motor Drive System with BLE Remote Control
1. Overview
The project consists of a complete 24 V DC motor control system that integrates:
The system allows:
The remote control unit uses the Raytac MDBT42V (nRF52832) module to wirelessly transmit control commands (button presses) to the ESP32 receiver using Bluetooth Low Energy (BLE).
3.2 Hardware Design
Synaptics Launches New Multimodal Gen AI Processors for Smart IoT Edge Designs
ams OSRAM and Nichia expand their intellectual property collaboration
ams OSRAM extends CFO Rainer Irle’s contract until 2030
Photon Design showcasing laser simulation tool developments at Optica Laser Congress
Microchip’s SkyWire Tech Enables Nanosecond-Level Clock Sync Across Locations
To protect critical infrastructure systems, SkyWire technology enables highly scalable and precise time traceability to metrology labs
Network clocks are the backbone of critical infrastructure operations, with the precise alignment of clocks becoming increasingly important for data centers, power utilities, wireless and wireline networks, and financial institutions. For critical infrastructure operators to deploy timing architectures with reliability and resiliency, their clocks and timing references must be measured and verified to an authoritative time source such as Coordinated Universal Time (UTC). Microchip Technology announced its new SkyWire technology, a time measurement tool embedded in the BlueSky Firewall 2200, that is designed to measure, align and verify time to within nanoseconds even when clocks are long distances apart.
With the BlueSky GNSS Firewall 2200 and SkyWire technology, geographically dispersed timing systems can be compared to each other and compared to the time scale systems deployed at metrology labs within nanoseconds. Measurement of clock alignment and traceability to this level has typically only been done between metrology labs and scientific institutes. With Microchip’s solution, critical timing networks for air traffic control, transportation, public utilities and financial services can achieve alignment within nanoseconds between its clocks to protect their infrastructure no matter where the clocks are located.
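The article does not describe SkyWire's internals, but comparing widely separated clocks against an authoritative source is classically done with common-view measurements: each site logs its local clock's offset against the same shared reference (for example, a GNSS signal), and differencing the logs cancels the reference. A minimal sketch, with hypothetical measurement values:

```python
# Illustrative common-view clock comparison (not Microchip's actual
# method or API); measurement values below are hypothetical.

def common_view_offset(site_a_minus_ref_ns: float,
                       site_b_minus_ref_ns: float) -> float:
    """Offset of clock A relative to clock B; the shared reference cancels."""
    return site_a_minus_ref_ns - site_b_minus_ref_ns

# Hypothetical logs against the same satellite pass:
a_vs_ref = 37.4  # ns, site A clock minus reference
b_vs_ref = 12.1  # ns, site B clock minus reference
print(f"{common_view_offset(a_vs_ref, b_vs_ref):.1f} ns")  # A leads B by 25.3 ns
```

The appeal of this scheme is that neither site needs a direct link to the other, which is what makes nanosecond-level comparison across long distances scalable.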
“To ensure timing systems are delivering to stringent accuracy requirements, it’s important to measure and verify in an independent manner relative to UTC as managed by national laboratories and traceable to the Bureau International des Poids et Mesures (BIPM),” said Randy Brudzinski, corporate vice president of Microchip’s frequency and timing systems business unit. “With the new SkyWire technology solution, we’re making UTC more widely accessible so that large deployments of clocks can be independently measured and verified against each other across long distances.”
The concept originated as an extension to the National Institute of Standards and Technology’s (NIST’s) pre-existing service called Time Measurement and Analysis Service (TMAS), which is utilized by entities that are required to maintain an accurate local time standard. The BlueSky GNSS Firewall 2200 with SkyWire technology provides a Commercial Off-The-Shelf (COTS) product to enable critical infrastructure operators to connect with the NIST TMAS Data Service for large-volume clock deployments.
“At NIST, our goal is to enable the most accurate time to support our country’s infrastructure,” said Andrew Novick, engineer at NIST. “Our TMAS Data Service, in conjunction with commercial hardware, provides a scalable solution for anyone who needs traceable and accurate timing.”
Nations around the globe can replicate this solution using Microchip’s SkyWire technology capabilities within its TimePictra software suite, which delivers similar features and functionality as that provided by the NIST TMAS Data Service. Metrology labs, government agencies and enterprises worldwide can deploy TimePictra software suite and the BlueSky GNSS Firewall 2200 with SkyWire technology and have their own end-to-end solution for traceable time measurement, alignment and verification.
The post Microchip’s SkyWire Tech Enables Nanosecond-Level Clock Sync Across Locations appeared first on ELE Times.
Next Generation Hybrid Systems Transforming Vehicles
The global automotive industry is undergoing a fundamental transformation, moving from internal combustion engines (ICEs) to electric and hybrid vehicles that redefine mobility as sustainable, intelligent, and efficient.
This shift is not merely regulatory-driven; it’s fueled by a shared pursuit of carbon neutrality, cost-effectiveness, and consumer demand for cleaner mobility options.
Hybrid electric technology has proven to be the most practical bridge to date between traditional combustion and complete electrification. Providing the versatility of twin propulsion — electric motor and ICE — hybrid powertrains give the advantages of fuel economy, lower emissions, and a smoother transition for both consumers and manufacturers.
From regenerative braking to capture kinetic energy to AI-enabled energy management optimizing power delivery, hybrids are the sophisticated union of software and engineering. As countries pledge net-zero, and OEMs retool product strategies, hybrid technology is not merely a transition measure — it’s the strategic foundation of the auto decarbonization agenda.
Innovations Driving Hybrid Systems
- Solid-State & Next-Gen Lithium-Ion Batteries
Next-generation solid-state batteries hold the potential for greater energy density, quicker charging, and increased safety. Their ability to double the energy storage capacity and halve the charging time makes them a game-changer for hybrid and plug-in hybrid vehicles.
- Regenerative Braking & E-Axle Integration
Regenerative braking captures kinetic energy and converts it into electricity during braking, pumping it back into the battery. Coupled with e-axle technology, this integration optimizes drivetrain efficiency and performance.
- Lightweight Composites for Higher Efficiency
Advances in carbon-fiber-reinforced plastics and aluminum alloys allow automakers to shave weight, increase efficiency, and enhance range — all without sacrificing safety.
- AI-Powered Energy Management Systems
Artificial intelligence is now at the heart of hybrid optimization — learning driving habits, anticipating power needs, and controlling energy transfer between engine, motor, and battery for optimum efficiency.
Industry Insights: Mercedes-Benz on Hybrid Innovation
Rahul Kumar Shah, Senior Engineer at Mercedes-Benz Research & Development, outlines the engineering philosophy behind the company’s next-gen hybrid powertrains.
“At Mercedes-Benz, we view hybridisation not as an interim solution, but as a masterclass in energy orchestration. Our focus is on creating a seamless dialogue between the combustion engine and electric motor — ensuring the right power source is deployed at the right time to deliver maximum efficiency and a signature Mercedes driving experience.”
Optimizing Hybrid Powertrain Architectures:
“We have advanced from conventional parallel systems to sophisticated P2 and P3 architectures. By placing high-torque electric motors strategically within the drivetrain, we eliminate turbo lag and allow smaller, thermally efficient combustion engines to deliver spirited performance. Combined with predictive AI energy management, our vehicles decide in real time whether to operate in electric mode, recharge, or blend both power sources for optimal efficiency.”

Figure (1)

Figure (2)
The P2 Hybrid (first diagram) places the electric motor between the engine and transmission, allowing it to drive the wheels directly alongside the engine or independently. The P3 Hybrid (second diagram) places the electric motor on the transmission output shaft, leveraging greater torque multiplication but coupling the electric drive directly to the output. Both are Parallel PHEVs using a battery and inverter to manage power flow to the wheels.
Regenerative Braking:
“Our eDrive motors can decelerate the car up to 3 m/s² using purely regenerative energy. Coupled with our ESP HEV system, regenerative and mechanical braking are blended seamlessly to ensure vehicle stability and natural pedal feel. Integration with navigation and radar allows the vehicle to preemptively harvest energy — effectively ‘sailing on electricity.’”

During regenerative braking, the wheels’ kinetic energy is converted by the motor (in generator mode) into electrical energy, which charges the battery. This process is triggered by a control signal from the brake pedal and ECU.
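The energy at stake is easy to bound: the recoverable energy per stop is a fraction of the vehicle's kinetic energy, E = ½mv². A rough sketch with assumed round numbers (these are illustrative figures, not Mercedes-Benz data):

```python
# Illustrative regenerative-braking energy estimate; mass, speed, and
# round-trip recovery efficiency below are assumptions.

def recoverable_energy_wh(mass_kg: float, speed_kmh: float,
                          recovery_eff: float) -> float:
    """Energy returned to the battery from one stop, in watt-hours."""
    v = speed_kmh / 3.6                      # km/h -> m/s
    kinetic_j = 0.5 * mass_kg * v ** 2       # E = 1/2 m v^2
    return kinetic_j * recovery_eff / 3600   # J -> Wh

# Assumed: 2,000-kg hybrid braking from 100 km/h at 70% recovery efficiency
print(f"{recoverable_energy_wh(2000, 100, 0.70):.0f} Wh per stop")
```

A few hundred such stops in city driving add up to a meaningful share of a typical hybrid battery's usable capacity, which is why blended braking is worth the control complexity.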
Thermal Management:
“Managing heat across the combustion engine, electric motor, and battery is essential. We maintain optimal battery temperatures between 20–40°C and reuse waste heat for cabin and coolant heating, reducing overall energy draw and improving efficiency.”
Solid-State Batteries:
“For hybrids, the real advantage lies in power density and durability. Solid-state cells can deliver and absorb charge much faster, enabling more efficient regenerative braking and smoother electric boosts. They are set to become the ultra-durable heart of next-generation hybrid powertrains.”
The Future of Hybrid Powertrains
The hybrid era is entering a smarter, cleaner, and more connected phase. Over the next decade, hybrid systems will evolve from being a bridge technology to a core pillar of sustainable mobility.
- Plug-in Hybrids Take Center Stage
Plug-in hybrids (PHEVs) will lead the transition, providing longer electric-only ranges and rapid charging. With bi-directional charging (V2G), they’ll also supply homes or return energy to the grid, making vehicles portable energy centers.
- Hydrogen Enters the Hybrid Mix
Hybrid powertrains assisted by hydrogen will appear, particularly in commercial and long-distance vehicles. Blending fuel cell stacks with electric drives, they will provide zero-emission mobility with rapid refueling and long range.
- Modular Electric Platforms
At-scale automakers are shifting to scalable modular architectures that integrate the battery, e-axle, and drive unit into flexible “skateboard” configurations. These platforms will reduce costs and enable software-defined performance updates through over-the-air upgrades.
- AI-Optimized Energy Management
Artificial intelligence will power real-time power delivery, anticipating traffic and terrain to balance efficiency with performance. Future hybrids will be able to learn, adjust, and self-optimize, combining intelligence with propulsion.
- Smart Materials & Circular Manufacturing
The hybrids of tomorrow will be lighter and cleaner — constructed from recycled composites, graphene-reinforced metals, and bio-based plastics. Closed-loop recycling will enable hybrid production to become more sustainable from start to finish.
Conclusion

Hybrid powertrains are no longer just a bridge; they’re becoming the cornerstone of an intelligent, networked mobility ecosystem.
As electrification grows and carbon-neutral goals firm up, hybrids will become smarter, self-tuning systems that efficiently couple combustion with electric precision.
Next-generation hybrid platforms will talk to smart grids, learn from the behavior of drivers, and self-regulate energy across varying propulsion sources from batteries to hydrogen cells. Optimization using AI, predictive maintenance, and cloud-based analytics will transform how vehicles operate, charge, and interact with their surroundings.
Hybrids aren’t just a bridge—they’re the bold intersection where combustion and electrification unite to rewrite the future.
The post Next Generation Hybrid Systems Transforming Vehicles appeared first on ELE Times.
TI launches power management devices for AI computing

Texas Instruments Inc. (TI) announced several power management devices and a reference design to help companies meet AI computing demands and scale power management architectures from 12 V to 48 V to 800 VDC. These products include a dual-phase smart power stage, a dual-phase smart power module for lateral power delivery, a gallium nitride (GaN) intermediate bus converter (IBC), and a 30-kW AI server power supply unit reference design.
“Data centers are very complex systems and they’re running very power-intensive workloads that demand a perfect balance of multiple critical factors,” said Chris Suchoski, general manager of TI’s data center systems engineering and marketing team. “Most important are power density, performance, safety, grid-to-gate efficiency, reliability, and robustness. These factors are particularly essential in developing next-generation, AI purpose-driven data centers, which are more power-hungry and critical today than ever before.”
(Source: Texas Instruments Inc.)
Suchoski describes grid-to-gate as the complete power path from the AC utility grid to the processor gates in the AI compute servers. “Throughout this path, it’s critical to maximize your efficiency and power density. We can help improve overall energy efficiency from the original power source to the computational workload,” he said.
TI is focused on helping customers improve efficiency, density, and security at every stage in the power data center by combining semiconductor innovation with system-level power infrastructure, allowing them to achieve high efficiency and high density, Suchoski said.
Power density and efficiency improvements
TI’s power conversion products for data centers address the need for increased power density and efficiency across the full 48-V power architecture for AI data centers. These include input power protection, 48-V DC/DC conversion, and high-current DC/DC conversion for the AI processor core and side rails. TI’s newest power management devices target these next-generation AI infrastructures.
One of the trends in the market is a move from single-phase to dual-phase power stages that enable higher current density for the multi-phase buck voltage regulators that power these AI processors, said Pradeep Shenoy, technologist for TI’s data center systems engineering and marketing team.
The dual-phase power stage has very high-current capabilities, 200-A peak, Shenoy said, and it is in a very small, 5 × 5-mm package that comes in a thermally enhanced package with top-side cooling, enabling a very efficient and reliable supply in a small area.
The CSD965203B dual-phase power stage claims the highest peak power density power stage on the market, with 100 A of peak current per phase, combining two power phases in a 5 × 5-mm quad-flat no-lead package. With this device, designers can increase phase count and power delivery across a small printed-circuit-board area, improving efficiency and performance.
Another related trend is the move to dual-phase power modules, Shenoy said. “These power modules combine the power stages with the inductors, all in a compact form factor.”
The dual-phase power module co-packages the power stages with other components on the bottom and the inductor on the top, and it offers both trans-inductor voltage regulator (TLVR) and non-TLVR options, he added. “They help improve the overall power density and current density of the solution with over a 2× reduction in size compared with discrete solutions.”
The CSDM65295 dual-phase power module delivers up to 180 A of peak output current in a 9 × 10 × 5-mm package. The module integrates two power stages and two inductors with TLVR options while maintaining high efficiency and reliable operation.
The GaN-based intermediate bus converter (IBC) achieves over 1.5 kW of output power with over 97.5% peak efficiency, and it also enables regulated output and active current sharing, Shenoy said. “This is important because as we see the power consumption and power loads are increasing in these data centers, we need to be able to parallel more of these IBCs, and so the current sharing helps make that very scalable and easy to use.”
The LMM104RM0 GaN converter module offers over 97.5% input-to-output power conversion efficiency and high light-load efficiency to enable active current sharing between multiple modules. It can deliver up to 1.6 kW of output power in a quarter-brick (58.4 × 36.8-mm) form factor.
TI also introduced a 39-kW dual-stage power supply reference design for AI servers that features a three-phase, three-level flying capacitor power-factor-correction converter paired with dual delta-delta three-phase inductor-inductor-capacitor converters. The power supply is configurable as a single 800-V output or separate output supplies.
30-kW HVDC AI data center reference design (Source: Texas Instruments Inc.)
TI also announced a white paper, “Power delivery trade-offs when preparing for the next wave of AI computing growth,” and its collaboration with Nvidia to develop power management devices to support 800-VDC power architectures.
The solutions will be on display at Open Compute Summit (OCP), Oct. 13–16, in San Jose, California. TI is exhibiting at Booth #C17. The company will also participate in technology sessions, including the OCP Global Summit Breakout Session and OCP Future Technologies Symposium.
The post TI launches power management devices for AI computing appeared first on EDN.
The progression of wafer sizes through the years at the fab I work at.
3-inch to 8-inch. The fab has been around since the ’60s. Currently 8-inch is our production size, but 6-inch wafers are still used in the company and float around as engineering wafers.
My humble workbench
My simple lab in my dungeon. I recently picked up the Kepco programmable power supply and Agilent 54622D oscilloscope from work; we’re moving buildings and they’re tossing a lot of stuff. I’m running an Intel NUC with Win11, an HP Slice with Fedora, an RPi 4B (in the 3D-printed green and black Fractal case) with RPi OS, and a Dogbone running Debian. It’s a very simple setup compared to a lot of yours, but I love it. A nice place to escape.
100-V GaN transistors meet automotive standard

Infineon Technologies AG unveils its first gallium nitride (GaN) transistor family qualified to the Automotive Electronics Council (AEC) standard for automotive applications. The new CoolGaN automotive transistor 100-V G1 family, including high-voltage (HV) CoolGaN automotive transistors and bidirectional switches, meets AEC-Q101.
(Source: Infineon Technologies AG)
This supports Infineon’s commitment to provide automotive solutions from low-voltage infotainment systems addressed by the new 100-V GaN transistor to future HV product solutions in onboard chargers and traction inverters. “Our 100-V GaN auto transistor solutions and the upcoming portfolio extension into the high-voltage range are an important milestone in the development of energy-efficient and reliable power transistors for automotive applications,” said Johannes Schoiswohl, Infineon’s head of the GaN business line, in a statement.
The new devices include the IGC033S10S1Q CoolGaN automotive transistor 100 V G1 in a 3 × 5-mm PQFN package, and the IGB110S10S1Q CoolGaN transistor 100 V G1 in a 3 × 3-mm PQFN. The IGC033S10S1Q features an Rds(on) of 3.3 mΩ and the IGB110S10S1Q has an Rds(on) of 11 mΩ. Other features include dual-side cooling, no reverse recovery charge, and ultra-low figures of merit.
These GaN e-mode power transistors target automotive applications such as advanced driver assistance systems and new climate control and infotainment systems that require higher power and more efficient power conversion solutions. GaN power devices offer higher energy efficiency in a smaller form factor and lower system cost compared to silicon-based components, Infineon said.
The new family of 100-V CoolGaN transistors targets applications such as zone control and main DC/DC converters, high-performance auxiliary systems, and Class-D audio amplifiers. Samples of the pre-production automotive-qualified product range are now available. Infineon will showcase its automotive GaN solutions at OktoberTech Silicon Valley on October 16, 2025.
The post 100-V GaN transistors meet automotive standard appeared first on EDN.
Voltage-to-period converter offers high linearity and fast operation

The circuit in Figure 1 converts the input DC voltage into a pulse train. The period of the pulses is proportional to the input voltage with a 50% duty cycle and a nonlinearity error of 0.01%. The maximum conversion time is less than 5 ms.
Figure 1 The circuit uses an integrator and a Schmitt trigger with variable hysteresis to convert a DC voltage into a pulse train where the period of the pulses is proportional to the input voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The circuit is made of four sections. The op-amp IC1 and resistors R1 to R5 create two reference voltages for the integrator.
The integrator, built with IC2, RINT, and CINT, generates two linear ramps. Switch S1 changes the direction of the current going to the integrating capacitor; in turn, this changes the direction of the linear ramps. The rest of the circuit is a Schmitt trigger with variable hysteresis. The low trip point VLO is fixed, and the high trip point VHI is variable (the input voltage VIN comes in there).
The signal coming from the integrator sweeps between the two trip points of the trigger at an equal rate and in opposite directions. Since R4 = R5, the duty cycle is 50% and the transfer function is as follows:
[Equation image: the output period T is proportional to VIN, with the slope adjustable to 1000 µs/V via R2 (see Figure 2)]
To start oscillations, the following relation must be satisfied when the circuit gets power:
[Equation image: start-up condition for oscillation]
Figure 2 shows that the transfer function of the circuit is perfectly linear (the R² factor equals unity). In reality, there are slight deviations around the straight line; with respect to the span of the output period, these deviations do not exceed ± 0.01%. The slope of the line can be adjusted to 1000 µs/V by R2, and the offset can be easily cancelled by the microcontroller (µC).

Figure 2 The transfer function of the circuit in Figure 1. It is very linear and can be easily adjusted via R2.
Figure 1 shows that the µC converts period T into a number by filling the period with clock pulses of frequency fCLK = 1 MHz. It also adds 50 to the result to cancel the offset. The range of the obtained numbers is from 200 to 4800, i.e., the resolution is 1 count per mV.
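The counting scheme can be sketched in a few lines (the 1-MHz clock, the +50 offset correction, and the 1-count-per-mV scaling are from the article; the helper name and the example period are illustrative):

```python
def period_to_counts(period_s, f_clk_hz=1_000_000, offset_counts=50):
    """Convert a measured pulse period to a digital value, per the
    article's scheme: fill the period with clock pulses, then add a
    fixed offset correction. With the slope trimmed to 1000 us/V and
    a 1-MHz clock, one count corresponds to one millivolt of VIN."""
    return round(period_s * f_clk_hz) + offset_counts

# A 1.95-ms period yields 1950 raw counts + 50 offset = 2000,
# i.e., a 2.000-V input at 1 count per millivolt.
print(period_to_counts(1.95e-3))  # 2000
```

Raising f_clk_hz to 10 MHz scales the count by 10 without touching the analog side, which is the resolution advantage discussed next.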
Resolution can easily be increased by a factor of 10 by raising the clock frequency to 10 MHz. Importantly, the nonlinearity error and conversion time remain the same, which is not possible with voltage-to-frequency converters (VFCs). Here is an example.
Assume that a voltage-to-period converter (VPC) generates pulse periods T = 5 ms at a full-scale input of 5 V. Filling the period with 1 MHz clock pulses produces a number of 5000 (N = T * fCLK). The conversion time is 5 ms, which is the longest for this converter. As we already know, the nonlinearity is 0.01%.
Now consider a VFC which produces a frequency f = 5 kHz at a 5-V input. To get the number of 5000, this signal must be gated by a signal that is 1 second long (N = tG * f). Gate time is the conversion time.
The nonlinearity in this case is 0.002 % (see References), which is five times better than VPC’s nonlinearity. However, conversion time is 200 times longer (1 s vs. 5 ms). To get the same number of pulses N for the same conversion time as the VPC, the full-scale frequency of the VFC must go up to 1 MHz. However, nonlinearity at 1 MHz is 0.1%, ten times worse than VPC’s nonlinearity.
The contrast becomes more pronounced when the desired number is moved up to 50,000. Using the same analysis, it becomes clear that the VPC can do the job 10 times faster with 10 times better linearity than the VFCs. An additional advantage of the VPC is the lower cost.
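The timing comparison above reduces to simple arithmetic (the numbers are from the text; the function names are illustrative):

```python
# Time to accumulate N counts with each converter type.
def vpc_time(n_counts, f_clk_hz):
    """Period counting: the conversion time is the period itself,
    T = N / f_clk, set by the clock that fills the period."""
    return n_counts / f_clk_hz

def vfc_time(n_counts, f_full_scale_hz):
    """Frequency counting: the gate time needed to collect N pulses
    at the full-scale output frequency, t_gate = N / f."""
    return n_counts / f_full_scale_hz

N = 5000
print(vpc_time(N, 1e6))  # 0.005 s: 5-ms full-scale period, 1-MHz clock
print(vfc_time(N, 5e3))  # 1.0 s: 5-kHz VFC needs a 1-s gate, 200x slower
```

The same arithmetic at N = 50,000 shows the VPC reaching the target count in 5 ms with a 10-MHz clock, while the 5-kHz VFC would need a 10-s gate.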
If you plan to use the circuit, pay attention to the integrating capacitor. As CINT participates in the transfer function, it should be carefully selected in terms of tolerance, temperature stability, and dielectric material.
Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.
Related Content
- Voltage-to-period converter improves speed, cost, and linearity of A-D conversion
- Circuits help get or verify matched resistors
- RMS stands for: Remember, RMS measurements are slippery
References:
- AD650 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Analog Devices; www.analog.com
- VFC320 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Burr-Brown; www.ti.com
The post Voltage-to-period converter offers high linearity and fast operation appeared first on EDN.
“Flip ON Flop OFF” for 48-VDC systems with high-side switching

My Design Idea (DI), “Flip ON Flop OFF for 48-VDC systems,“ was published and referenced Stephen Woodward’s earlier “Flip ON Flop OFF” circuit. Other DIs published on this subject were for voltages under 15 V, the supply limit for CMOS ICs, while my DI was intended for higher DC voltages, typically 48 VDC. In that earlier DI, the ground line is switched, which means the input and output grounds are different. This is acceptable in many applications since the voltage is small and does not require earthing.
However, some readers in the comments section wanted a scheme to switch the high side, keeping the ground the same. To satisfy such a requirement, I modified the circuit as shown in Figure 1, where input and output grounds are kept the same and switching is done on the positive line side.

Figure 1 VCC is around 5 V and should be connected to the VCC of the ICs U1 and U2. The grounds of ICs U1 and U2 should also be connected to ground (connection not shown in the circuit). Switching is done in the high side, and the ground is the same for the input and output. Note, it is necessary for U1 to have a heat sink.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In this circuit, the voltage divider formed by R5 and R7 sets the voltage at the emitter of Q2 (VCC) to around 5 V. This voltage powers ICs U1 and U2. A precise setting is not important, as these ICs operate from 3 to 15 V. R2 and C2 provide the power-ON reset for U1, and R1 and C1 debounce the push-button (PB) switch.
When you momentarily push PB once, the Q1 output of the U1 counter (not the Q1 FET) goes HIGH, saturating transistor Q3. Hence, the gate of Q1 (PMOSFET IRF9530N: VDSS = -100 V, ID = -14 A, RDS(on) = 0.2 Ω) is pulled to ground. Q1 then conducts, and its output goes to nearly 48 VDC.
Due to the 0.2-Ω RDS of Q1, there will be a small voltage drop depending on load current. When you push PB again, transistor Q3 turns OFF and Q1 stops conducting, and the voltage at the output becomes zero. Here, switching is done at the high side, and the ground is kept the same for the input and output sides.
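The flip ON / flop OFF action can be modeled in software (a behavioral sketch only: the hardware uses U1’s counter output for the toggle and R1/C1 for debounce; the class name and sample counts here are assumptions):

```python
class ToggleSwitch:
    """Models the push-button behavior: each debounced press of PB
    toggles the output, mimicking the counter-driven high-side switch."""

    def __init__(self, debounce_samples=3):
        self.debounce_samples = debounce_samples  # stable reads required
        self.stable = 0
        self.last_raw = False
        self.output_on = False

    def sample(self, raw_pressed):
        """Feed one raw button reading; returns the (toggled) output."""
        if raw_pressed == self.last_raw:
            self.stable += 1
        else:
            self.stable = 0
            self.last_raw = raw_pressed
        # Toggle exactly once, on the debounced rising edge of the button
        if raw_pressed and self.stable == self.debounce_samples:
            self.output_on = not self.output_on
        return self.output_on
```

Repeated samples of a held button toggle the output only once; a bouncy edge shorter than the debounce window is ignored, just as R1/C1 filter contact bounce in the hardware.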
If galvanic isolation is required (this may not always be the case), you may connect an ON/OFF mechanical switch ahead of the input. In this topology, on-load switching is handled by the PB-operated circuit, and the ON/OFF switch switches zero current only, so it does not need to be bulky; simply select a switch rated for the required load current. When switching ON, first close the ON/OFF switch, then operate PB to connect. When switching OFF, first push PB to disconnect, then open the ON/OFF switch.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Flip ON Flop OFF
- To press ON or hold OFF? This does both for AC voltages
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Another simple flip ON flop OFF circuit
- Flip ON Flop OFF for 48-VDC systems
The post “Flip ON Flop OFF” for 48-VDC systems with high-side switching appeared first on EDN.



