Touch ICs scale across automotive display sizes

Two touchscreen controllers join Microchip’s maXTouch M1 family, expanding support for automotive displays over a wider range of form factors. The ATMXT3072M1-HC and ATMXT288M1 cover free-form widescreen displays up to 42 in., as well as compact screens in the 2- to 5-in. range. Both devices are compatible with display technologies such as OLED and microLED.

The AEC-Q100-qualified controllers leverage Smart Mutual acquisition technology to boost SNR by up to 15 dB compared to previous generations. They deliver reliable touch detection even for on-cell OLEDs, where embedded touch electrodes are subjected to high capacitive loads and increased noise coupling.
The ATMXT3072M1-HC targets large, continuous touch sensor designs that span both the cluster and center information display, enabling a single hardware design for left-hand and right-hand drive vehicles. For smaller screens, the ATMXT288M1 is available in a TFBGA60 package, reducing PCB area by 20% compared to the previous smallest automotive-qualified maXTouch product.
For pricing and sample orders, contact a Microchip sales representative or authorized dealer.
Keysight automates complex coexistence testing

Keysight’s Wireless Coexistence Test Solution (WCTS) is a scalable platform for validating wireless device performance in crowded RF environments. This automated, standards-aligned approach reduces manual setup, improves test repeatability, and enables earlier identification of coexistence risks during development.

To replicate real-world RF conditions, WCTS integrates a wideband vector signal generator. It covers 9 kHz to 8.5 GHz—scalable to 110 GHz—with modulation bandwidths up to 250 MHz (expandable to 2.5 GHz). A single RF port can generate up to eight virtual signals, enabling complex interference scenarios without additional hardware. Nearly 100 predefined, ANSI C63.27-compliant test scenarios are included, covering all three coexistence tiers.
Built on OpenTAP, an open-source, cross-platform test sequencer, WCTS delivers scalable and configurable testing through a user-friendly GUI and open architecture. Engineers can upload custom waveforms and validate test plans offline using simulation mode, accelerating test development and reducing lab time.
More information about the Keysight Wireless Coexistence Test Solution can be found here.
600-V MOSFET enables efficient, reliable power conversion

The first device in AOS’ αMOS E2 high-voltage Super Junction MOSFET platform is the AOTL037V60DE2, a 600-V N-channel MOSFET. It offers high efficiency and power density for mid- to high-power applications such as servers and workstations, telecom rectifiers, solar inverters, motor drives, and other industrial power systems.

Optimized for soft-switching topologies, the AOTL037V60DE2 delivers low switching losses and is well suited for Totem Pole PFC, LLC and PSFB converters, as well as CrCM H-4 and cyclo-inverter applications. The device is available in a TOLL package and features a maximum RDS(on) of 37 mΩ.
AOS engineered the αMOS E2 high-voltage Super Junction MOSFET platform with a robust intrinsic body diode to handle hard commutation events, such as reverse recovery during short-circuits or start-up transients. Evaluations by AOS showed that the body diode can withstand a di/dt of 1300 A/µs under specific forward current conditions at a junction temperature of 150 °C. Testing also confirmed strong Avalanche Unclamped Inductive Switching (UIS) capability and a long Short-Circuit Withstanding Time (SCWT), supporting reliable operation under abnormal conditions.
The AOTL037V60DE2 is available in production quantities at a unit price of $5.58 for 1000-piece orders.
Stable LDOs use small-output caps

Based on Rohm’s Nano Cap ultra-stable control technology, the BD9xxN5 series of LDO regulator ICs delivers 500 mA of output current. The series is intended for 12-V and 24-V primary power supply applications in automotive, industrial, and communication systems.

The BD9xxN5 series builds on the earlier BD9xxN1 series, increasing the output current from 150 mA to 500 mA while maintaining stability with small output capacitors. The ICs hold output voltage fluctuation to roughly 250 mV during load current transitions from 1 mA to 500 mA within 1 µs. Using a typical output capacitance of 470 nF, they enable compact designs and flexible component selection.
All six new variants in the BD9xxN5 series are AEC-Q100 qualified and operate over a temperature range of –40°C to +125°C. Each device provides a single output of 3.3 V, 5 V, or an adjustable voltage from 1 V to 18 V, accurate to within ±2.0%. The absolute maximum input voltage rating is 45 V.
The BD9xxN5 LDO regulators are available now from Rohm’s authorized distributors. Datasheets for each variant can be accessed here.
1200-V SiC modules enable direct upgrades

Five 1200-V SiC power modules in SOT-227 packages from Vishay serve as drop-in replacements for competing solutions. Based on the company’s latest generation of SiC MOSFETs, the modules deliver higher efficiency in medium- to high-frequency automotive, energy, industrial, and telecom applications.

The VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 power modules are available in single-switch and low-side chopper configurations. Each module’s SiC MOSFET integrates a soft body diode with low reverse recovery. This reduces switching losses and improves efficiency in solar inverters and EV chargers, as well as server, telecom, and industrial power supplies.
The modules support drain currents from 50 A to 200 A. The VS-SF50LA120 is a 50-A low-side chopper with 43-mΩ RDS(on), while the VS-SF50SA120 is a 50-A single-switch device rated at 47 mΩ. Single-switch options scale to 100 A, 150 A, and 200 A with RDS(on) values of 23 mΩ, 16.8 mΩ, and 12.1 mΩ, respectively.
Samples and production quantities of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 are available now, with lead times of 13 weeks.
Chandra X-Ray Mirror

There is a Neil deGrasse Tyson video covering the topic of the Chandra X-ray Observatory. This essay is in part derived from that video. I suggest that you view the discussion. It will be sixty-five minutes well spent.
This device doesn’t look anything like a planar mirror because X-ray photons cannot be reflected by any known surface in the way you see your reflection above your bathroom sink.
If you aim a stream of X-ray photons directly toward any particular surface, either a silvered mirror or some kind of intended lens, those photons will either pass right on through (which is what your medical X-rays do) or they will be absorbed. You will not be able to alter the trajectory of an X-ray photon stream, at least not with any device like that.
However, X-ray photons can be grazed off a reflective surface to achieve a slight trajectory change if their initial angle of approach to the mirror surface is kept very small. With the surface of the Chandra X-ray mirror made extremely smooth, almost down to the atomic level, repeated grazing permits X-ray focus to be achieved. This is the operating principle of the Chandra X-ray Telescope’s mirror, as shown in Figure 1.

Figure 1 The Chandra X-Ray Observatory mirrors showing a perspective view, a cut-away view, and x-ray photon trajectories. (Source: StarTalk Podcast)
The Chandra Observatory was launched on July 23, 1999, and has been doing great things ever since. Regrettably, however, its continued operation is in some jeopardy. Please see the following Google search result.

Figure 2 Google search result of the Chandra Telescope showing science funding budget cuts for the Chandra X-ray Observatory going from $69 million to zero. (Source: Google, 2026)
I’m keeping my fingers crossed.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Image of the Day: Hand of God spotted by NASA telescope
- The latest and most spectacular photos of space exploration projects
- Image of the day: First ever ‘time lapse’ X-ray image of a celestial nova
- NASA’s NuSTAR probe captures never before seen X-ray images of the sun
Successive approximation

Analog-to-digital conversion methods abound, but we are going to take a look at a particular approach as shown in Figure 1.
Figure 1 An analog-to-digital converter where an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. (Source: John Dunn)
In this approach, in very simplified language, an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. Scaling is adjusted by finding that digital word for which a scaled version of Vref becomes equal to the analog input. The number of bits in the digital word can be chosen pretty much arbitrarily, but sixteen bits is not unusual. For illustrative purposes, however, we will use only seven bits.
Referring to a couple of examples as seen in Figure 2, the process runs something like this.

Figure 2 Two digital word acquisition examples using successive approximation. (Source: John Dunn)
For descriptive purposes, let the analog input be called our “target”. We first set the most significant bit (the MSB) of our digital word to 1 and all of the lower bits to 0. We compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave the MSB at 1, or if the scaled Vref is greater than the target, we return the MSB to 0. If the two are equal, we have completion.
In either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this second bit at 1, or if the scaled Vref is greater than the target, we return this second bit back to 0. If the two are equal, we have completion.
Again, in either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this third bit at 1, or if the scaled Vref is greater than the target, we return this third bit to 0. If the two are equal, we have completion.
Sorry for the monotony, but that is the process. We repeat this process until we achieve equality, which can take as many steps as there are bits, and therein lies the beauty of this method.
We will achieve equality in no more steps than there are bits. For the seven-bit examples shown here, the maximum number of steps to completion will be seven. Of course, seven-bit converters aren’t something any company actually offers; the number “seven” simply allows viewable examples to be drawn below. Fewer bits might not make things clear, while more bits could have us squinting at the page with a magnifying glass.
If we did a simple counting process starting from all zeros, the maximum number of steps could be as high as 2^7, or one-hundred-twenty-eight, which would be really slow.
Slow, straight-out counting would be a “tracking” process, which is sometimes used and which does have its own virtues. However, we can speed things up with what is called “successive approximation”.
Please note that the “1”, the “-1”, and the “0” highlighted in blue are merely indicators of which value is greater than, less than, or equal to the other.
A verbal description of this process for the target value of 101 may help shed some light. We then proceed as follows. (Yes, this is going to be verbose, but please trace it through.)
We first set the most significant bit with its weight value of 64 to a logic 1 and discover that the numerical value of the bit pattern is just that, the value 64. When we compare this to our target number of 101, we find that we’re too low. We will leave that bit where it is and move on.
We set the next lower significant bit with its weight value of 32 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 = 96. When we compare this to our target number of 101, we find that we’re still too low. We will leave the pair of bits where they are and move on.
We set the next lower bit again with its weight value of 16 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 16 = 112. When we compare this to our target number of 101, we find that we are now too high. We will leave the first two most significant bits where they are, but we will return the third most significant bit to logic 0 and move on.
We set the next lower bit again with its weight value of 8 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 8 = 104. When we compare this to our target number of 101, we find that we are now again too high. We will leave the first three most significant bits where they are, but we will return the fourth most significant bit to logic 0 and move on.
We set the next lower bit again with its weight value of 4 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 = 100. When we compare this to our target number of 101, we find that we’re once again too low. We will leave the quintet of bits where they are and move on.
We set the next lower bit again with its weight value of 2 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 + 2 = 102. When we compare this to our target number of 101, we find that we are now once again too high. We will leave the first five most significant bits where they are, but we will return the sixth most significant bit to logic 0 and move on.
We set the lowest bit with its weight value of 1 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 + 0 + 1 = 101; there is no error. We have completed our conversion in only seven counting steps, which is far fewer than the number of steps that would have been required in a simple, direct counting scheme.
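The procedure is easy to express in code. Below is a minimal Python sketch of the seven-bit successive-approximation loop described above; the function name and the idealized integer comparison are illustrative assumptions, not part of any real converter’s interface.

```python
def sar_convert(target, n_bits=7):
    """Successive approximation: test bits from MSB to LSB.

    'target' is the analog input expressed as an integer code
    (0 .. 2**n_bits - 1), mirroring the article's examples.
    """
    result = 0
    for bit in range(n_bits - 1, -1, -1):   # weights 64, 32, ..., 1 for seven bits
        trial = result | (1 << bit)          # tentatively set this bit
        if trial <= target:                  # scaled Vref at or below the target: keep the bit
            result = trial
        # otherwise return the bit to 0 (i.e., leave 'result' unchanged)
    return result

# Reproduces the walk-through for a target of 101:
# trials: 64 (keep), 96 (keep), 112 (drop), 104 (drop), 100 (keep), 102 (drop), 101 (keep)
assert sar_convert(101) == 101   # found in seven steps, versus up to 2**7 for plain counting
```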
It may be helpful to look at a larger number of digital word acquisition examples, as in Figure 3.

Figure 3 Digital word acquisitions with number paths. (Source: John Dunn)
Remember the old movie “Seven Brides for Seven Brothers”? For these examples, think “Seven Steps for Seven Bits”.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- ADCs for High Dynamic Range: Successive-Approximation or Sigma-Delta?
- ADC Basics, Part 3: Using Successive-Approximation Register ADC in Designs
- Challenges & Requirements: Voltage Reference Design for Precision Successive-Approximation ADCs, Part 4
- “Golden Gloves” A/D Converter Match: Successive-approximation register vs. sigma-delta topology
Apple CarPlay and Google Android Auto: Usage impressions and manufacturer tensions

What happens to manufacturers when your ability to tell whose vehicle you’re currently traveling in (never mind piloting it) disappears?
My wife’s 2018 Land Rover Discovery:

not only now has upgraded LED headlights courtesy of yours truly; I also persuaded the dealer a while ago to gratis-activate the vehicle’s previously latent Apple CarPlay and Google Android Auto facilities for us (gratis in conjunction with a fairly pricey maintenance bill, mind you…). I recently finally got around to trying them both out, and the concept’s pretty cool, with the implementation a close second. Here’s what CarPlay’s UI looks like, courtesy of Wikipedia’s topic entry:

And here’s the competitive Android Auto counterpart:

As you can see, this is more than just a simple mirroring of the default smartphone user interface; after the mobile device and vehicle successfully complete a bidirectional handshake, the phone switches into an alternative UI that’s more vehicle-amenable (specifically: mindful of driver-distraction potential) and tailored for the vehicle’s larger, albeit potentially lower-resolution, dashboard-integrated display.
The baseline support for both protocols in our particular vehicle is wired, which means that you plug the phone into one of the USB-A ports in the storage console between the front seats. My wife’s legacy iPhone is still Lightning-based, so I’ve snagged both a set of inexpensive ($4.99 for three) coiled Lightning-to-USB-A cords for her:

and a similarly (albeit not quite as impressively) penny-pinching ($6.67 for two) suite of USB-C-to-USB-A coiled cords for my Google Pixel phones:

The wired approach is convenient because a single cord handles both communication-with-vehicle and phone charging tasks. That said, a lengthy strand of wire, even coiled, spanning the gap from the console to the magnetic mount located at the dashboard vent:

is aesthetically and otherwise unappealing, especially considering that the mount at the phone end also already redundantly supports both MagSafe (iPhone) and Qi (Pixel, in conjunction with a magnet-augmented case) charging functions:

Therefore, I’ve also pressed into service a couple of inexpensive (~$10 each, sourced from Amazon’s Warehouse-now-Resale section) wireless adapters that mimic the integrated wireless facilities of newer model-year vehicles and even comprehend both the CarPlay and Android Auto protocols. One comes from a retailer called VCARLINKPLAY:

The other is from the “PakWizz Store”:

The approach here is somewhat more complicated. The phone first pairs with the adapter, already plugged into and powered by the car’s USB-A port, over Bluetooth. The adapter then switches both itself and the phone to a common and (understandably, given the aggregate data payload now involved) beefier 5 GHz Wi-Fi Direct link.
Particularly considering the interference potential from other ISM band (both 2.4 GHz for Bluetooth and 5 GHz for Wi-Fi) occupants contending for the same scarce spectrum, I’m pleasantly surprised at how reliable everything is, although initial setup admittedly wasn’t tailored for the masses and even caused techie-me to twitch a bit.
Encroaching on vehicle manufacturers’ turf
As such, I’ve been especially curious to follow recent news trends regarding both CarPlay and Android Auto. Rivian and Tesla, for example, have long resisted adding support for either protocol to their vehicles, although rumors persist that both companies are continuing to develop support internally for potential rollout in the future.
Automotive manufacturers’ broader embrace (public, at least) of next-generation CarPlay Ultra has to date been muted at best. And GM is actively phasing out both CarPlay and Android Auto from new vehicle models, in favor of an internally developed entertainment software-and-display stack alternative.
What’s going on? Consider this direct quote from Apple’s May 2025 CarPlay Ultra press release:
CarPlay Ultra builds on the capabilities of CarPlay and provides the ultimate in-car experience by deeply integrating with the vehicle to deliver the best of iPhone and the best of the car. It provides information for all of the driver’s screens, including real-time content and gauges in the instrument cluster.
Granted, Apple has noted that in developing CarPlay Ultra, it’s “reflecting the automaker’s look and feel” (along with “offering drivers a customizable experience”). But given that all Apple showed last May was an Aston Martin logo next to its own:
I’d argue that Apple’s “partnership” claims are dubious, and maybe even specious. And per comments from Ford’s CEO Jim Farley in a recent interview, he seems to agree (the full interview is excellent and well worth a read):
Are you going to allow OEMs to control the vehicles? How far do you want the Apple brand to go? Do you want the Apple brand to start the car? Do you want the Apple brand to limit the speed? Do you want the Apple brand to limit access?
The bottom line, as I see it, is that Apple can pontificate all it wants that:
CarPlay Ultra allows automakers to express their distinct design philosophy with the look and feel their customers expect. Custom themes are crafted in close collaboration between Apple and the automaker’s design team, resulting in experiences that feel tailor-made for each vehicle.
But automakers like Ford and GM are obviously (and understandably so, IMHO) worried that with Apple and Google already taking over key aspects of the visual, touch (and audible; don’t forget about the Siri and Google Assistant-now-Gemini voice) interfaces, not to mention their even more aggressive aspirations (along with historical behavior in other markets as a guide to future behavior here), the manufacturer, brand and model uniqueness currently experienced by vehicle occupants will evaporate in response.
More to come
I’ll be curious to see (and cover) how this situation continues to develop. For now, I welcome your thoughts in the comments on what I’ve shared so far in this post. And FYI, I’ve also got two single-protocol wireless adapter candidates sitting in my teardown pile awaiting attention: a CarPlay-only unit from the “Luckymore Store”:

And an Android Auto-only unit, the v1 AAWireless, which I’d bought several years back in its original Indiegogo crowdfunding form:

Stay tuned for those, as well!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Is this low-inductance power-device package the real deal?

While semiconductor die get so much of the attention due to their ever-shrinking feature size and ever-increasing substrate size, the ability to effectively package them and thus use them in a circuit is also critical. For this reason, considerable effort is devoted to developing and perfecting practical, technically advanced, thermally suitable, cost-effective packages for components ranging from switching power devices to multi-gigahertz RF devices.
Regardless of frequency, package parasitic inductance is a detrimental issue, as it slows down the slewing needed for switching crispness in digital devices and responsiveness in analog ones (of course, the reality is that digital switching performance is still constrained by analog principles).
Now, a research team at the US Department of Energy’s National Renewable Energy Laboratory (NREL; recently renamed as the National Laboratory of the Rockies) has developed a silicon-carbide half-bridge module that uses organic direct-bonded copper in a novel layout design to enable a high degree of magnetic-flux cancellation, Figure 1.
Figure 1 (left) 3D CAD drawing of new half-bridge inverter module; (right) Early prototype of polyimide-based half-bridge module. Source: NREL
Their Ultra-Low Inductance Smart (ULIS) package is a 1200-V, 400-A half-bridge silicon carbide (SiC) power module that can be pushed beyond a 200-kHz switching frequency at maximum power. The low-cost ULIS also makes the resulting converter easier to manufacture, addressing issues related to both bulkiness and cost.
Preliminary results show that it has approximately seven to nine times lower loop inductances and higher switching speeds at similar voltages/current levels, and five times the energy density of earlier designs — while occupying a smaller footprint, Figure 2.

Figure 2 The complete ULIS package is very different from conventional packages and offers far lower loop inductance than existing approaches. Source: NREL
In addition to being powerful and lightweight, the module continuously tracks its own condition and can anticipate component failures before they happen.
In traditional designs, the power modules conduct electricity and dissipate excess heat by bonding copper sheets directly to a ceramic base—an effective, but rigid, solution. ULIS instead bonds copper to a flexible, electrically insulating DuPont Temprion polymer to create a thinner, lighter, more configurable design.
Unlike typical power modules, which assemble semiconductor devices inside a brick-like package, ULIS winds its circuits around a flat, octagonal design, Figure 3. The disk-like shape allows more devices to be housed in a smaller area, making the overall package smaller and lighter.

Figure 3 This “exploded” drawing of the complete half-bridge power module shows the arrangement of the electrical and structural elements. Source: NREL
At the same time, its novel current routing allows for maximum cancellation of magnetic flux, contributing to the power module’s clean, low-loss electrical output, meaning ultrahigh efficiency.
The stacked module layout greatly improves energy density and reduces parasitic inductance. Based on simulation data, the ULIS half-bridge loop inductance is 2.2 to 5.5 nanohenries, compared with 20 to 25 nH for existing designs. Further, reliability is enhanced because the compliance of Temprion reduces the strain caused by differences in the coefficient of thermal expansion (CTE) between mated materials.
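To put those inductance numbers in perspective, the voltage spike a switching device sees scales directly with loop inductance (V = L·di/dt). Here is a minimal sketch of that arithmetic; the 5-A/ns current slew is an illustrative assumption, not a figure from the NREL work.

```python
def overshoot_volts(loop_inductance_nh, di_dt_a_per_ns):
    """V = L * di/dt, with L in nH and di/dt in A/ns, which conveniently yields volts."""
    return loop_inductance_nh * di_dt_a_per_ns

di_dt = 5.0  # A/ns -- assumed switching slew rate, for illustration only
for label, l_nh in [("ULIS, 2.2 nH", 2.2), ("ULIS, 5.5 nH", 5.5),
                    ("conventional, 20 nH", 20.0), ("conventional, 25 nH", 25.0)]:
    print(f"{label}: ~{overshoot_volts(l_nh, di_dt):.0f} V overshoot")

# At the same switching speed, the lower-inductance layout sees roughly 4x to 10x
# less overshoot, leaving more of the 1200-V rating available as design margin.
```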
Since the material bonds easily to copper using just pressure and heat, and because its parts can be machined using widely available equipment, the team maintains that the ULIS can be fabricated quickly and inexpensively, with manufacturing costs in the hundreds of dollars rather than thousands, Figure 4.

Figure 4 The ULIS can be machined using widely available equipment, thus significantly reducing the manufacturing costs for the power module. Source: NREL
Another innovation allows the ULIS to function wirelessly as an isolated unit that can be controlled and monitored without external cables. A patent is pending for this low-latency wireless communication protocol.
The ULIS design is a good example of the challenges and dead-end paths that innovation can take on its path to a successful conclusion. According to the team’s report, one of the original layouts looked like a flower with a semiconductor at the tip of each petal. Another idea was to create a hollow cylinder with components wired to the inside.
Every idea the team came up with was either too expensive or too difficult to fabricate—until they stopped thinking in three dimensions and flattened the design into nearly two dimensions, which made it possible to build a module that balances complexity with cost and performance.
The details of the work are in their readable and detailed IEEE APEC paper “Organic Direct Bonded Copper-Based Rapid Prototyping for Silicon Carbide Power Module Packaging” but it is behind a paywall. However, there is a nice “poster” summary of their work posted at the NLR site here.
I wonder if this innovation will catch on and be adopted, but I certainly don’t know. What I do know is that some innovations are slow to catch on, and many never do because of real-world problems related to scaling up, volume production, unforeseen technical issues, testability…it’s a long list of what can get in the way.
If you don’t think so, just look at batteries: every month, we see news of dramatic advances that will supposedly revolutionize their performance, yet these breakthroughs don’t seem to get traction. Sometimes it is due to technical or implementation problems, but often it is because the actual improvement they provide does not outweigh the disruption they create in getting there.
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related content
- Stop blaming the supply for your dissipation woes
- GaN gets game
- New Horizons spacecraft’s power issues make yours look trivial
- Paralleling supplies: good, bad, or ugly?
Top 10 edge AI chips

As edge devices become increasingly AI-enabled, more and more chips are emerging to fill every application niche. At the extremes, applications such as speech recognition can be done in always-on power envelopes, while tens of watts will be enough for even larger generative AI models today.
Here, in no particular order, are 10 of EDN’s selections for a range of edge AI applications. These devices range from those capable of handling multimodal large language models (LLMs) in edge devices to those designed for vision processing and minimizing power consumption for always-on applications.
Multiple camera streams
For vision applications, Ambarella Inc.’s latest release is the CV7 edge AI vision system-on-chip (SoC) for processing multiple high-quality camera streams simultaneously via convolutional neural networks (CNNs) or transformer networks. The CV7 features the latest generation of Ambarella’s proprietary AI accelerator, plus an in-house image-signal processor (ISP), which uses both traditional ISP algorithms and AI-driven features. This family also includes quad Arm Cortex-A73 cores, hardware video codecs on-chip, and a new, 64-bit DRAM interface.
Ambarella is targeting this family for AI-based 8K consumer products such as action cameras, multicamera security systems, robotics and drones, industrial automation, and video conferencing. It will also be suitable for automotive applications such as telematics and advanced driver-assistance systems.
Ambarella’s CV7 vision SoC (Source: Ambarella Inc.)
Fallback CPU
The MLSoC Modalix from SiMa Technologies Inc. is now available in production quantities, along with its LLiMa software framework for deployment of LLMs and generative AI models on Modalix. Modalix is SiMa’s second-generation architecture, which comes as a family of SoCs designed to host full applications.
Modalix chips have eight Arm A-class CPU cores on-chip alongside the accelerator, important for running application-level code; these cores also let programs fall back on the CPU in case a particular math operation isn’t supported by the accelerator. Also on the SoC are an on-chip ISP and digital-signal processor (DSP). Modalix will come in 25-, 50-, 100-, and 200-TOPS (INT8) versions. The 50-TOPS version will be first to market and can run Llama2-7B at more than 10 tokens per second, with a power envelope of 8–10 W.
Open-source NPU
Synaptics Inc.’s Astra series of AI-enabled IoT SoCs ranges from application processors to microcontroller (MCU)-level parts. This family is purpose-built for the IoT.
The SL2610 family of multimodal edge AI processors targets applications spanning smart appliances, retail point-of-sale terminals, and drones. All parts in the family have two Arm Cortex-A55 cores, and some have a neural processing unit (NPU) subsystem. The included Coral NPU, developed at Google, is an open-source RISC-V CPU with scalar instructions; it sits alongside Synaptics’ homegrown AI accelerator, the T1, which offers 1-TOPS (INT8) performance for transformers and CNNs.
Synaptics’ SL2610 multimodal edge AI processors (Source: Synaptics Inc.)
Raspberry Pi compatibility
The Hailo-10H edge AI accelerator from Hailo Technologies Ltd. is gaining a large developer base, as it is available in a form factor that plugs into the hobbyist platform Raspberry Pi. However, the Hailo-10H is also used by HP in add-on cards for its point-of-sale systems, and it is automotive-qualified as well.
The 10H is the same silicon as the Hailo-10 but runs at a lower power-performance point: The 10H can run 2B-parameter LLMs in about 2.5 W. The architecture of this AI co-processor is based on Hailo’s second-generation architecture, which has improved support for transformer architectures and more flexible number representation. Multiple models can be inferenced concurrently.
Hailo’s Hailo-10H edge AI accelerator (Source: Hailo Technologies Ltd.)
Analog acceleration
Startup EnCharge AI announced its first product, the EN100. This chip is a 200-TOPS (INT8) accelerator targeted squarely at the AI PC, achieving an impressive 40 TOPS/W. The device is based on EnCharge’s capacitance-based analog compute-in-memory technology, which the company says is less temperature-sensitive than resistance-based schemes. The accelerator’s output is a voltage (not a current), meaning transimpedance amplifiers aren’t needed, saving power.
Alongside the analog accelerator on-chip are some digital cores that can be used if higher precision or floating-point maths is required. The EN100 will be available on a single-chip M.2 card with 32-GB LPDDR, with a power envelope of 8.25 W. A four-chip, half-height, half-length PCIe card offers roughly 1 PetaOPS (INT8) in a 40-W power envelope, with 128-GB LPDDR memory.
Encharge AI’s EN100 M.2 card (Source: Encharge AI)
SNNs
For microwatt applications, Innatera Nanosystems B.V. has developed an AI-equipped MCU that can run inference at very, very low power. The Pulsar neuromorphic MCU targets always-on sensor applications: It consumes 600 µW for radar-based presence detection and 400 µW for audio scene classification, for example.
The neural processor uses Innatera’s spiking neural network (SNN) accelerators—there are both analog and digital spiking accelerators on-chip, which can be used for different types of applications and workloads. Innatera says its software stack, Talamo, means developers don’t have to be SNN experts to use the device. Talamo interfaces directly with PyTorch and a PyTorch-based simulator and can enable power consumption estimations at any stage of development.
Innatera’s Pulsar spiking neural processor (Source: Innatera Nanosystems B.V.)
Generative AI
Axelera AI’s second-generation chip, Europa, can support both multi-user generative AI and computer vision applications in endpoint devices or edge servers. This eight-core chip can deliver 629 TOPS (INT8). The accelerator has large vector engines for AI computation alongside two clusters of eight RISC-V CPU cores for pre- and post-processing of data. There is also an H.264/H.265 decoder on-chip, meaning the host CPU can be kept free for application-level software. Given the importance of ensuring compute cores are fed quickly with data from memory, the Europa AI processor unit provides 128 MB of L2 SRAM and a 256-bit LPDDR5 interface.
Axelera’s Voyager software development kit covers both Europa and the company’s first-generation chip, Metis, reserved for more classical CNNs and vision tasks. Europa is available both as a chip or on a PCIe card. The cards are intended for edge server applications in which processing multiple 4K video streams is needed.
Butter wouldn’t melt
Most members of the DX-M1 series from South Korean chip company DeepX Co. Ltd. provide 25-TOPS (INT8) performance in the 2- to 5-W power envelope (the exception being the DX-M1M-L, offering 13 TOPS). One of the company’s most memorable demos involves placing a blob of butter directly on its chip while running inference to show that it doesn’t get hot enough for the butter to melt.
Delivering 25 TOPS in this co-processor chip is plenty for vision tasks such as pose estimation or facial recognition in drones, robots, or other camera systems. Under development, the DX-M2 will run generative AI workloads at the edge. Part of the company’s secret sauce is in its quantization scheme, which can run INT8-quantized networks with accuracy comparable to the FP32 original. DeepX sells chips, modules/cards, and small, multichip systems based on its technology for different edge applications.
Voice interface
The latest ultra-low-power edge AI accelerator from Syntiant Corp., the NDP250, offers 5× the tensor throughput of its predecessor. This device is designed for computer vision, speech recognition, and sensor data processing. It can run on as little as microwatts, but for full, always-on vision processing, the consumption is closer to tens of milliwatts.
As with other parts in Syntiant’s range, the devices use the company’s AI accelerator core (30 GOPS [INT8]) alongside an Arm Cortex-M0 MCU core and an on-chip Tensilica HiFi 3 DSP. On-chip memory can store up to 6-million-bit parameters. The NDP250’s DSP supports floating-point maths for the first time in the Syntiant range. The company suggests that the ability to run both automatic speech recognition and text-to-speech models will lend the NDP250 to voice interfaces in particular.
Multiple power modes
Nvidia Corp.’s Jetson Orin Nano is designed for AI in all kinds of edge devices, targeting robotics in particular. It’s an Ampere-generation GPU module with either 8 GB or 4 GB of LPDDR5. The 8-GB version can do 33 TOPS (dense INT8) or 17 TFLOPS (FP16). It has three power modes: 7-W, 15-W, and a new, 25-W mode, which boosts memory bandwidth to 102 GB/s (from 65 GB/s for the 15-W mode) by increasing GPU, memory, and CPU clocks. The module’s CPU has six Arm Cortex-A78AE 64-bit cores. Jetson Orin Nano will be a good fit for multimodal and generative AI at the edge, including vision transformer and various small language models (in general, those with <7 billion parameters).
Nvidia’s Jetson Orin Nano (Source: Nvidia Corporation)
Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs

The saying “round pegs do not fit square holes” persists because it captures a deep engineering reality: inefficiency most often arises not from flawed components, but from misalignment between a system’s assumptions and the problem it is asked to solve. A square hole is not poorly made; it’s simply optimized for square pegs.
Modern large language models (LLMs) now find themselves in exactly this situation. Although they are overwhelmingly executed on general-purpose graphics processing units (GPGPUs), these processors were never shaped around the needs of enormous inference-based matrix multiplications.
GPUs dominate not because they are a perfect match, but because they were already available, massively parallel, and economically scalable when deep learning began to grow, especially for training AI models.
What follows is not an indictment of GPUs, but a careful explanation of why they are extraordinarily effective when the workload is rather dynamic and unpredictable, such as graphics processing, and disappointingly inefficient when the workload is essentially regular and predictable, such as AI/LLM inference execution.
The inefficiencies that emerge are not accidental; they are structural, predictable, and increasingly expensive as models continue to evolve.
Execution geometry and the meaning of “square”
When a GPU renders a graphic scene, it deals with a workload that is considerably irregular at the macro level, but rather regular at the micro level. A graphic scene changes in real time with significant variations in content—changes in triangles and illumination—but in an image, there is usually a lot of local regularity.
One frame displays a simple brick wall, the next, an explosion creating thousands of tiny triangles and complex lighting changes. To handle this, the GPU architecture relies on a single-instruction multiple threads (SIMT) or wave/warp-based approach where all threads in a “wave” or “warp,” usually between 16 and 128, receive the same instruction at once.
This works rather efficiently for graphics because, while the whole scene is a mess, local patches of pixels are usually doing the same thing. This allows the GPU to be a “micro-manager,” constantly and dynamically scheduling these tiny waves to react to the scene’s chaos.
However, when applied to AI and LLMs, the workload changes entirely. AI processing is built on tensor math and matrix multiplication, which is fundamentally regular and predictable. Unlike a highly dynamic game scene, matrix math is just an immense but steady flow of numbers. Because AI is so consistent, the GPU’s fancy, high-speed micro-management becomes unnecessary. In this context, that hardware is just “overhead,” consuming power and space for a flexibility that the AI doesn’t actually use.
This leaves the GPGPU in a bit of a paradox: it’s simultaneously too dynamic and not dynamic enough. It’s too dynamic because it wastes energy on micro-level programming and complex scheduling that a steady AI workload doesn’t require. Yet it’s not dynamic enough because it is bound by the rigid size of its “waves.”
If the AI math doesn’t perfectly fit into a warp of 32, the GPU must use “padding,” effectively leaving seats empty on the bus. While the GPU is a perfect match for solving irregular graphics problems, it’s an imperfect fit for the sheer, repetitive scale of modern tensor processing.
Wasted area as a physical quantity
This inefficiency can be understood geometrically. A circle inscribed in a square leaves about 21% of the square’s area unused. In processing hardware terms, the “area” corresponds to execution lanes, cycles, bandwidth, and joules. Any portion of these resources that performs work that does not advance the model’s output is wasted area.
The utilization gap (MFU)
The primary way to quantify this inefficiency is through Model FLOPs Utilization (MFU). This metric measures how much of the chip’s theoretical peak math power is actually being used for the model’s calculations versus how much is wasted on overhead, data movement, or idling.
For an LLM like GPT-4 running on GPGPU-based accelerators in interactive mode, the MFU drops by an order of magnitude, with the hardware busy with “bookkeeping” that encompasses moving data between memory levels, managing thread synchronization, or waiting for the next “wave” of instructions to be decoded.
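MFU itself is a straightforward ratio. The sketch below shows the calculation under the common approximation of roughly two FLOPs per model parameter per generated token; the workload and hardware numbers are placeholders, not measured GPT-4 figures.

```python
def model_flops_utilization(params_billion, tokens_per_s, peak_tflops):
    """MFU = achieved model FLOPs per second / peak hardware FLOPs per second.

    Assumes ~2 FLOPs per parameter per generated token (a standard rough estimate).
    """
    achieved_tflops = 2 * params_billion * 1e9 * tokens_per_s / 1e12
    return achieved_tflops / peak_tflops

# Placeholder numbers: a 70B-parameter model decoding a single interactive stream
# at 30 tokens/s on a 1000-TFLOPS (dense) accelerator.
print(f"MFU ~= {model_flops_utilization(70, 30, 1000):.1%}")  # ~0.4% -- far from peak
```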
The energy cost of flexibility
The inefficiency is even more visible in power consumption. A significant portion of a GPU’s energy is spent powering the “dynamic micromanagement”: the logic that handles warp scheduling, branch prediction, and instruction fetching for irregular tasks.
The “padding” penalty
Finally, there is the “padding” inefficiency. Because a GPGPU-based accelerator operates in fixed wave sizes (typically 32 or 64 threads), if a calculation doesn’t align perfectly with those multiples, which often happens in the attention mechanism of an LLM, the GPGPU still burns the power for a full wave while some threads sit idle.
These effects multiply rather than add. A GPU may be promoted on the strength of its peak throughput, but once deployed, it may deliver only a fraction of that peak as useful throughput for LLM inference, while drawing close to peak power.
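The padding penalty in particular is easy to quantify: work is issued in whole warps, so any remainder rides along as idle lanes. A minimal sketch, assuming the warp size of 32 used in the text:

```python
import math

def warp_utilization(active_threads, warp_size=32):
    """Fraction of issued lanes doing useful work when work is padded to whole warps."""
    issued_lanes = math.ceil(active_threads / warp_size) * warp_size
    return active_threads / issued_lanes

# Example: an attention step needing 100 lanes is issued as 4 warps (128 lanes).
print(f"{warp_utilization(100):.0%} of issued lanes are useful")  # ~78%; the rest idle but still draw power
```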
The memory wall and idle compute
Even if compute utilization were perfect, LLM inference would still collide with the memory wall, the growing disparity between how fast processors can compute and how fast they can access memory. LLM inference has low arithmetic intensity, meaning that relatively few floating-point operations are performed per byte of data fetched. Much of the execution time is spent reading and writing the key-value (KV) cache.
GPUs attempt to hide memory latency using massive concurrency. Each streaming multiprocessor (SM) holds many warps and switches between them while others wait for memory. This strategy works well when memory accesses are staggered and independent. In LLM inference, however, many warps stall simultaneously while waiting for similar memory accesses.
As a result, SMs spend large fractions of idle time, not because they lack instructions, but because data cannot arrive fast enough. Measurements commonly show that 50–70% of cycles during inference are lost to memory stalls. Importantly, the power draw does not scale down proportionally since clocks continue toggling and control logic remains active, resulting in poor energy efficiency.
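A roofline-style check makes the point concrete: when arithmetic intensity (FLOPs per byte moved) falls below the machine’s compute-to-bandwidth ratio, the kernel is memory-bound no matter how much compute sits idle. The hardware figures below are placeholders; the ~1 FLOP/byte intensity reflects batch-1 FP16 decode, where roughly 2 FLOPs are performed per 2-byte weight read.

```python
def attainable_tflops(intensity_flops_per_byte, peak_tflops, mem_bw_tb_per_s):
    """Roofline bound: min(peak compute, arithmetic intensity * memory bandwidth)."""
    return min(peak_tflops, intensity_flops_per_byte * mem_bw_tb_per_s)

# Placeholder accelerator: 1000 TFLOPS peak compute, 3 TB/s memory bandwidth.
bound = attainable_tflops(1.0, 1000, 3.0)
print(f"memory-bound ceiling: ~{bound:.0f} TFLOPS ({bound / 1000:.1%} of peak)")  # ~3 TFLOPS, 0.3% of peak
```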
Predictable stride assumptions and the cost of generality
To maximize bandwidth, GPUs rely on predictable stride assumptions; that is, the expectation that memory accesses follow regular patterns. This enables techniques such as cache line coalescing and memory swizzling, a remapping of addresses designed to avoid bank conflicts and improve locality.
LLM memory access patterns violate these assumptions. Accesses into the KV cache depend on token position, sequence length, and request interleaving across users. The result is reduced cache effectiveness and increased pressure on address-generation logic. The hardware expends additional cycles and energy rearranging data that cannot be reused.
This is often described as a “generality tax.”
Why GPUs still dominate
Given these inefficiencies, it’s natural to ask why GPUs remain dominant. The answer lies in history rather than optimality. Early deep learning workloads were dominated by dense linear algebra, which mapped reasonably well onto GPU hardware. Training budgets were large enough that inefficiency could be absorbed.
Inference changes priorities. Latency, cost per token, and energy efficiency now matter more than peak throughput. At this stage, structural inefficiencies are no longer abstract; they directly translate into operational cost.
From adapting models to aligning hardware
For years, the industry focused on adapting models to hardware, using techniques such as larger batches, heavier padding, and more aggressive quantization. These techniques smooth the mismatch but do not remove it.
A growing alternative is architectural alignment: building hardware whose execution model matches the structure of LLMs themselves. Such designs schedule work around tokens rather than warps, and memory systems are optimized for KV locality instead of predictable strides. By eliminating unused execution lanes entirely, these systems reclaim the wasted area rather than hiding it.
The inefficiencies seen in modern AI data centers—idle compute, memory stalls, padding overhead, and excess power draw—are not signs of poor engineering. They are the inevitable result of forcing a smooth, temporal workload into a rigid, geometric execution model.
GPUs remain masterfully engineered square holes. LLMs remain inherently round pegs. As AI becomes a key ingredient in global infrastructure, the cost of this mismatch becomes the problem itself. The next phase of AI computing will belong not to those who shave the peg more cleverly, but to those who reshape the hole to match the true geometry of the workload.
Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
Tune 555 frequency over 4 decades

The versatility of the venerable LMC555 CMOS analog timer is so well known it’s virtually a cliché, but sometimes it can still surprise us. The circuit in Figure 1 is an example. In it, a single linear pot in a simple RC network sets the frequency of 555 square-wave oscillation over a greater-than-10-Hz-to-100-kHz range, exceeding a 10,000:1 (four-decade, thirteen-octave) ratio. Here’s how it works.
Figure 1 R1 sets U1 frequency from <10 Hz to >100 kHz.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Potentiometer R1 provides variable attenuation of U1’s 0-to-V+ peak-to-peak square-wave output to the R4-R5-C1 divider/integrator. The result is a sum of an abbreviated timing ramp component developed by C1 sitting on top of an attenuated square-wave component developed by R5. This composite waveshape is input to the Trigger and Threshold pins of U1, resulting in the frequency vs. R1 position function plotted on Figure 2’s semi-log graph.

Figure 2 U1 oscillation range vs R1 setting is so wide it needs a log scale to accommodate it.
Curvature of the function does get pretty radical as R1 approaches its limits of travel. Nevertheless, log conformity is fairly decent over the middle 10% to 90% of the pot’s travel and the resulting 2 decades of frequency range. This is sketched in red in Figure 3.

Figure 3 Reasonably good log conformity is seen over mid-80% of R1’s travel.
Of course, as R1 is dialed to near its limits, frequency precision (or lack of it) becomes very sensitive to production tolerances in U1’s internal voltage divider network and those of the circuit’s external resistors.
This is why U1’s frequency output is taken from pin 7 (Discharge) instead of pin 3 (Output) to at least minimize the effects of loading from making further contributions to instability.
Nevertheless, the strong suit of this design is definitely its dynamic range. Precision? Not so much.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Another weird 555 ADC
- Gated 555 astable hits the ground running
- More gated 555 astable multivibrators hit the ground running
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
Emerging trends in battery energy storage systems

Battery energy storage systems (BESSes) are increasingly being adopted to improve efficiency and stability in power distribution networks. By storing energy from both renewable sources, such as solar and wind, and the conventional power grid, BESSes balance supply and demand, stabilizing power grids and optimizing energy use.
This article examines emerging trends in BESS applications, including advances in battery technologies, the development of hybrid energy storage systems (HESSes), and the introduction of AI-based solutions for optimization.
Battery technologies
Lithium-ion (Li-ion) is currently the main battery technology used in BESSes. Despite the use of expensive raw materials, such as lithium, cobalt, and nickel, the global average price of Li-ion battery packs has declined in 2025.
BloombergNEF reports that Li-ion battery pack prices have fallen to a new low this year, reaching $108/kWh, an 8% decrease from the previous year. The research firm attributes this decline to excess cell manufacturing capacity, economies of scale, the increasing use of lower-cost lithium-iron-phosphate (LFP) chemistries, and a deceleration in the growth of electric-vehicle sales.
Using iron phosphate as the cathode material, LFP batteries achieve long cycle life, good thermal stability, and reliable performance at high temperatures. They are often used in applications in which durability and reliable operation under adverse conditions are important, such as grid energy storage systems. However, their energy density is lower than that of traditional Li-ion batteries.
Although Li-ion batteries will continue to lead the BESS market due to their higher efficiency, longer lifespan, and deeper depth of discharge compared with alternative battery technologies, other chemistries are making progress.
Flow batteries
Long-life storage systems, capable of storing energy for eight to 10 hours or more, are suited for managing electricity demand, reducing peaks, and stabilizing power grids. In this context, “reduction-oxidation [redox] flow batteries” show great promise.
Unlike conventional Li-ion batteries, the liquid electrolytes in flow batteries are stored separately and then flow (hence the name) into the central cell, where they react in the charging and discharging phases.
Flow batteries offer several key advantages, particularly for grid applications with high shares of renewables. They enable long-duration energy storage, covering many hours, such as nighttime, when solar generation is not present. Their raw materials, such as vanadium, are generally abundant and face limited supply constraints. Material concerns are further mitigated by high recyclability and are even less significant for emerging iron-, zinc-, or organic-electrolyte technologies.
Flow batteries are also modular and compact, inherently safe due to the absence of fire risk, and highly durable, with service lifetimes of at least 20 years with minimal performance degradation.
The BESSt Company, a U.S.-based startup founded by a former Tesla engineer, has unveiled a redox flow battery technology that is claimed to achieve an energy density up to 20× higher than that of traditional, vanadium-based flow storage systems.
The novel technology relies on a zinc-polyiodide (ZnI2) electrolyte, originally developed by the U.S. Department of Energy’s Pacific Northwest National Laboratory, as well as a proprietary cell stack architecture that relies on undisclosed, Earth-abundant alloy materials sourced domestically in the U.S.
The company’s residential offering is designed with a nominal power output of 20 kW, paired with an energy storage capacity of 25 kWh, corresponding to an average operational duration of approximately five hours. For commercial and industrial applications, the proposed system is designed to scale to a power rating of 40 kW and an energy capacity of 100 kWh, enabling an average usage time of approximately 6.5 hours.
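The quoted durations follow from average draw rather than nameplate power. Here is a minimal sketch of that arithmetic; the assumed average loads are inferred from the stated figures, not published specifications, and round-trip losses are ignored.

```python
def runtime_hours(capacity_kwh, average_load_kw):
    """Usable run time at a given average draw (losses neglected for simplicity)."""
    return capacity_kwh / average_load_kw

# Residential unit: 25 kWh lasts ~5 h only if the average draw is ~5 kW,
# well below the 20-kW nameplate output.
print(runtime_hours(25, 5))     # 5.0 h
# Commercial unit: 100 kWh over ~6.5 h implies an average draw of ~15 kW of the 40-kW rating.
print(runtime_hours(100, 15))   # ~6.7 h
```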
This technology (Figure 1) is well-suited for integration with solar generation and other renewable energy installations, where it can deliver long-duration energy storage without performance degradation.
Figure 1: The BESSt Company’s ZnI2 redox flow battery system (Source: The BESSt Company)
Sodium-ion batteries
Sodium-ion batteries are a promising alternative to Li-ion batteries, primarily because they rely on more abundant raw materials. Sodium is widely available in nature, whereas lithium is relatively scarce and subject to supply chains that are vulnerable to price volatility and geopolitical constraints. In addition, sodium-ion batteries use aluminum as a current collector instead of copper, further reducing their overall cost.
Blue Current, a California-based company specializing in solid-state batteries, has received an $80 million Series D investment from Amazon to advance the commercialization of its silicon solid-state battery technology for stationary storage and mobility applications. The company aims to establish a pilot line for sodium-ion battery cells by 2026.
Its approach leverages Earth-abundant silicon and elastic polymer anodes, paired with fully dry electrolytes across multiple formulations optimized for both stationary energy storage and mobility. Blue Current said its fully dry chemistry can be manufactured using the same high-volume equipment employed in the production of Li-ion pouch cells.
Sodium-ion batteries can be used in stationary energy storage, solar-powered battery systems, and consumer electronics. They can be transported in a fully discharged state, making them inherently safer than Li-ion batteries, which can suffer degradation when fully discharged.
Aluminum-ion batteries
Project INNOBATT, coordinated by the Fraunhofer Institute for Integrated Systems and Device Technology (IISB), has completed a functional battery system demonstrator based on aluminum-graphite dual-ion batteries (AGDIB).
Rechargeable aluminum-ion batteries represent a low-cost and inherently non-flammable energy storage approach, relying on widely available materials such as aluminum and graphite. When natural graphite is used as the cathode, AGDIB cells reach gravimetric energy densities of up to 160 Wh/kg while delivering power densities above 9 kW/kg. The electrochemical system is optimized for high-power operation, enabling rapid charge and discharge at elevated C rates and making it suitable for applications requiring a fast dynamic response.
In the representative system-level test (Figure 2), the demonstrator combines eight AGDIB pouch cells with a wireless battery management system (BMS) derived from the open-source foxBMS platform. Secure RF communication is employed in conjunction with a high-resolution current sensor based on nitrogen-vacancy centers in diamond, enabling precise current measurement under dynamic operating conditions.
Figure 2: A detailed block diagram of the INNOBATT battery system components (Source: Elisabeth Iglhaut/Fraunhofer IISB)
Li-ion battery recycling
Second-life Li-ion batteries retired from applications such as EVs often maintain a residual storage capacity and can therefore be repurposed for BESSes, supporting circular economy standards. In Europe, the EU Battery Passport—mandatory beginning in 2027 for EV, industrial, BESS (over 2 kWh), and light transport batteries—will digitally track batteries by providing a QR code with verified data on their composition, state of health, performance (efficiency, capacity), and carbon footprint.
This initiative aims to create a circular economy, improving product sustainability, transparency, and recyclability through digital records that detail information about product composition, origin, environmental impact, repair, and recycling.
HESSes
A growing area of innovation is the HESS, which integrates batteries with alternative energy storage technologies, such as supercapacitors or flywheels. Batteries offer high energy density but relatively low power density, whereas flywheels and supercapacitors provide high power density for rapid energy delivery but store less energy overall.
By combining these technologies, HESSes can better balance both energy and power requirements. Such systems are well-suited for applications such as grid and microgrid stabilization, as well as renewable energy installations, particularly solar and wind power systems.
Utility provider Rocky Mountain Power (RMP) and Torus Inc., an energy storage solutions company, are collaborating on a major flywheel and BESS project in Utah. The project integrates Torus’s mechanical flywheel technology with battery systems to support grid stability, demand response, and virtual power plant applications.
Torus will deploy its Nova Spin flywheel-based energy storage system (Figure 3) as part of the project. Flywheels operate using a large, rapidly spinning cylinder enclosed within a vacuum-sealed structure. During charging, electrical energy powers a motor that accelerates the flywheel, while during discharge, the same motor operates as a generator, converting the rotational energy back into electricity. Flywheel systems offer advantages such as longer lifespans compared with most chemical batteries and reduced sensitivity to extreme temperatures.
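As a rough sense of scale for the energy involved, the following back-of-the-envelope sketch applies the standard rotational-energy relation, E = ½Iω², to a hypothetical rotor; the mass, radius, and speed are illustrative assumptions, not Nova Spin specifications.

```python
# Stored kinetic energy of a flywheel: E = 1/2 * I * omega^2. The rotor mass,
# radius, and speed below are hypothetical, not Torus Nova Spin specifications.
import math

mass_kg, radius_m, rpm = 1500.0, 0.5, 12000.0
inertia = 0.5 * mass_kg * radius_m ** 2            # solid cylinder, kg*m^2
omega = rpm * 2.0 * math.pi / 60.0                 # rad/s
energy_kwh = 0.5 * inertia * omega ** 2 / 3.6e6
print(f"Stored energy ~= {energy_kwh:.1f} kWh at {rpm:.0f} rpm")
```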
This collaboration is part of Utah’s Operation Gigawatt initiative, which aims to expand the state’s power generation capacity over the next decade. By combining the rapid response of flywheels with the longer-duration storage of batteries, the project delivers a robust hybrid solution designed for a service life of more than 25 years while leveraging RMP’s Wattsmart Battery program to enhance grid resilience.
Figure 3: Torus Nova Spin flywheel-based energy storage (Source: Torus Inc.)
AI adoption in BESSes
Using its Simcenter simulation and testing solution, Siemens Digital Industries Software demonstrates how reinforcement learning (RL), an AI technique, can help develop more efficient, faster, and smarter BESSes.
The primary challenge of managing renewable energy sources, such as wind power, is determining the optimal charge and discharge timing based on dynamic variables such as real-time electricity pricing, grid load conditions, weather forecasts, and historical generation patterns.
Traditional control systems rely on simple, manually entered rules, such as storing energy when prices fall below weekly averages and discharging when prices rise. RL, in contrast, trains intelligent agents through trial and error in simulated environments using historical data. For BESS applications, the RL agent learns from two years of weather patterns to develop control strategies that outperform manually programmed rules.
The RL-powered smart controller continuously processes wind speed forecasts, grid demand levels, and market prices to make informed, real-time decisions. It learns to charge batteries during periods of abundant wind generation and low prices, then discharge during demand spikes and price peaks.
The practical implementation of Siemens’s proposed approach combines system simulation tools to create digital twins of BESS infrastructure with RL training environments. The resulting controller can be deployed directly to hardware systems.
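For readers who want to experiment with the concept, the sketch below is a deliberately minimal stand-in for the approach described above, not Siemens's Simcenter workflow: a tabular Q-learning agent decides each hour whether to charge, hold, or discharge a small battery against a synthetic electricity-price curve. The battery size, price model, and learning parameters are all illustrative assumptions.

```python
# Minimal tabular Q-learning sketch for battery dispatch (illustrative only).
# Assumptions: synthetic daily price curve, a 1-MWh / 0.25-MW battery, and a
# reward equal to hourly energy-arbitrage revenue.
import numpy as np

rng = np.random.default_rng(0)
HOURS, EPISODES = 24, 2000
CAPACITY, POWER, DT = 1.0, 0.25, 1.0          # MWh, MW, h
ACTIONS = (-1, 0, 1)                          # discharge, hold, charge
N_SOC, N_PRICE = 5, 3                         # coarse state discretization

def price_curve():
    """Synthetic $/MWh price: cheap overnight, pricier late afternoon."""
    base = 40 + 30 * np.sin((np.arange(HOURS) - 10) * np.pi / 12)
    return base + rng.normal(0, 5, HOURS)

def state(soc, price):
    s = min(int(soc / CAPACITY * N_SOC), N_SOC - 1)
    p = 0 if price < 40 else (1 if price < 70 else 2)
    return s * N_PRICE + p

Q = np.zeros((N_SOC * N_PRICE, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(EPISODES):
    prices, soc = price_curve(), 0.5 * CAPACITY
    for h in range(HOURS):
        s = state(soc, prices[h])
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        flow = ACTIONS[a] * POWER * DT                 # MWh moved this hour
        flow = np.clip(flow, -soc, CAPACITY - soc)     # respect SoC limits
        soc += flow
        reward = -flow * prices[h]                     # buying costs, selling earns
        s2 = state(soc, prices[(h + 1) % HOURS])
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])

print("Learned policy (rows: SoC bin, cols: price bin low/mid/high):")
print(np.array(ACTIONS)[Q.argmax(axis=1)].reshape(N_SOC, N_PRICE))
```

Even this toy agent tends to learn the intuitive policy, charging in the low-price bin and discharging in the high-price bin, which is the behavior the Simcenter demonstration scales up with realistic digital twins.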
The post Emerging trends in battery energy storage systems appeared first on EDN.
Designing edge AI for industrial applications

Industrial manufacturing systems demand real-time decision-making, adaptive control, and autonomous operation. However, many cloud-dependent architectures can’t deliver the millisecond response required for safety-critical functions such as robotic collision avoidance, in-line quality inspection, and emergency shutdown.
Network latency (typically 50–200 ms round-trip) and bandwidth constraints prevent cloud processing from achieving sub-10 ms response requirements, shifting intelligence to the industrial edge for real-time control.
Edge AI addresses these high-performance, low-latency requirements by embedding intelligence directly into industrial devices and enabling local processing without reliance on the cloud. This edge-based approach supports machine-vision workloads for real-time defect detection, adaptive process control, and responsive human–machine interfaces that react instantly to dynamic conditions.
This article outlines a comprehensive approach to designing edge AI systems for industrial applications, covering everything from requirements analysis to deployment and maintenance. It highlights practical design methodologies and proven hardware platforms needed to bring AI from prototyping to production in demanding environments.
Defining industrial requirements
Designing scalable industrial edge AI systems begins with clearly defining hardware, software, and performance requirements. Manufacturing environments demand operation over wide temperature ranges from –40°C to +85°C, resistance to vibration and electromagnetic interference (EMI), and zero tolerance for failure.
Edge AI hardware installed on machinery and production lines must tolerate these conditions in place, unlike cloud servers operating in climate-controlled environments.
Latency constraints are equally demanding: robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control, in-line inspection systems must detect and reject defective parts in real time, and safety interlocks depend on millisecond-level response to protect operators and equipment.

Figure 1 Robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control. Source: Infineon
Accuracy is also critical, with quality control often targeting greater than 99% defect detection, and predictive maintenance typically aiming for high-90s accuracy while minimizing false alarm rates.
Data collection and preprocessing
Meeting these performance standards requires systematic data collection and preprocessing, especially when defect rates fall below 5% of samples. Industrial sensors generate diverse signals such as vibration, thermal images, acoustic traces, and process parameters. These signals demand application-specific workflows to handle missing values, reduce dimensionality, rebalance classes, and normalize inputs for model development.
Continuous streaming of raw high-resolution sensor data can exceed 100 Mbps per device, which is unrealistic for most factory networks. As a result, preprocessing must occur at the industrial edge, where compute resources are located directly on or near the equipment.
Class-balancing techniques such as SMOTE or ADASYN address class imbalance in training data, with the latter adapting to local density variations. Many applications also benefit from domain-specific augmentation, such as rotating thermal images to simulate multiple views or injecting controlled noise into vibration traces to reflect sensor variability.
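As a concrete illustration, the short sketch below rebalances a synthetic, heavily skewed defect dataset with SMOTE and ADASYN; the open-source imbalanced-learn package and the toy data are assumptions for illustration, as the article does not prescribe a specific implementation.

```python
# Illustrative class-rebalancing sketch using the open-source imbalanced-learn
# package (one possible tool; not prescribed by the article).
import numpy as np
from imblearn.over_sampling import SMOTE, ADASYN

rng = np.random.default_rng(1)
# Synthetic vibration-feature dataset: 980 "healthy" vs. 20 "defect" samples.
X = np.vstack([rng.normal(0.0, 1.0, (980, 8)),
               rng.normal(2.5, 1.0, (20, 8))])
y = np.array([0] * 980 + [1] * 20)

X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)
X_ad, y_ad = ADASYN(random_state=0).fit_resample(X, y)

print("original class counts :", np.bincount(y))
print("after SMOTE           :", np.bincount(y_sm))
print("after ADASYN          :", np.bincount(y_ad))  # adapts to local density
```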
Outlier detection is equally important, with clustering-based methods flagging and correcting anomalous readings before they distort model training. Synthetic data generation can introduce rare events such as thermal hotspots or sudden vibration spikes, improving anomaly detection when real-world samples are limited.
With cleaner inputs established, focus shifts to model design. Convolutional neural networks (CNNs) handle visual inspection, while recurrent neural networks (RNNs) process time-series data. Transformers, though still resource-intensive, increasingly perform industrial time-series analysis. Efficient execution of these architectures necessitates careful optimization and specialized hardware support.
Hardware-accelerated processing
Efficient edge inference requires optimized machine learning models supported by hardware that accelerates computation within strict power and memory budgets. These local computations must stay within typical power envelopes below 5 W and operate without network dependency, which cloud-connected systems can’t guarantee in production environments.
Training neural networks for industrial applications can be challenging, especially when processing vibration signals, acoustic traces, or thermal images. Traditional workflows require data science expertise to select model architectures, tune hyperparameters, and manage preprocessing steps.
Even with specialized hardware, deploying deep learning models at the industrial edge demands additional optimization. Compression techniques shrink models by 80–95% while retaining over 95% accuracy, reducing size and accelerating inference to meet edge constraints. These include:
- Quantization converts 32-bit floating-point models into 8- or 16-bit integer formats, reducing memory use and accelerating inference. Post-training quantization meets most industrial needs, while quantization-aware training maintains accuracy in safety-critical cases (a conversion sketch follows this list).
- Pruning removes redundant neural connections, typically reducing parameters by 70–90% with minimal accuracy loss. Overparameterized models, especially those trained on smaller industrial datasets, benefit significantly from pruning.
- Knowledge distillation trains a smaller student model to replicate the behavior of a larger teacher model, retaining accuracy while achieving the efficiency required for edge deployment.
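To make the first of these techniques concrete, here is a minimal post-training integer quantization sketch using the TensorFlow Lite converter; the toy Keras model and the random representative-data generator are placeholders for a real inspection network and real sensor frames.

```python
# Post-training INT8 quantization sketch with TensorFlow Lite. The toy model
# and random representative data are placeholders, not a production network.
import numpy as np
import tensorflow as tf

# Stand-in for a trained visual-inspection CNN.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # A few hundred real sensor frames would be used here in practice.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8     # full-integer model for MCU targets
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("inspect_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KB")
```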
Deployment frameworks and tools
After compression and optimization, engineers deploy machine learning models using inference frameworks, such as TensorFlow Lite Micro and ExecuTorch, which are the industry standards. TensorFlow Lite Micro offers hardware acceleration through its delegate system, which is especially useful on platforms with supported specialized processors.
While these frameworks handle model execution, scaling from prototype to production also requires integration with development environments, control interfaces, and connectivity options. Beyond toolchains, dedicated development platforms further streamline edge AI workflows.
Once engineers develop and deploy models, they test them under real-world industrial conditions. Validation must account for environmental variation, EMI, and long-term stability under continuous operation. Stress testing should replicate production factors such as varying line speeds, material types, and ambient conditions to confirm consistent performance and response times across operational states.
Industrial applications also require metrics beyond accuracy. Quality inspection systems must balance false positives against false negatives, where the geometric mean (GM) provides a balanced measure on imbalanced datasets common in manufacturing. Predictive maintenance workloads rely on indicators such as mean time between false positives (MTBFP) and detection latency.

Figure 2 Quality inspection systems must balance false positives against false negatives. Source: Infineon
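For reference, the geometric mean mentioned above is simply the square root of the product of sensitivity (defect recall) and specificity (good-part recall); the confusion-matrix numbers in the sketch below are hypothetical.

```python
import math

def geometric_mean(tp, fn, tn, fp):
    """GM = sqrt(sensitivity * specificity); robust on imbalanced data."""
    sensitivity = tp / (tp + fn)   # defect recall
    specificity = tn / (tn + fp)   # good-part recall
    return math.sqrt(sensitivity * specificity)

# Hypothetical inspection results: 10,000 good parts, 50 defects.
print(f"GM = {geometric_mean(tp=46, fn=4, tn=9930, fp=70):.3f}")
```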
Validated MCU-based deployments demonstrate that optimized inference—even under resource constraints—can maintain near-baseline accuracy with minimal loss.
Monitoring and maintenance strategies
Validation confirms performance before deployment, yet real-world operation requires continuous monitoring and proactive maintenance. Edge deployments demand distributed monitoring architectures that continue functioning offline, while hybrid edge-to-cloud models provide centralized telemetry and management without compromising local autonomy.
A key focus of monitoring is data drift detection, as input distributions can shift with tool wear, process changes, or seasonal variation. Monitoring drift at both device and fleet levels enables early alerts without requiring constant cloud connectivity. Secure over-the-air (OTA) updates extend this framework, supporting safe model improvements, updates, and bug fixes.
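One lightweight way to implement such a drift check at the device level is a two-sample Kolmogorov–Smirnov test comparing a reference window captured at deployment against a live window of the same feature; the window sizes, alert threshold, and simulated shift below are illustrative assumptions, not part of any specific vendor's framework.

```python
# Data-drift check sketch: two-sample KS test on a single sensor feature.
# Window sizes, threshold, and the simulated "tool wear" shift are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
reference = rng.normal(loc=0.0, scale=1.0, size=2000)   # captured at deployment
live      = rng.normal(loc=0.4, scale=1.1, size=500)    # drifted by tool wear

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"Drift alert: KS={stat:.3f}, p={p_value:.2e} -> schedule retraining")
else:
    print("Input distribution consistent with reference window")
```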
Features such as secure boot, signed updates, isolated execution, and secure storage ensure only authenticated models run in production, helping manufacturers comply with regulatory frameworks such as the EU Cyber Resilience Act.
Take, for instance, an industrial edge AI case study about predictive maintenance. A logistics operator piloted edge AI silicon on a fleet of forklifts, enabling real-time navigation assistance and collision avoidance in busy warehouse environments.
The deployment reduced safety incidents and improved route efficiency, achieving better ROI. The system proved scalable across multiple facilities, highlighting how edge AI delivers measurable performance, reliability, and efficiency gains in demanding industrial settings.
The upgraded forklifts highlighted key lessons for AI at the edge: systematic data preprocessing, balanced model training, and early stress testing were essential for reliability, while underestimating data drift remained a common pitfall.
Best practices included integrating navigation AI with existing fleet management systems, leveraging multimodal sensing to improve accuracy, and optimizing inference for low latency in real-time safety applications.
Sam Al-Attiyah is head of machine learning at Infineon Technologies.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
The post Designing edge AI for industrial applications appeared first on EDN.
Another silly simple precision 0/20mA to 4/20mA converter

A recent Design Idea (DI), “Silly simple precision 0/20mA to 4/20mA converter,” by prolific DI contributor Stephen Woodward, uses the venerable LM337 regulator in a creative configuration, along with a few passive components, to translate a 0-20 mA input current (say, from a separately powered sensor that outputs a 0-20 mA signal current) into a 4-20 mA two-wire transmitter current loop (a standard two-terminal industrial current source).
Below is another novel, ‘silly simple’ way of implementing the same function using the LM337. It relies on tapering an initial 4-mA current down to zero in proportion to the 0-20 mA input, then summing the input with this tapered 4-mA signal to create a 2-wire 4-20 mA output loop. It is loosely based on another Woodward gem [3]. Refer to Figure 1.

Figure 1 An input 0-20 mA is added to a tapered-off 4-0 mA at OUT to give an output 4-20 mA.
Wow the engineering world with your unique design: Design Ideas Submission Guide
First, imagine a ‘0 mA’ current input (input loop open). The series arrangement of R1 in parallel with ‘R2 + Pz’ (‘Rz’, nominally 250 Ω) and R3 in parallel with ‘R4 + Ps’ (‘Rs’, nominally 62.5 Ω), totaling a nominal 312.5 Ω, sets the output loop current into OUT at 0 mA + 4 mA (1.25 V/312.5 Ω), trimmed using Pz.
Now, feed a 20-mA input current and imagine it pulled from junction X and pushed into the OUT terminal. This current is sourced from the output loop ‘+’, dropping 62.5 Ω × 20 mA = 1.25 V across Rs in a direction that opposes the internal reference voltage. With proper calibration, this reduces the drop across Rz to zero, and in doing so also reduces the original 4-mA contribution through Rz into OUT to zero.
The output loop current is now equal to the input current: 20 mA + 0 mA (summed at OUT), transferred from the input loop to the output loop from OUT to IN of U1. We have converted a 0-20 mA current source input into a 2-wire loop current of 4-20 mA. The 20-mA setting is done with Ps.
Accurate current setting requires two span/zero (S/Z) adjustment passes to bring the output current to within 0.05% or (much) better. Pots should be multi-turn 3296 types or similar, but single-turn trimmers will also work fairly well, as both pots have a small trim range by design.
The performance is excellent. The input-to-output linearity of the basic circuit is 0.02%. With a small heat sink, short-term stability is within 0.02%, and the change in loop current is 0.05% over a loop voltage of 5 V to 32 V. Transfer accuracy and stability are high because we aren’t transforming the input signal, only transferring it into the output loop. Reference drift affects only the basic 4-mA current and thus has a smaller effect on overall drift. The heat sink improves drift and di/dv by a factor of 3 to 4.
For intermediate input currents, the basic 4-mA current flowing via Rz into OUT is tapered off in proportion to the 0-20 mA input current. Thus, at a 10-mA (half-scale) input current, the voltage at X changes to maintain about 500 mV across Rz, supporting a contribution of 2 mA into OUT, down from the original 4 mA set at 0-mA input. The output loop current into OUT is now the input 10 mA + 2 mA = 12 mA, which is also the halfway point of the 4-20 mA loop. Similar reasoning applies to other input/output loop current relationships.
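A quick way to sanity-check this tapering behavior is the simplified linear model below, which uses the article's nominal values (Vref = 1.25 V, Rz = 250 Ω, Rs = 62.5 Ω) and ignores the LM337's adjust-pin and quiescent currents.

```python
# Simplified linear model of the tapering action described above, using the
# article's nominal values (Vref = 1.25 V, Rz = 250 ohm, Rs = 62.5 ohm).
# It ignores the LM337's adjust-pin and quiescent currents.
VREF, RZ, RS = 1.25, 250.0, 62.5

def loop_current(i_in_mA):
    i_in = i_in_mA / 1000.0
    # Rz carries only the tapered current Iz; Rs carries Iz plus the input:
    #   Iz*Rz + (Iz + Iin)*Rs = Vref  ->  Iz = (Vref - Iin*Rs) / (Rz + Rs)
    i_z = (VREF - i_in * RS) / (RZ + RS)
    return (i_in + i_z) * 1000.0            # output loop current, mA

for i_in in (0, 5, 10, 15, 20):
    print(f"input {i_in:2d} mA -> output {loop_current(i_in):5.1f} mA")
# Prints 4.0, 8.0, 12.0, 16.0, 20.0 mA: a linear 0-20 mA to 4-20 mA map.
```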
A reverse-protection diode is recommended in the 4-20 mA loop. Current limiting should be applied to keep fault currents at safe levels. A series two-transistor current limiter with appropriate resistance values is an excellent candidate, being low-drop, low-cost, fast-acting, and free from oscillation. A 40-mA PTC ‘polyfuse’ in the loop will protect the load from a complete short across both circuits (an unlikely event).
The basic drop seen by the 0-20 mA signal source is –1 V to 0 V. Two diodes or an LED in series with the ‘+’ of the 0-20-mA input allow the source to always see a positive drop.
Regarding stability: only the 68-Ω (R3) and 270-Ω (R1) resistors need to be 25-ppm, 1% types to give low overall temperature drift, which is a significant plus. Pot drift, typically larger than that of fixed resistors, has less effect in this configuration, where the relatively high-valued pots Ps and Pz control only a small part of the main current. Larger pot values also help minimize the effect of varying pot contact resistance.
A 3-V minimum operating voltage allows as much as 1,000 Ω of loop resistance with a 24-V supply for the basic circuit.
It is a given that one of the loops will (need to) be floating. This is usually the source loop, as the instrument generating the 0-20 mA is powered from a separate supply.
Ashutosh Sapre lives and works in a large city in western India. Drifting uninspired through an EE degree way back in the late nineteen eighties, he was lucky enough to stumble across and be electrified by the Art of Electronics 1 and 2. Cut to now, he is a confirmed circuit addict, running a business designing, manufacturing and selling industrial signal processing modules. He is proud of his many dozens of design pads consisting mostly of crossed out design ideas.
Related Content/References
- Silly simple precision 0/20mA to 4/20mA converter
- A 0-20mA source current to 4-20mA loop current converter
- PWM-programmed LM317 constant current source
- https://www.radiolocman.com/shem/schematics.html?di=150983
The post Another silly simple precision 0/20mA to 4/20mA converter appeared first on EDN.
Choosing power supply components for New Space

Satellites in geostationary orbit (GEO) face a harsher environment due to plasma, trapped electrons, solar particles, and cosmic rays, with these environmental effects higher in magnitude than in low Earth orbit (LEO)-Low Inclination, LEO-Polar, and International Space Station orbits. This is the primary reason why power supplies used in these satellites must comply with stringent MIL standards for design, manufacturability, and quality.
GEO satellites circle the Earth approximately once every 24 hours at about 3 km/s, at an altitude of about 35,786 km. Because these satellites are so far from Earth, only three of them can cover the full globe.
In comparison, LEO satellites travel around the Earth at about 7.8 km/s, at an altitude of less than 1,000 km, and they can be as low as 160 km above Earth. This is far lower than GEO but still more than 10× higher than a commercial plane’s altitude of 14 km.
Total ionizing dose (TID) and single-event effects (SEEs) are two of the key radiation effects that need to be addressed by power supplies in space. Satellites placed in GEO face harsher conditions due to radiation compared with those in LEO.
Because GEO is farther from Earth, it is more susceptible to radiation; hence, the components used in GEO satellite power supplies need to be radiation-hardened (rad-hard) by design, which means all of the components must withstand TID and SEE levels as high as 100 Krad and 82 MeV·cm2/mg, respectively.
In comparison, LEO satellite components need to be radiation-tolerant, with relatively lower TID and SEE requirements. Even so, operating with no shielding against these harsh conditions may result in failure.
While individual satellites can be used for higher-resolution imaging, constellations of many identical or similar, relatively small satellites typically form a web around the Earth to provide uninterrupted coverage. By working in tandem, these constellations provide simultaneous coverage for applications such as internet services and telecommunications.
The emergence of New Space has enabled the launch of multiple smaller satellites with lighter payloads for commercial purposes. Satellite internet services are slowly and steadily competing with traditional broadband and are providing more reliable connectivity for remote areas, passenger vehicles, and even aerospace.
Microchip offers a scalable approach to space solutions based on the mission. (Source: Microchip Technology Inc.)
Configurability for customization
The configurability of power supplies is an important factor for meeting a variety of space mission specifications. Voltage levels in the electrical power bus are generally standardized to certain values; however, the voltage of the solar array is not always standardized. This calls for a redesign of all the converters in the power subsystems, depending on the nature of the mission.
This redesign increases costs and development time. Thus, it is inherently important to provide DC/DC converters and low-dropout regulators (LDOs) across the power architecture that have standard specifications while providing the flexibility for customization depending on the system and load voltages. Functions such as paralleling, synchronization, and series connection are of paramount importance for power supplies when considering the specifications of different space missions.
Size, weight, power, and cost
Due to the limited volume available and the resource-intensive task of lifting payloads into space against the pull of gravity, it is imperative to minimize footprint, volume, and weight while packing more power (kilowatts) into the available space. This calls for higher power density for space optimization and higher efficiency (>80%) to get the maximum performance out of the resources available in the power system.
The load regulations need to be optimal to make sure that the output of the DC/DC converter feeds the next stage (LDOs and direct loads), matching the regulation requirements. Additionally, the tolerances of regulation against temperature variations are key in providing ruggedness and durability.
Space satellites use solar energy as the main source to power their loads. Some of the commonly used bus voltages are 28 V, 50 V, 72 V, 100 V, and 120 V. A DC/DC converter converts these voltages to secondary voltages such as 3.3 V, 5 V, 12 V, 15 V, and 28 V. Secondary bus voltages are further converted into usable voltages such as 0.8 V, 1.2 V, and 1.5 V with the help of point-of-load regulators such as LDOs to feed the microcontrollers (MCUs) and field-programmable gate arrays (FPGAs) that drive the spacecraft loads.
A simplified power architecture for satellite applications, using Microchip’s standard rad-hard SA50-120 series of 50-W DC/DC power converters (Source: Microchip Technology Inc.)
Environmental effects in space
The space environment consists of effects such as solar plasma, protons, electrons, galactic cosmic rays, and solar flare ions. This harsh environment causes environmental effects such as displacement damage, TID, and SEEs that result in device-level effects.
The power converter considerations should be in line with the orbits in which the satellite operates, as well as the mission time. For example, GEO has more stringent radiation requirements than LEO.
The volume requirement for LEO tends to be higher due to the number of smaller satellites launched to form the constellations. The satellites’ power management faces stringent requirements and needs to comply with various MIL standards to withstand the harsh environment. The power supplies used in these satellites also need to minimize size, weight, power, and cost (SWaP-C).
Microchip provides DC/DC space converters that are suitable for these applications with the standard rad-hard SA50 series for deep space or traditional space satellites in GEO/MEO and the standard radiation-tolerant LE50 series for LEO/New Space applications. Using standard components in a non-hybrid structure (die and wire bond with hermetically sealed construction) can prevent lot jeopardy and mission schedule risk to ensure reliable and rugged solutions with faster time to market at the desired cost.
In addition to the ruggedness and SWaP-C requirements, power supply solutions also need to be scalable to cover a wide range of quality levels within the same product series. This also includes offering a range of packaging materials and qualification options to meet mission goals.
For example, Microchip’s LE50-28 isolated DC/DC power converters are available in nine variants, with single and triple outputs for optimal design configurability. The power converters have a companion EMI filter and enable engineers to design to scale and customize by choosing one to three outputs based on the voltage range needed for the end application. This series provides flexibility with up to four power converters to reach 200 W. It offers space-grade radiation tolerance with 50-Krad TID and SEE latch-up immunity of 37-MeV·cm2/mg linear energy transfer.
The space-grade LE50-28 series is based on a forward topology that offers higher efficiency and <1% output ripple. It is housed in a compact package, measuring 3.055 × 2.055 × 0.55 inches with a low weight of just 120 grams. These standard non-hybrid, radiation-tolerant devices in a surface-mount package comply with MIL-STD-461, MIL-STD-883, and MIL-STD-202.
In addition, the LE50-28 DC/DC power converters, designed for 28-V bus systems, can be integrated with Microchip’s PolarFire FPGAs, MCUs, and LX7720-RT motor control sensors for a complete electrical system solution. This enables customers to use cost-effective, standard LE50 converters to customize and configure solutions using paralleling and synchronization features to form more intricate power systems that can meet the requirements of LEO power management.
For New Space’s low- to mid-volume satellite constellations with stringent cost and schedule requirements, sub-Qualified Manufacturers List (QML) versions in plastic packages are the optimal solutions that provide the radiation tolerance of QML (space-grade) components to enable lower screening requirements for lower cost and shorter lead times. LE50 companions in this category are RTG4 FPGA plastic versions and the PIC64 high-performance spaceflight computing (PIC64-HPSC) LEO variant.
The post Choosing power supply components for New Space appeared first on EDN.
A battery charger that does even more

Multifunction devices are great…as long as you can find uses for all (or at least some) of those additional functions that you end up paying for, that is.
All other factors being equal (or at least roughly comparable), I tend to gravitate toward multifunction devices instead of a suite of single-function widget alternatives. The versatile smartphone is one obvious example of this trend; while I still own a collection of both still and video cameras, for example, they mostly collect dust on my shelves while I instead regularly reach for the front and rear cameras built into my Google Pixel phones. And most folks have already bailed on standalone cameras (if they ever even had one in the first place) long ago.
Speaking of multi-function devices, as well as of cameras, for that matter, let’s take a look at today’s teardown victim, NEEWER’s Replacement Battery and Charger Set:

It comes in three variants, supporting (and bundled with two examples of) batteries for Canon (shown here), Nikon, and Sony cameras, with MSRPs ranging from $36.49 to $73.99. It’s not only a charger, over both USB-C and micro-USB input options (a USB-A to micro-USB adapter cable is included, too), but also acts as a travel storage case for those batteries as well as memory cards:

And assuming the batteries are already charged, you can use them not only to power your camera but also to recharge an external device, such as a smartphone, via the USB-A output. My only critique would be that the USB-C connector isn’t bidirectional, too, i.e., able to do double-duty as both a charging input and an external-powering output.

As part of Amazon’s most recent early-October Prime Big Deal Days promotion, the company marked down a portion of the inventory in its Resale (formerly Warehouse) section, containing “Quality pre-owned, used, and open box products” (their words, not mine, and in summary: where Amazon resells past customer returns). I’ve regularly mentioned it in the past as a source of widgets for both my ongoing use and in teardowns, the latter often the result of my receiving something that didn’t work or was otherwise not-as-advertised, and Amazon refunding me what I paid and telling me not to bother returning it. Resale-sourced acquisitions don’t always pan out, but they do often enough (and the savings are significant enough) that I keep coming back.
Take the NEEWER Replacement Battery and Charger Set for Canon LP-E6 batteries, for example. It was already marked down from $36.49 to $26.63 by virtue of its inclusion in the Resale section, and the Prime Big Deal Days promotion knocked off an additional 25%, dropping the per-unit price to $19.97. So, I bought all three units that were available for sale, since LP-E6 batteries are compatible not only with my two Canon EOS 5D Mark IV DSLRs and my first-generation Blackmagic Design Pocket Cinema 6K video camera but also, courtesy of their ubiquity (along with that of the Sony-originated L-series, i.e., NP-F battery form factor) useful as portable power options for field monitors, flash and constant illumination sources, and the like.
From past experience with Warehouse-now-Resale-sourced acquisitions, I expected the packaging to be less-than-pristine compared to a brand-new alternative, and reality matched the lowered expectations. Here are the front and back panels of the first two devices’ outer boxes, in the first image accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, which you’ll also see in other photos in this piece:




Flip up the top, however, and the insides were a) complete and b) in cosmetically acceptable and fully functional shape. Here are the contents of the first box shown earlier, for example:

The aforementioned USB-A to micro-USB adapter cable:

One of the two included batteries:




The device outsides:






And finally, its insides:




The third device, on the other hand…when I saw the clear plastic bag that it came in, I knew I was in for trouble:



Removing the box from the bag only made matters visually, at least, worse:



And when I flipped open the top…yikes (I’d already taken out the LP-E6 batteries, which ended up looking and working fine, from the box when I snapped the following shots):






From a charging-and-powering standpoint, the device still worked fine, believe it or not. But the inability to securely attach the lid to the base rendered it of low value at best (there are always, of course, thick rubber bands as an alternative lid-securing scheme, but they’d still leave a gap).
So, I got in touch with Amazon, who gave me a full refund and told me to keep the device to do with as I wished. I relocated the batteries to my Blackmagic camera case. And then I added the battery charger to my teardown pile. On that note, by the way, I’ve intentionally waited until now to show you the packaging underside:


Case underside:


And one of the slips of literature:

This was the only one of the three devices I bought that had the same warning in all three places. If I didn’t know better, I’d think they’d foreseen what I later had planned for it!
Difficulty in diving in
Time to get inside:

As with my recent Amazon Smart Plug teardown, I had a heck of a time punching through the seemingly straightforward seam around the edges of the interior portion:

But finally, after some colorful language, along with collateral damage:

I wrenched my way inside, surmounting the seemingly ineffective glue above the PCB in the process. The design’s likely hardware modularity is perhaps obvious; the portion containing the battery bays is unique to a particular product proliferation, with the remainder common to all three variants.

Remove the three screws holding the PCB in place:

And it lifts right out:

That chunk out of one corner of the wire-wound inductor in the middle came courtesy of yours truly and his habit of blindly jabbing various tools inside the device during the ham-fisted disassembly process. The foam along the left edge prevents the underside LEDs (which you’ll see shortly) from shining upward, instead redirecting their outputs out the front.
IC conundrums
The large IC to the right of the foam strip, marked as follows:
0X895D45
is an enigma; my research of both the topside marked text (via traditional Google search) and the image (via Google Lens) was fruitless. I’m guessing that it’s the power management controller, handling both battery charging and output sequencing functions; more precise information from knowledgeable readers would be appreciated in the comments.
The two identical ICs along the top edge, in eight-lead SOP packages, were unfortunately no easier to ID. They’re marked as follows:
PSD (company logo) AKJG
PAP8801
And along the right edge is another IC, also in an eight-lead SOP but this time with the leads connected to the package’s long edges, and top-side stamped thusly:
SPT (company logo) SP1081F
25CT03
This last one I’m more confident of. It appears to be the SP1081F synchronous buck regulator from Chinese semiconductor supplier Wuxi Silicon Power Microelectronics. And intermingled with all these ICs are various surface-mounted passives and such.
For additional perspective, next are some side-view shots:
And, last but not least, here’s the PCB underside, revealing the four aforementioned LEDs, a smattering of test points, and not much else (unless you’re into traces, that is):
There you have it! As always, please share your insights in the comments.
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Building a Battery Charger with the CC/CV Method
- Power Tips #97: Shape an LLC-SRC gain curve to meet battery charger needs
- Low-cost NiCd battery charger with charge level indicator
- A budget battery charger that also elevates blood pressure
The post A battery charger that does even more appeared first on EDN.
The shift to 800-VDC power architectures in AI factories

The wide adoption of artificial-intelligence models has led to a redesign of data center infrastructure. Traditional data centers are being replaced with AI factories, specifically designed to meet the computational capacity and power requirements required by today’s machine-learning and generative AI workloads.
Data centers traditionally relied on a microprocessor-centric (CPU) architecture to support cloud computing, data storage, and general-purpose compute needs. However, with the introduction of large language models and generative AI applications, this architecture can no longer keep pace with the growing demand for computational capacity, power density, and power delivery required by AI models.
AI factories, by contrast, are purpose-built for large-scale training, inference, and fine-tuning of machine-learning models. A single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range. According to a report from the International Energy Agency, global data center electricity consumption is expected to double from about 415 TWh in 2024 to approximately 945 TWh by 2030, representing almost 3% of total global electricity consumption.
To meet this power demand, a simple data center upgrade would be insufficient. It is therefore necessary to introduce an architecture capable of delivering high efficiency and greater power density.
Following a trend already seen in the automotive sector, particularly in electric vehicles, Nvidia Corporation presented at Computex 2025 an 800-VDC power architecture designed to efficiently support the multi-megawatt power demand required by the compute racks of next-generation AI factories.
Power requirements of AI factories
The power profile of an AI factory differs significantly from that of a traditional data center. Because of the large number of GPUs employed, an AI factory’s architecture requires high power density, low latency, and broad bandwidth.
To maximize computational throughput, an increasing number of GPUs must be packed into ever-smaller spaces and interconnected using high-speed copper links. This inevitably leads to a sharp rise in per-rack power demand, increasing from just a few dozen kilowatts in traditional data centers to several hundred kilowatts in AI factories.
The ability to deliver such high current levels using traditional low-voltage rails, such as 12, 48, and 54 VDC, is both technically and economically impractical. Resistive power losses, as shown in the following formula, increase with the square of the current, leading to a significant reduction in efficiency and requiring copper connections with extremely large cross-sectional areas.
P(resistive loss) = V × I = I² × R
To support high-speed connectivity among multiple GPUs, Nvidia developed the NVLink point-to-point interconnect system. Now in its fifth generation, NVLink enables thousands of GPUs to share memory and computing resources for training and inference tasks as if they were operating within a single address space.
A single Nvidia GPU based on the Blackwell architecture (Figure 1) supports up to 18 NVLink connections at 100 GB/s, for a total bandwidth of 1.8 TB/s, twice that of the previous generation and 14× higher than PCIe Gen5.
Figure 1: Blackwell-architecture GPUs integrate two reticle-limit GPU dies into a single unit, connected by a 10-TB/s chip-to-chip link. (Source: Nvidia Corporation)
800-VDC power architecture
Traditional data center power distribution typically uses multiple, cascading power conversion stages, including utility medium-voltage AC (MVAC), low-voltage AC (LVAC, typically 415/480 VAC), uninterruptible power supply, and power distribution units (PDUs). Within the IT rack, multiple power supply units (PSUs) execute an AC-to-DC conversion before final DC-to-DC conversions (e.g., 54 VDC to 12 VDC) on the compute tray itself.
This architecture is inefficient for three main reasons. First, each conversion stage introduces power losses that limit overall efficiency. Second, the low-voltage rails must carry high currents, requiring large copper busbars and connectors. Third, the management of three-phase AC power, including phase balancing and reactive power compensation, requires a complex design.
Conversely, the transition to an 800-VDC power backbone minimizes I2R resistive losses. By doubling the distribution voltage from the industry-standard high end (e.g., 400 VDC) to 800 VDC, the system can deliver the same power output while halving the current (P = V × I), reducing power loss by a factor of four for a given conductor resistance.
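To put rough numbers on the benefit, the sketch below compares distribution current and busway I²R loss for the same rack power at 400 VDC and 800 VDC; the 1-MW load and 2-mΩ round-trip busway resistance are illustrative assumptions, not figures from Nvidia or any vendor cited here.

```python
# Back-of-the-envelope I^2*R comparison for one rack feed. The 1-MW load and
# 2-milliohm busway resistance are illustrative assumptions only.
P_LOAD = 1.0e6       # W, rack power
R_BUS  = 0.002       # ohm, round-trip busway/connector resistance (assumed)

for v_bus in (400.0, 800.0):
    i = P_LOAD / v_bus                 # distribution current, A
    p_loss = i ** 2 * R_BUS            # resistive loss in the busway, W
    print(f"{v_bus:5.0f} VDC: {i:7.1f} A, busway loss {p_loss/1000:6.2f} kW")
# Doubling the bus voltage halves the current and quarters the I^2*R loss.
```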
By adopting this solution, next-generation AI factories will have a centralized primary AC-to-DC conversion outside the IT data hall, capable of converting MVAC directly to a regulated 800-VDC bus voltage. This 800 VDC can then be distributed directly to the compute racks via a simpler, two-conductor DC busway (positive and return), eliminating the need for AC switchgear, LVAC PDUs, and the inefficient AC/DC PSUs within the rack.
Nvidia’s Kyber rack architecture is designed to leverage this simplified bus. Power conversion within the rack is reduced to a single-stage, high-ratio DC-to-DC conversion (800 VDC to the 12-VDC rail used by the GPU complex), often employing highly efficient LLC resonant converters. This late-stage conversion minimizes resistive losses, provides more space within the rack for compute, and improves thermal management.
This solution is also capable of scaling power delivery from the current 100-kW racks to over 1 MW per rack using the same infrastructure, ensuring that the AI factory’s power-delivery infrastructure can support future increased GPU energy requirements.
The 800-VDC architecture also mitigates the volatility of synchronous AI workloads, which are characterized by short-duration, high-power spikes. Supercapacitors located near the racks help attenuate sub-second peaks, while battery energy storage systems connected to the DC bus manage slower events (seconds to minutes), decoupling the AI factory’s power demand from the grid’s stability requirements.
The role of wide-bandgap semiconductors
The implementation of the 800-VDC architecture can benefit from the superior performance and efficiency offered by wide-bandgap semiconductors such as silicon carbide (SiC) and gallium nitride (GaN).
SiC MOSFETs are the preferred technology for the high-voltage front-end conversion stages (e.g., AC/DC conversion of 13.8-kV utility voltage to 800 VDC, or in solid-state transformers). SiC devices, typically rated for 1,200 V or higher, offer higher breakdown voltage and lower conduction losses compared with silicon at these voltage levels, despite operating at moderately high switching frequencies. Their maturity and robustness make them the best candidates for handling the primary power entry point into the data center.
GaN HEMTs, on the other hand, are suitable for high-density, high-frequency DC/DC conversion stages within the IT rack (e.g., 800 VDC to 54 VDC or 54 VDC to 12 VDC). GaN’s material properties, such as higher electron mobility, lower specific on-resistance, and reduced gate charge, enable switching frequencies into the megahertz range.
This high-frequency operation permits the use of smaller passive components (inductors and capacitors), reducing the size, weight, and volume of the converters. GaN-based converters have demonstrated power densities exceeding 4.2 kW/l, ensuring that the necessary power conversion stages can fit within the constrained physical space near the GPU load, maximizing the compute-to-power-delivery ratio.
Market readiness
Leading semiconductor companies, including component manufacturers, system integrators, and silicon providers, are actively collaborating with Nvidia to develop full portfolios of SiC, GaN, and specialized silicon components to support the supply chain for this 800-VDC transition.
For example, Efficient Power Conversion (EPC), a company specializing in advanced GaN-based solutions, has introduced the EPC91123 evaluation board, a compact, GaN-based 6-kW converter that supports the transition to 800-VDC power distribution in emerging AI data centers.
The converter (Figure 2) steps 800 VDC down to 12.5 VDC using an LLC topology in an input-series, output-parallel (ISOP) configuration. Its GaN design delivers high power density, occupying under 5,000 mm2 with a height of 8 mm, well-suited for tightly packed server boards. Placing the conversion stage close to the load reduces power losses and increases overall efficiency.
Figure 2: The EPC GaN converter evaluation board integrates the 150-V EPC2305 and the 40-V EPC2366 GaN FETs. (Source: Efficient Power Conversion)
Navitas Semiconductor, a semiconductor company offering both SiC and GaN devices, has also partnered with Nvidia to develop an 800-VDC architecture for the emerging Kyber rack platform. The system uses Navitas’s GaNFast, GaNSafe, and GeneSiC technologies to deliver efficient, scalable power tailored to heavy AI workloads.
Navitas introduced 100-V GaN FETs in dual-side-cooled packages designed for the lower-voltage DC/DC stages used on GPU power boards, along with a new line of 650-V GaN FETs and GaNSafe power ICs that integrate control, drive, sensing, and built-in protection functions. Completing the portfolio are GeneSiC devices, built on the company’s proprietary trench-assisted planar technology, that offer one of the industry’s widest voltage ranges—from 650 V to 6,500 V—and are already deployed in multiple megawatt-scale energy storage systems and grid-tied inverter projects.
Alpha and Omega Semiconductor Limited (AOS) also provides a portfolio of components (Figure 3) suitable for the demanding power conversion stages in an AI factory’s 800-VDC architecture. Among these are the Gen3 AOM020V120X3 and the top-side-cooled AOGT020V120X2Q SiC devices, both suited for use in power-sidecar configurations or in single-step systems that convert 13.8-kV AC grid input directly to 800 VDC at the data center’s edge.
Inside the racks, AOS supports high-density power delivery through its 650-V and 100-V GaN FET families, which efficiently step the 800-VDC bus down to the lower-voltage rails required by GPUs.
In addition, the company’s 80-V and 100-V stacked-die MOSFETs, along with its 100-V GaN FETs, are offered in a shared package footprint. This commonality gives designers flexibility to balance cost and efficiency in the secondary stage of LLC converters as well as in 54-V to 12-V bus architectures. AOS’s stacked-die packaging technology further boosts achievable power density within secondary-side LLC sockets.
Figure 3: AOS’s portfolio supports 800-VDC AI factories. (Source: Alpha and Omega Semiconductor Limited)
Other leading semiconductor companies also announced their readiness to support the transition to 800-VDC power architecture, including Renesas Electronics Corp. (GaN power devices) and Innoscience (GaN power devices), onsemi (SiC and silicon devices), Texas Instruments Inc. (GaN and silicon power modules and high-density power stages), and Infineon Technologies AG (GaN, SiC, and silicon power devices).
For example, Texas Instruments recently released a 30-kW reference design for powering AI servers. The design uses a two-stage architecture built around a three-phase, three-level flying-capacitor PFC converter, which is then followed by a pair of delta-delta three-phase LLC converters. Depending on system needs, the unit can be configured to deliver a unified 800-VDC output or split into multiple isolated outputs.
Infineon, besides offering its CoolSiC, CoolGaN, CoolMOS, and OptiMOS families of power devices, also introduced a 48-V smart eFuse family and a reference board for hot-swap controllers, designed for 400-V and 800-V power architectures in AI data centers. This enables developers to design a reliable, robust, and scalable solution to protect and monitor energy flow.
The reference design (Figure 4) centers on Infineon’s XDP hot-swap controller. Among high-voltage devices suitable for a DC bus, the 1,200-V CoolSiC JFET offers the right balance of low on-resistance and ruggedness for hot-swap operation. Combined with this SiC JFET technology, the digital controller can drive the device in linear mode, allowing the power system to remain safe and stable during overvoltage conditions. The reference board also lets designers program the inrush-current profile according to the device’s safety operating area, supporting a nominal thermal design power of 12 kW.
Figure 4: Infineon’s XDP hot-swap controller reference design supports 400-V/800-V data center architectures. (Source: Infineon Technologies AG)
The post The shift to 800-VDC power architectures in AI factories appeared first on EDN.
Delay lines demystified: Theory into practice

Delay lines are more than passive timing tricks—they are deliberate design elements that shape how signals align, synchronize, and stabilize across systems. From their theoretical roots in controlled propagation to their practical role in high-speed communication, test equipment, and signal conditioning, delay lines bridge abstract timing concepts with hands-on engineering solutions.
This article unpacks their principles, highlights key applications, and shows how understanding delay lines can sharpen both design insight and performance outcomes.
Delay lines: Fundamentals and classifications
Delay lines remain a fundamental building block in circuit design, offering engineers a straightforward means of controlling signal timing. From acoustic propagation experiments to precision imaging in optical coherence tomography, these elements underpin a wide spectrum of applications where accurate delay management is critical.
Although delay lines are ubiquitous, many engineers rarely encounter their underlying principles. At its core, a delay line is a device that shifts a signal in time, a deceptively simple function with wide-ranging utility. Depending on the application, this capability finds its way into countless systems. Broadly, delay lines fall into three physical categories—electrical, optical, and mechanical—and, from a signal-processing perspective, into two functional classes: analog and digital.
Analog delay lines (ADLs), often referred to as passive delay lines, are built from fundamental electrical components such as capacitors and inductors. They can process both analog and digital signals, and their passive nature means some attenuation occurs between the input and output terminals.
In contrast, digital delay lines (DDLs), commonly described as active delay lines, operate exclusively on digital signals. Constructed entirely from digital logic, they introduce no attenuation between terminals. Among DDL implementations, CMOS technology remains by far the most widely adopted logic family.
When classified by time control, delay lines fall into two categories: fixed and variable. Fixed delay lines provide a preset delay period determined by the manufacturer, which cannot be altered by the circuit designer. While generally less expensive, they are often less flexible in practical use.
Variable delay lines, by contrast, allow designers to adjust the magnitude of the delay. However, this tunability is bounded—the delay can only be varied within limits specified by the manufacturer, rather than across an unlimited range.
As a quick aside, bucket-brigade delay lines (BBDs) represent a distinctive form of analog delay. Implemented as a chain of capacitors clocked in sequence, they pass the signal step-by-step much like a line of workers handing buckets of water. The result is a time-shifted output whose delay depends on both the number of stages and the clock frequency.
While limited in bandwidth and prone to noise, BBDs became iconic in audio processing—powering classic chorus, flanger, and delay effects—and remain valued today for their warm, characterful sound despite the dominance of digital alternatives.
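As a quick numeric illustration, the sketch below applies the usual two-phase BBD relation, delay ≈ N/(2 × fclock), to a hypothetical 512-stage device; check the stage count and usable clock range against the datasheet of the actual part.

```python
# Delay of a two-phase bucket-brigade device: t_delay ~= N / (2 * f_clock).
# The 512-stage count and the clock frequencies below are illustrative.
N_STAGES = 512

for f_clock in (10e3, 40e3, 100e3, 200e3):          # Hz
    t_delay_ms = N_STAGES / (2.0 * f_clock) * 1e3
    print(f"f_clock = {f_clock/1e3:6.1f} kHz -> delay = {t_delay_ms:6.2f} ms")
# Lower clock = longer delay, but also lower usable audio bandwidth.
```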
Other specialized forms of delay lines include acoustic devices (often ultrasonic), magnetostrictive implementations, surface acoustic wave (SAW) structures, and electromagnetic bandgap (EBG) delay lines. These advanced designs exploit material properties or engineered periodic structures to achieve controlled signal delay in niche applications ranging from ultrasonic sensing to microwave phased arrays.
There are more delay line types, but I deliberately omitted them here to keep the focus on the most widely used and practically relevant categories for designers.

Figure 1 The nostalgic MN3004 BBD showcases its classic package and vintage analog heritage. Source: Panasonic
Retro Note: Many grey-bearded veterans can recall the era when memory was not etched in silicon but rippled through wire. In magnetostrictive delay line memories, bits were stored as acoustic pulses traveling through nickel wire. A magnetic coil would twist the wire to launch a pulse—which propagated mechanically—and was sensed at the far end, then amplified and recirculated.
These memories were sequential, rhythmic, and beautifully analog, echoing the pulse logic of early radar and computing systems. Mercury delay line memories offered a similar acoustic storage medium in liquid form, prized for its stable acoustic properties. Though long obsolete, they remain a tactile reminder of a time when data moved not as electrons, but as vibrations.
And from my recollection of color television delay lines, a delay line keeps the faster, high-definition luminance signal (Y) in step with the slower, low-definition chrominance signal (C). Because the narrow-band chrominance requires more processing than the wide-band luminance, a brief but significant delay is introduced. The delay line compensates for this difference, ensuring that both signals begin scanning across the television screen in perfect synchrony.
Selecting the right delay line
It’s now time to focus on choosing a delay line that will function effectively in your circuit. To ensure compatibility with your electrical network, you should pay close attention to three key specifications. The first is line type, which determines whether you need a fixed or variable delay line and whether it must handle analog or digital signals.
The second is rise time, generally defined as the interval required for a signal’s magnitude to increase from 10% to 90% of its final amplitude. The third is time delay, the actual duration by which the delay line slows down the signal, expressed in units of time. Considering these parameters together will guide you toward a delay line that matches both the functional and performance requirements of your design.
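To ground the rise-time definition, the short sketch below measures the 10%-to-90% interval on a simulated RC step response; the time constant and sample rate are arbitrary illustrative values.

```python
# Sketch: measuring 10%-90% rise time on a sampled edge (here a simulated RC
# step response; tau and the sample rate are arbitrary illustrative values).
import numpy as np

tau, fs = 10e-9, 100e9                       # 10-ns time constant, 100 GS/s
t = np.arange(0, 100e-9, 1 / fs)
v = 1.0 - np.exp(-t / tau)                   # normalized step response

t10 = t[np.argmax(v >= 0.1)]                 # first sample at 10% of final value
t90 = t[np.argmax(v >= 0.9)]                 # first sample at 90% of final value
print(f"10-90% rise time ~= {(t90 - t10) * 1e9:.1f} ns "
      f"(theory: 2.2*tau = {2.2 * tau * 1e9:.1f} ns)")
```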

Figure 2 A retouched snip from the legacy DS1021 datasheet shows its key specifications. Source: Analog Devices
Keep in mind that the DS1021 device, once a staple programmable delay line, is now obsolete. Comparable functionality is available in the DS1023 or in modern timing ICs such as the LTC6994, which deliver finer programmability and ongoing support.
Digital-to-time converters: Modern descendants of delay lines
Digital-to-time converters (DTCs) represent the contemporary evolution of delay line concepts. Whereas early delay lines stored bits as acoustic pulses traveling through wire or mercury, a DTC instead maps a digital input word directly into a precise time delay or phase shift.
This enables designers to control timing edges with sub-nanosecond accuracy, a capability central to modern frequency synthesizers, clock generation, and high-speed signal processing. In effect, DTCs carry forward the spirit of delay lines—transforming digital code into controlled timing—but with the precision, programmability, and integration demanded by today’s systems.
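In its simplest form, that mapping is just the input code scaled by the converter's time resolution, as the toy model below shows; the 10-bit width and 10-ps LSB are hypothetical values for illustration, not the specification of any particular device.

```python
# Hypothetical digital-to-time converter model: a 10-bit code word scaled by a
# 10-ps LSB. Both figures are illustrative, not any specific device's spec.
LSB_PS, N_BITS = 10.0, 10

def dtc_delay_ps(code: int) -> float:
    code = max(0, min(code, 2**N_BITS - 1))    # clamp to the converter range
    return code * LSB_PS

for code in (0, 1, 512, 1023):
    print(f"code {code:4d} -> delay {dtc_delay_ps(code):7.1f} ps")
# Full scale: 1023 * 10 ps ~= 10.23 ns of programmable edge placement.
```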
Coming to practical points on DTC, unlike classic delay line ICs that were sold as standalone parts, DTCs are typically embedded within larger timing devices such as fractional-N PLLs, clock-generation ICs, or implemented in FPGAs and ASICs. Designers will not usually find a catalog chip labeled “DTC,” but they will encounter the function inside modern frequency synthesizers and RF transceivers.
This integration reflects the shift from discrete delay elements to highly integrated timing blocks, where DTCs deliver picosecond-level resolution, built-in calibration, and jitter control as part of a broader system-on-chip (SoC) solution.
Wrap-up: Delay lines for makers
For hobbyists and makers, the PT2399 IC has become a refreshing antidote to the fog of complexity.

Figure 3 PT2399’s block diagram illustrates internal functional blocks. Source: PTC
Originally designed as a digital echo processor, it integrates a simple delay line engine that can be coaxed into audio experiments without the steep learning curve of PLLs or custom DTC blocks. With just a handful of passive components, PT2399 lets enthusiasts explore echoes, reverbs, and time-domain tricks, inspiring them to get their hands dirty with audio and delay line projects.
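Under the hood, a RAM-based delay of this kind follows a simple relationship: the delay equals the number of stored samples divided by the sampling clock, and slowing the clock lengthens the delay. The short sketch below illustrates that scaling; the 44,000-sample figure and the clock values are illustrative assumptions, not PT2399 datasheet numbers.

```python
# A RAM-based audio delay line stores samples in internal memory and clocks
# them out later, so delay = stored_samples / sample_rate. Slowing the
# internal clock (a larger resistor on the clock-setting pin) lengthens the
# delay. The sample count and clock values are illustrative assumptions.

def ram_delay_ms(stored_samples: int, sample_rate_hz: float) -> float:
    """Delay in milliseconds for a RAM-based delay line."""
    return 1e3 * stored_samples / sample_rate_hz

for fs_hz in (2.0e6, 1.0e6, 0.5e6):          # hypothetical clock settings
    print(f"clock {fs_hz/1e6:.1f} MHz -> delay ~ {ram_delay_ms(44_000, fs_hz):.0f} ms")
```

This is also why longer echo settings from a sampled delay line sound darker: stretching the delay by lowering the clock shrinks the usable audio bandwidth at the same time.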
In many ways, it democratizes the spirit of delay lines, bringing timing control out of the lab and into the workshop, where curiosity and soldering irons meet. And yes, I will follow up with more advanced design pointers for the seasoned crowd, but only after a few lines of delay.
Well, delay lines may have shifted from acoustic pulses to embedded timing blocks, but they still invite engineers to explore timing hands‑on.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Trip points for IC-timing analysis
- Timing is everything in SOC design
- On-chip variation and timing closure
- Timing semiconductors get software aid
- Deriving design margins for successful timing closure
The post Delay lines demystified: Theory into practice appeared first on EDN.
CES 2026: AI, automotive, and robotics dominate

If the Consumer Electronics Show (CES) is a benchmark for what’s next in the electronic component industry, you’ll find that artificial intelligence permeates every sector, from consumer electronics and wearables to automotive and robotics. Many chipmakers are placing big bets on edge AI as a key growth area, along with robotics and IoT.
Here’s a sampling of the latest devices and technologies launched at CES 2026, covering AI advances for automotive, robotics, and wearables applications.
AI SoCs, chiplets, and development
Ambarella Inc. announced its CV7 edge AI vision system-on-chip (SoC), optimized for a wide range of AI perception applications, such as advanced AI-based 8K consumer products (action and 360° cameras), multi-imager enterprise security cameras, robotics (aerial drones), industrial automation, and high-performance video conferencing devices. The 4-nm SoC provides simultaneous multi-stream video and advanced on-device edge AI processing while consuming very low power.
The CV7 may also be used for multi-stream automotive designs, particularly for those running convolutional neural networks (CNNs) and transformer-based networks at the edge, such as AI vision gateways and hubs in fleet video telematics, 360° surround-view and video-recording applications, and passive advanced driver-assistance systems (ADAS).
Compared with its predecessor, the CV7 consumes 20% less power, thanks in part to Samsung’s 4-nm process technology, which is Ambarella’s first on this node, the company said. It incorporates Ambarella’s proprietary AI accelerator, image-signal processor (ISP), and video encoding, together with Arm cores, I/Os, and other functions for an efficient AI vision SoC.
The high AI performance is powered by Ambarella’s proprietary, third-generation CVflow AI accelerator, with more than 2.5× AI performance over the previous-generation CV5 SoC. This allows the CV7 to support a combination of CNNs and transformer networks, running in tandem.
In addition, the CV7 provides higher-performance ISP, including high dynamic range (HDR), dewarping for fisheye cameras, and 3D motion-compensated temporal filtering with better image quality than its predecessor, thanks to both traditional ISP techniques and AI enhancements. It provides high image quality in low light, down to 0.01 lux, as well as improved HDR for video and images.
Other upgrades include hardware-accelerated video encoding (H.264, H.265, MJPEG), which boosts encode performance by 2× over the CV5, and an upgrade of the on-chip general-purpose processing to a quad-core Arm Cortex-A73, offering 2× higher CPU performance over the previous SoC. It also provides a 64-bit DRAM interface, delivering a significant improvement in available DRAM bandwidth compared with the CV5, Ambarella said. CV7 SoC samples are available now.
Ambiq Micro Inc. introduced its Atomiq SoC, which features what the company claims is the industry’s first ultra-low-power neural processing unit (NPU) built on its Subthreshold Power Optimized Technology (SPOT) platform. The device is designed for real-time, always-on AI at the edge.
The SPOT-optimized NPU is claimed to be the first to use sub- and near-threshold voltage operation for AI acceleration, delivering leading power efficiency for complex edge AI workloads. It is based on the Arm Ethos-U85 NPU, which supports sparsity and on-the-fly decompression, enabling compute-intensive workloads to run directly on-device with 200 GOPS of AI performance.
It also incorporates SPOT-based ultra-wide-range dynamic voltage and frequency scaling that enables operation at lower voltage and lower power than previously possible, Ambiq said, making room in the power budget for higher levels of intelligence.
Ambiq said the Atomiq SoC enables a new class of high-performance, battery-powered devices that were previously impractical due to power and thermal constraints. One example is smart cameras and security for always-on, high-resolution object recognition and tracking without frequent recharging or active cooling.
For development, Ambiq offers the Helia AI platform, together with its AI development kits and the modular neuralSPOT software development kit.
Ambiq’s Atomiq SoC (Source: Ambiq Micro Inc.)
On the development side, Cadence Design Systems Inc. and its IP partners are delivering pre-validated chiplets, targeting physical AI, data center, and high-performance computing (HPC) applications. Cadence announced at CES a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. Initial IP partners include Arm, Arteris, eMemory, M31 Technology, Silicon Creations, and Trilinear Technologies, as well as silicon analytics partner proteanTecs.
The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets. To help reduce risk, Cadence is also collaborating with Samsung Foundry to build out a silicon prototype demonstration of the Cadence physical AI chiplet platform. This includes pre-integrated partner IP on the Samsung Foundry SF5A process.
Extending its close collaboration with Arm, Cadence will use Arm’s advanced Zena Compute Subsystem and other essential IP for the physical AI chiplet platform and chiplet framework. The solutions will meet edge AI processing requirements for automobiles, robotics, and drones, as well as standards-based I/O and memory chiplets for data center, cloud, and HPC applications.
These chiplet architectures are standards-compliant for broad interoperability across the chiplet ecosystem, including the Arm Chiplet System Architecture and future OCP Foundational Chiplet System Architecture. Cadence’s Universal Chiplet Interconnect Express (UCIe) IP provides industry-standard die-to-die connectivity, with a protocol IP portfolio that enables fast integration of interfaces such as LPDDR6/5X, DDR5-MRDIMM, PCI Express 7.0, and HBM4.
Cadence’s physical AI chiplet platform (Source: Cadence Design Systems Inc.)
NXP Semiconductors N.V. launched its eIQ Agentic AI Framework at CES 2026, aimed at simplifying agentic AI development and deployment for both expert and novice device makers. It is one of the first solutions to enable agentic AI development at the edge, according to the company. Working with NXP’s secure edge AI hardware, the framework targets autonomous AI systems at the edge and eliminates development bottlenecks through deterministic real-time decision-making and multi-model coordination.
Offering low latency and built-in security, the eIQ Agentic AI Framework is designed for real-time, multi-model agentic workloads, including applications in robotics, industrial control, smart buildings, and transportation. A few examples cited include instantly controlling factory equipment to mitigate safety risks, alerting medical staff to urgent conditions, updating patient data in real time, and autonomously adjusting HVAC systems, without cloud connectivity.
Expert developers can integrate sophisticated, multi-agent workflows into existing toolchains, while novice developers can quickly build functional edge-native agentic systems without deep technical experience.
The framework integrates hardware-aware model preparation and automated tuning workflows. It enables developers to run multiple models in parallel, including vision, audio, time series, and control, while maintaining deterministic performance in constrained environments, NXP said. Workloads are distributed across CPU, NPU, and integrated accelerators using an intelligent scheduling engine.
The eIQ Agentic AI Framework supports the i.MX 8 and i.MX 9 families of application processors and Ara discrete NPUs. It aligns with open agentic standards, including Agent to Agent and Model Context Protocol.
NXP has also introduced its eIQ AI Hub, a cloud-based developer platform that gives users access to edge AI development tools for faster prototyping. Developers can deploy on cloud-connected hardware boards but still have the option for on-premise deployments.
NXP’s Agentic AI framework (Source: NXP Semiconductors N.V.)
Sensing solutions
Bosch Sensortec launched its BMI5 motion sensor platform at CES 2026, targeting high-precision performance for a range of applications, including immersive XR systems, advanced robotics, and wearables. The new generation of inertial sensors—BMI560, BMI563, and BMI570—is built on the same hardware and is adapted through intelligent software.
Based on Bosch’s latest MEMS architecture, these inertial sensors, housed in an LGA package, claim ultra-low noise and exceptional vibration robustness. They offer twice the full-scale range of the previous generation. Key specifications include a latency of less than 0.5 ms, combined with a time increment of approximately 0.6 µs, and a timing resolution of 1 ns, which can deliver responsive motion tracking in highly dynamic environments.
The sensors also leverage a programmable edge AI classification engine that supports always-on functionality by analyzing motion patterns directly on the sensor. This reduces system power consumption and accelerates customer-specific use cases, the company said.
The BMI560, optimized for XR headsets and glasses, delivers low noise, low latency, and precise time synchronization. Its advanced OIS+ performance helps capture high-quality footage even in dynamic environments for smartphones and action cameras.
Targeting robotics and XR controllers, the BMI563 offers an extended full-scale range with the platform’s vibration robustness. It supports simultaneous localization and mapping, high dynamic XR motion tracking, and motion-based automatic scene tagging in action cameras.
The BMI570, optimized for wearables and hearables, delivers activity tracking, advanced gesture recognition, and accurate head-orientation data for spatial audio. Thanks to its robustness, it is suited for next-generation wearables and hearables.
Samples are now available for direct customers. High-volume production is expected to start in the third quarter of 2026.
Bosch also announced the BMI423 inertial measurement unit (IMU) at CES. The BMI423 IMU offers an extended measurement range of ±32 g (accelerometer) and ±4,000 dps (gyroscope), which enable precise tracking of fast, dynamic motion, making it suited for wearables, hearables, and robotics applications.
The BMI423 delivers low current consumption of 25 µA for always-on, acceleration-based applications in small devices. Other key specifications include low noise levels of 5.5 mdps/√Hz (gyro) and 90 µg/√Hz (≤ 8 g) or 120 µg/√Hz (≥ 16 g) (accelerometer), along with several interface options including I3C, I2C, and serial peripheral interface (SPI).
For wearables and hearables, the BMI423 integrates voice activity detection based on bone-conduction sensing, which helps save power while enhancing privacy, Bosch said. The sensor detects when a user is speaking and activates the microphone only when required. Other on-board functions include wrist-gesture recognition, multi-tap detection, and step counting, allowing the main processor to remain in sleep mode until needed and extending battery life in compact devices such as smartwatches, earbuds, and fitness bands.
The BMI423 is housed in a compact 2.5 × 3 × 0.8-mm LGA package for space-constrained devices. It will be available through Bosch Sensortec’s distribution partners starting in the third quarter of 2026.
Bosch Sensortec’s BMI563 IMU for robotics (Source: Bosch Sensortec)
Also targeting hearables and wearables, TDK Corp. launched a suite of InvenSense SmartMotion custom sensing solutions for true wireless stereo (TWS) earbuds, AI glasses, augmented-reality eyewear, smartwatches, fitness bands, and other IoT devices. The three newest IMUs are based on TDK’s latest ultra-low-power, high-performance ICM-456xx family that offers edge intelligence for consumer devices at the highest motion-tracking accuracy, according to the company.
Instead of relying on central processors, SmartMotion on-chip software offloads motion-tracking computation to the motion sensor itself so that intelligent decisions can be made locally, allowing other parts of the system to remain in low-power mode, TDK said. In addition, the sensor fusion algorithm and machine-learning capability are reported to deliver seamless motion sensing with minimal software effort by the customer.
The SmartMotion solutions, based on the ICM-456xx family of six-axis IMUs, include the SmartMotion ICM-45606 for TWS applications such as earbuds, headphones, and other hearable products; the SmartMotion ICM-45687 for wearable and IoT devices; and the SmartMotion for Smart Glasses ICM-45685. The ICM-45685 adds new features through its on-chip sensor fusion algorithms, including wear detection (sensing whether users are putting glasses on or taking them off) and vocal vibration detection for identifying the source of speech. It also enables high-precision head-orientation tracking, optical/electronic image stabilization, intuitive UI control, posture recognition, and real-time translation.
TDK’s SmartMotion ICM-45685 (Source: TDK Corp.)
TDK also announced a new group company, TDK AIsight, to address technologies needed for AI glasses. The company will focus on the development of custom chips, cameras, and AI algorithms enabling end-to-end system solutions. This includes combining software technologies such as eye intent/tracking and multiple TDK technologies, such as sensors, batteries, and passive components.
As part of the launch, TDK AIsight introduced the SED0112 microprocessor for AI glasses. The next-generation, ultra-low-power digital-signal processor (DSP) platform integrates a microcontroller (MCU), state machine, and hardware CNN engine. The built-in hardware CNN architecture is optimized for eye intent. The MCU features ultra-low-power DSP processing, eyeGenI sensors, and connection to a host processor.
The SED0112, housed in a 4.6 × 4.6-mm package, supports the TDK AIsight eyeGI software and multiple vision sensors at different resolutions. Commercial samples are available now.
SDV devices and development
Infineon Technologies AG and Flex launched their Zone Controller Development Kit. The modular design for zone control units (ZCUs) is aimed at accelerating the development of software-defined-vehicle (SDV)-ready electrical/electronic architectures. Delivering a scalable solution, the development kit combines about 30 unique building blocks.
With the building block approach, developers can right-size their designs for different implementations while preserving feature headroom for future models, the company said. The design platform enables over 50 power distribution, 40 connectivity, and 10 load control channels for evaluation and early application development. A dual MCU plug-on module is available for high-end ZCU implementations that need high I/O density and computational power.
The development kit enables all essential zone control functions, including I²t (ampere-squared seconds) monitoring, overcurrent protection, overvoltage protection, capacitive load switching, reverse-polarity protection, secure data routing with hardware accelerators, A/B swap for over-the-air software updates, and cybersecurity. The pre-validated hardware combines automotive semiconductor components from Infineon, including AURIX MCUs, OPTIREG power supply, PROFET and SPOC smart power switches, and MOTIX motor control solutions with Flex’s design, integration, and industrialization expertise. Pre-orders for the Zone Controller Development Kit are open now.
Infineon and Flex’s Zone Controller Development Kit (Source: Infineon Technologies AG)
Infineon also announced a deeper collaboration with HL Klemove to advance technologies in vehicle electronic architectures for SDVs and autonomous driving. This strategic partnership will leverage Infineon’s semiconductor and system expertise with HL Klemove’s capabilities in advanced autonomous-driving systems.
The three key areas of collaboration are ZCUs, vehicle Ethernet-based ADAS and camera solutions, and radar technologies.
The companies will jointly develop zone controller applications using Infineon’s MCUs and power semiconductors, with HL Klemove as the lead in application development. Enabling high-speed in-vehicle network solutions, the partnership will also develop front camera modules and ADAS parking control units, leveraging Infineon’s Ethernet technology, while HL Klemove handles system and product development.
Lastly, HL Klemove will use Infineon’s radar semiconductor solutions to develop high-resolution and short-range satellite radar. They will also develop high-resolution imaging radar for precise object recognition.
NXP introduced its S32N7 super-integration processor series, designed to centralize core vehicle functions, including propulsion, vehicle dynamics, body, gateway, and safety domains. Targeting SDVs, the S32N7 series, with access to core vehicle data and high compute performance, becomes the central AI control point.
Enabling scalable hardware and software across models and brands, the S32N7 simplifies vehicle architectures and reduces total cost of ownership by as much as 20%, according to NXP, by eliminating dozens of hardware modules and delivering enhanced efficiencies in wiring, electronics, and software.
NXP said that by centralizing intelligence, automakers can scale intelligent features, such as personalized driving, predictive maintenance, and virtual sensors. In addition, the high-performance data backbone on the S32N7 series provides a future-proof path for upgrading to the latest AI silicon without re-architecting the vehicle.
The S32N7 series, part of NXP’s S32 automotive processing platform, offers 32 compatible variants that provide application and real-time compute with high-performance networking, hardware isolation technology, AI, and data acceleration on an SoC. They also meet the strict timing, safety, and security requirements of the vehicle core.
Bosch announced that it is the first to deploy the S32N7 in its vehicle integration platform. NXP and Bosch have co-developed reference designs, safety frameworks, hardware integration, and an expert enablement program.
The S32N79, the superset of the series, is sampling now with customers.
NXP’s S32N7 super-integration processor series (Source: NXP Semiconductors N.V.)
Texas Instruments Inc. (TI) expanded its automotive portfolio for ADAS and SDVs with a range of automotive semiconductors and development resources for automotive safety and autonomy across vehicle models. The devices include the scalable TDA5 HPC SoC family, which offers power- and safety-optimized processing and edge AI; the single-chip AWR2188 8 × 8 4D imaging radar transceiver, designed to simplify high-resolution radar systems; and the DP83TD555J-Q1 10BASE-T1S Ethernet physical layer (PHY).
The TDA5 SoC family offers edge AI acceleration from 10 TOPS to 1,200 TOPS, with power efficiency beyond 24 TOPS/W. This scalability is enabled by its chiplet-ready design with UCIe interface technology, TI said, enabling designers to implement different feature sets.
The TDA5 SoCs provide up to 12× the AI computing of previous generations with similar power consumption, thanks to the integration of TI’s C7 NPU, eliminating the need for thermal solutions. This performance supports billions of parameters within language models and transformer networks, which increases in-vehicle intelligence while maintaining cross-domain functionality, the company said. It also features the latest Arm Cortex-A720AE cores, enabling the integration of more safety, security, and computing applications.
Supporting up to SAE Level 3 vehicle autonomy, the TDA5 SoCs target cross-domain fusion of ADAS, in-vehicle infotainment, and gateway systems within a single chip and help automakers meet ASIL-D safety standards without external components.
TI is partnering with Synopsys to provide a virtual development kit for TDA5 SoCs. The digital-twin capabilities help engineers accelerate time to market for their SDVs by up to 12 months, TI said.
The AWR2188 4D imaging radar transceiver integrates eight transmitters and eight receivers into a single launch-on-package chip for both satellite and edge architectures. This integration simplifies higher-resolution radar systems because 8 × 8 configurations do not require cascading, TI said, while scaling up to higher channel counts requires fewer devices.
The AWR2188 offers enhanced analog-to-digital converter data processing and a radar chirp signal slope engine, both supporting 30% faster performance than currently available solutions, according to the company. It supports advanced radar use cases such as detecting lost cargo, distinguishing between closely positioned vehicles, and identifying objects in HDR scenarios. The transceiver can detect objects with greater accuracy at distances greater than 350 meters.
With Ethernet an enabler of SDVs and higher levels of autonomy, the DP83TD555J-Q1 10BASE-T1S Ethernet SPI PHY with an integrated media access controller offers nanosecond time synchronization, as well as high reliability and Power over Data Line capabilities. This brings high-performance Ethernet to vehicle edge nodes and reduces cable design complexity and costs, TI said.
The TDA54 software development kit is now available on TI.com. The TDA54-Q1 SoC, the first device in the family, will begin sampling to select automotive customers by the end of 2026. Pre-production quantities of the AWR2188 transceiver, its evaluation module, and the DP83TD555J-Q1 10BASE-T1S Ethernet PHY with its evaluation module are available on request at TI.com.
Robotics: processors and modules
Qualcomm Technologies Inc. introduced a next-generation robotics comprehensive-stack architecture that integrates hardware, software, and compound AI. As part of the launch, Qualcomm also introduced its latest high-performance robotics processor, the Dragonwing IQ10 Series, for industrial autonomous mobile robots and advanced full-sized humanoids.
The Dragonwing industrial processor roadmap supports a range of general-purpose robotics form factors, including humanoid robots from Booster, VinMotion, and other global robotics providers. The architecture supports advanced perception and motion planning with end-to-end AI models such as VLAs and VMAs, enabling generalized manipulation capabilities and human-robot interaction.
Qualcomm’s general-purpose robotics architecture with the Dragonwing IQ10 combines heterogeneous edge computing, edge AI, mixed-criticality systems, software, machine-learning operations, and an AI data flywheel, along with a partner ecosystem and a suite of developer tools. This portfolio enables robots to reason and adapt to the spatial and temporal environments intelligently, Qualcomm said, and is optimized to scale across various form factors with industrial-grade reliability.
Qualcomm’s growing partner ecosystem for its robotics platforms includes Advantech, APLUX, AutoCore, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion.
Qualcomm’s Dragonwing IQ10 industrial processor (Source: Qualcomm Technologies Inc.)
Quectel Wireless Solutions released its SH602HA-AP smart robotic computing module. Based on the D-Robotics Sunrise 5 (X5M) chip platform and with an integrated Ubuntu operating system, the module features up to 10 TOPS of brain-processing-unit computing power. It targets demanding robotic workloads, supporting advanced large-scale models such as Transformer, Bird’s-Eye View, and Occupancy.
The module works seamlessly with Quectel’s independent LTE Cat 1, LTE Cat 4, 5G, Wi-Fi 6, and GNSS modules, offering expanded connectivity options and a broader range of robotics use cases. These include smart displays, express lockers, electricity equipment, industrial control terminals, and smart home appliances.
The module, measuring 40.5 × 40.5 × 2.9 mm, operates over the –25°C to 85°C temperature range. It supplies a default 4 GB + 32 GB memory configuration, with additional memory options available. It supports data input and fusion processing for multiple sensors, including LiDAR, structured light, time-of-flight, and voice, meeting the AI and vision requirements of robotic applications.
The module supports 4k video at 60 fps with video encoding and decoding, binocular depth processing, AI and visual simultaneous localization and mapping, speech recognition, 3D point-cloud computing, and other mainstream robot perception algorithms. It provides Bluetooth, DSI, RGMII, USB 3.0, USB 2.0, SDIO, QSPI, seven UART, seven I2C, and two I2S interfaces.
The module integrates easily with additional Quectel modules, such as the KG200Z LoRa and the FCS950 Wi-Fi and Bluetooth module for more connectivity options.
Quectel’s SH602HA-AP smart robotic computing module (Source: Quectel Wireless Solutions)
The post CES 2026: AI, automotive, and robotics dominate appeared first on EDN.