EDN Network

Voice of the Engineer

EDN announces winners of the 2025 Product of the Year Awards

Tue, 02/03/2026 - 15:05

The annual awards program, now in its 50th year, recognizes outstanding products that demonstrate any of the following qualities: a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, or the potential for new product designs and opportunities. EDN editors evaluated 100+ products across 13 categories. There are two ties, in the power and sensors categories. Here are this year’s winners:

  • Allegro MicroSystems Inc. and SensiBel (Sensors)
  • Ambiq (Development Kits)
  • Cree LED (Optoelectronics)
  • Circuits Integrated Hellas (Modules)
  • Empower Semiconductor and Ferric Corp. (Power)
  • Littelfuse Inc. (Passives)
  • Marvell Technology Inc. (Interconnects)
  • Morse Micro Ltd. (IoT Platforms)
  • Renesas Electronics Corp. (Digital ICs)
  • Rohde & Schwarz (Test & Measurement)
  • Semtech Corp. (RF/Microwave)
  • Sensata Technologies (Electromechanical)
  • Stathera Inc. (Analog/Mixed-Signal ICs)
Allegro MicroSystems Inc. Sensors: ACS37100 magnetic current sensor

Allegro MicroSystems’ ACS37100 is a fully integrated tunneling magnetoresistive (TMR) current sensor that delivers high accuracy and low noise for demanding control loop applications. Marking a critical inflection point for magnetic sensors, it is the industry’s first commercially available magnetic current sensor to achieve 10-MHz bandwidth and 50-ns response time, the company said.

The ACS37100 magnetic current sensor, based on Allegro’s proprietary XtremeSense TMR technology, is 10× faster and generates 4× lower noise than alternative Hall-based sensors. This performance solves challenges in high-voltage power conversion, especially related to gallium nitride (GaN) and silicon carbide (SiC) solutions. The ACS37100 helps power system designers leverage the full potential of fast-switching GaN and SiC FETs by providing precise current measurement and integrated overcurrent fault detection.

The current sensor delivers low noise of 26 mA RMS across the full 10-MHz bandwidth, enabling precise, high-speed current measurements for more accurate and responsive system performance.

While GaN and SiC promise greater power density and efficiency, the faster switching speeds of wide-bandgap semiconductors create significant control challenges. Conventional magnetic current sensors, with bandwidths below a megahertz, lack the speed and precision to provide the high-fidelity, real-time data required for stable control and protection loops, Allegro MicroSystems said.

Target applications include electric vehicles, clean-energy power conversion systems, and AI data center power supplies, in which the 10-MHz bandwidth and 50-ns response time provide the high-fidelity data needed. The operating temperature range is –40°C to 150°C.

Allegro MicroSystems’ ACS37100 TMR magnetic current sensor (Source: Allegro MicroSystems Inc.)

Ambiq Development Kits: neuralSPOT AI development kit

Ambiq’s neuralSPOT software development kit (SDK) is designed specifically for embedded AI on the company’s ultra-low-power Apollo system-on-chips (SoCs). It helps AI developers handle the complex process of model integration with a streamlined and scalable workflow.

The SDK provides a comprehensive toolkit comprising Ambiq-optimized libraries, feature extractors, device drivers, and pre-trained AI models, making it easier for developers to quickly prototype, test, and deploy models using real-world sensor data while integrating optimized static libraries into production applications. This reduces both development effort and energy consumption.

The neuralSPOT SDK and Toolkit bridge the gap between AI model creation, deployment, and optimization, Ambiq said, enabling developers to move from concept to prototype in minutes, not days. This is thanks in part to its intuitive workflow, pre-validated model templates, and seamless hardware integration.

The latest neuralSPOT V1.2.0 Beta release includes ready-to-use example implementations of popular AI applications, such as human activity recognition for wearable and fitness analytics, ECG monitoring, keyword spotting, speech enhancement, and speaker identification.

Key challenges that the neuralSPOT SDK addresses include high power consumption, energy limits, limited development tools, and complex setup. This is particularly important when enabling AI on compact, battery-powered edge devices in which manufacturers must balance performance, power efficiency, and usability.

The SDK provides a unified, developer-friendly toolkit with Ambiq-optimized libraries, drivers, and ready-to-deploy AI models, which reduces setup and integration time from days to hours. It also simplifies model validation for consistent results and quicker debugging and provides real-time insights into energy performance, helping developers meet efficiency goals early in the design process.

Ambiq’s neuralSPOT for the Apollo5 SoCs (Source: Ambiq)

Circuits Integrated Hellas Modules: Kythrion antenna-in-package

The Kythrion chipset from Circuits Integrated Hellas (CIH) is called a game-changer for satellite communications. It is the first chipset to integrate transmit, receive, and antenna functions into a proprietary 3D antenna-in-package and system-in-package architecture.

By vertically stacking III-V semiconductors (such as gallium arsenide and GaN) with silicon, Kythrion achieves more than 60% reductions in size, weight, power, and cost compared with traditional flat-panel antenna modules, according to the company. This integration eliminates unnecessary printed-circuit-board (PCB) layers by consolidating RF, logic, and antenna elements into a dense 3D chip for miniaturization and optimized thermal management within the package. This also simplifies system complexity by combining RF and logic control on-chip.

CIH said this leap in miniaturization allows satellites to carry more advanced payloads without increasing mass or launch costs, while its 20× bandwidth improvement delivers real-time, high-throughput connectivity. These features deliver benefits to aerospace, defense, and commercial networks, with applications in satellite broadband, 5G infrastructure, IoT networks, wireless power, and defense and aviation systems.

Compared with traditional commercial off-the-shelf phased-array antennas, which typically require hundreds of separate chips (e.g., 250 transmit and 250 receive chips) and a larger footprint of around 4U, Kythrion reduces the count to just 50 integrated modules in a compact, 1U form factor. This cuts weight from 3 to 4 kg down to approximately 1.5 kg, while power consumption is lowered by 15%. Cost per unit is also significantly reduced, CIH said.

The company also considered sustainability when designing the Kythrion antenna-in-package. It uses existing semiconductor processes to eliminate capital-intensive retooling, which lowers carbon impact. In addition, by reducing satellite mass, each kilogram saved in satellite payload can reduce up to 300 kg of CO2 emissions per launch, according to CIH.

CIH’s Kythrion antenna-in-package (Source: Circuits Integrated Hellas)

Cree LED, a Penguin Solutions brand Optoelectronics: XLAMP XP-L Photo Red S Line LEDs

Advancing horticulture lighting, Cree LED, a Penguin Solutions brand, launched the XLAMP XP-L Photo Red S Line LEDs, optimized for large-scale growing operations, including greenhouses and vertical farms, with higher efficiency and durability.

Claiming a new standard in efficiency and durability for horticultural LED lighting, the XLAMP XP-L Photo Red S Line LEDs provide a 6% improvement in typical wall-plug efficiency over the previous generation, reaching 83.5% at 700 mA and 25°C. Horticultural customers can reduce operating costs with the same output with less power consumption, or they can lower initial costs with a redesign that cuts the number of Photo Red LEDs required by up to 35%, Cree LED said.

Thanks to its advanced S Line technology, the XP-L Photo Red LEDs offer high sulfur and corrosion resistance that extend their lifespan and deliver reliable performance. These features reduce maintenance costs while enabling the devices to withstand harsh greenhouse environments, the company said.

Other key specifications include a maximum drive current of 1,500 mA, a low thermal resistance of 1.15°C/W, and a wide viewing angle of 125°. The LEDs are binned at 25°C. They are RoHS- and REACH-compliant.

These LEDs also provide seamless upgrades in existing designs with the same 3.45 × 3.45-mm XP package as the previous XP-G3 Photo Red S Line LEDs.

Cree LED’s XLamp XP-L Photo Red S Line LEDs (Source: Cree LED, a Penguin Solutions brand)

Empower Semiconductor Power: Crescendo vertical power delivery

Empower Semiconductor describes Crescendo as the industry’s first true vertical power delivery platform designed for AI and high-performance-computing processors. The Crescendo chipset sets a new industry benchmark with 20× faster response and breakthrough sustainability and enables gigawatt-hours in energy savings per year for a typical AI data center.

The vertical architecture achieves multi-megahertz bandwidth, 5× higher power density, and over 20% lower delivery losses while minimizing voltage droop and accelerating transient response. The result is up to 15% lower xPU power consumption and a significant boost in performance per watt, claiming a new benchmark for efficiency and scalability in AI data center systems.

The Crescendo platform is powered by Empower’s patented FinFast architecture. Scalable beyond 3,000 A, Crescendo integrates the regulators, magnetics, and capacitors into a single, ultra-thin package that enables direct placement underneath the SoC. This relocates power conversion to where it’s needed most for optimum energy and performance, according to the company.

Empower said the Crescendo platform is priced to be on par with existing power delivery solutions while offering greater performance, energy savings, and lower total cost of ownership for data centers.

Empower’s Crescendo vertical power delivery (Source: Empower Semiconductor)

Ferric Corp. Power: Fe1766 DC/DC step-down power converter

Ferric’s Fe1766 160-A DC/DC step-down power converter offers industry-leading power density and performance in an ultra-compact, 35-mm2 package with just 1-mm height. The Fe1766 is a game-changer for high-performance computing, AI accelerators, and data center processors with its extremely compact form factor, high power density, and 100× faster switching speeds for precise, high-bandwidth regulation, Ferric said.

Integrating inductors, capacitors, FETs, and a controller into a single module, the Fe1766 offers 4.5-A/mm2 power density, which makes it 25× smaller than traditional alternatives, according to the company. The integrated design translates into a board area reduction of up to 83%.

The Fe1766 switches at 30 to 100 MHz, ensuring extremely fast power conversion with high-bandwidth regulation, 30% better efficiency than conventional solutions, and 20% lower cost than existing designs. Other features include real-time telemetry (input voltage, output voltage, current, and temperature) and comprehensive fault protection (UVLO, OVP, UVP, OCP, OTP, etc.), providing both reliability and performance.

Perhaps the most significant feature is its scalability: gang operation of up to 64 devices in parallel supports combined current delivery exceeding 10 kA directly to the processor core. This makes it suited for next-generation multi-core processors, GPUs, FPGAs, and ASICs in high-density, high-performance systems, keeping pace with growth in computing power and core counts, particularly in AI, machine learning, and data centers.

Ferric’s Fe1766 DC/DC step-down power converter (Source: Ferric Corp.)

Littelfuse Inc. Passives: Nano2 415 SMD fuse

The Littelfuse Nano2 415 SMD fuse is the industry’s first 277-VAC surface-mount fuse rated for a 1,500-A interrupting current. Previously, this was achievable only with larger through-hole fuses, according to the company. It allows designers to upgrade protection and transition to automated reflow processes, reducing assembly costs while improving reliability and surge-withstand capability.

The Nano2 415 SMD fuse bridges the gap between legacy cartridge and compact SMD solutions while advancing both performance and manufacturability, Littelfuse said. Its compact, 15 × 5-mm footprint and time-lag characteristic protect high-voltage, high-fault-current circuits while enabling reflow-solder assembly. It is compliant with UL/CSA/NMX 248-1/-14 and EN 60127-1/-7.

The Nano2 415 SMD Series offers high I2t performance. It is halogen-free and RoHS-compliant. Applications include industrial power supplies, inverters, and converters; appliances and HVAC systems; EV chargers and lighting control; and smart building and automation systems.

Littelfuse’s Nano2 415 SMD fuse (Source: Littelfuse Inc.)

Marvell Technology Inc. Interconnects: 3-nm 1.6-Tbits/s PAM4 Interconnect Platform

The Marvell 3-nm 1.6-Tbits/s PAM4 Interconnect Platform claims the industry’s first 3-nm process node optical digital-signal processor (DSP) architecture, targeting bandwidth, power efficiency, and integration for AI and cloud infrastructure. The platform integrates eight 200G electrical lanes and eight 200G optical lanes in a compact, standardized module form factor.

The new platform sets a new standard in optical interconnect technology by integrating advanced laser drivers and signal processing in a single, compact device, Marvell said. This reduces power per bit and simplifies system design across the entire AI data center network stack.

The 3-nm PAM4 platform addresses the I/O bandwidth bottleneck by combining next-generation SerDes technology and laser driver integration to achieve higher bandwidth and power performance. It leverages 200-Gbits/s SerDes and integrated optical modulator drivers to reduce 1.6-Tbits/s optical module power by over 20%. The energy-efficiency improvement reduces operational costs and enables new AI server and networking architectures to meet the requirements for higher bandwidth and performance for AI workloads, within the significant power constraints of the data center, Marvell said.

The 1.6-Tbits/s PAM4 DSP enables low-power, high-speed optical interconnects that support scale-out architectures across racks, rows, and multi-site fabrics. Applications include high-bandwidth optical interconnects in AI and cloud data centers, GPU-to-GPU and server interconnects, rack-to-rack and campus-scale optical networking, and Ethernet and InfiniBand scale-out AI fabrics.

The DSP platform reduces module design complexity and power consumption for denser optical connectivity and faster deployment of AI clusters. With a modular architecture that supports 1.6 Tbits/s in both Ethernet and InfiniBand environments, this platform allows hyperscalers to future-proof their infrastructure for the transition to 200G-per-lane signaling, Marvell said.

Morse Micro Pty. Ltd. IoT Platforms: MM8108 Wi-Fi HaLow SoC

Morse Micro claims that the MM8108 Wi-Fi HaLow SoC is the smallest, fastest, lowest-power, and farthest-reaching Wi-Fi chip. The MM8108, built on the IEEE 802.11ah standard, establishes a new benchmark for performance, efficiency, and scalability in IoT connectivity. It delivers data rates up to 43.33 Mbits/s using the industry’s first sub-gigahertz, 256-QAM modulation, combining long-range operation with true broadband throughput.

The MM8108 Wi-Fi HaLow extends Wi-Fi’s reach into the sub-1-GHz spectrum, enabling multi-kilometer connectivity, deep penetration through obstacles, and support for 8,000+ devices per access point. Outperforming proprietary LPWAN and cellular alternatives while maintaining full IP compatibility and WPA3 enterprise security, the wireless platform reduces deployment cost and power consumption by up to 70%, accelerates certification, and expands Wi-Fi’s use beyond homes and offices to cities, farms, and factories, Morse Micro said.

The MM8108 SoC’s integrated 26-dBm power amplifier and low-noise amplifier achieve “outstanding” link budgets and global regulatory compliance without external SAW filters. It also simplifies system design and reduces power draw with a 5 × 5-mm BGA package, USB/SDIO/SPI interfaces, and host-offload capabilities. This allows devices to run for years on a coin-cell or solar battery, Morse Micro said.

The MM8108-RD09 USB dongle complements the SoC, enabling fast HaLow integration with existing Wi-Fi 4/5/6/7 infrastructure. It demonstrates plug-and-play Wi-Fi HaLow deployment for industrial, agricultural, smart city, and consumer applications. The dongle is fully IEEE 802.11ah–compliant and Wi-Fi CERTIFIED HaLow-ready, allowing developers to test and commercialize Wi-Fi HaLow solutions quickly.

Together, the MM8108 and RD09 combine kilometer-scale range, 100× lower power consumption, and 10× higher capacity than conventional Wi-Fi while maintaining the simplicity, interoperability, and security of the wireless standard, the company said.

Applications range from smart cities (lighting, surveillance, and environmental monitoring networks spanning kilometers) and industrial IoT (predictive maintenance, robotics, and asset tracking in factories and warehouses) to agriculture (solar-powered sensors for crop, irrigation, and livestock management), retail and logistics (smart shelves, POS terminals, and real-time inventory tracking), and healthcare (long-range, low-power connectivity for remote patient monitoring and smart appliances).

Morse Micro’s MM8108 Wi-Fi HaLow SoC (Source: Morse Micro Pty. Ltd.)

Renesas Electronics Corp. Digital ICs: RA8P1 MCUs

Renesas’s RA8P1 group is the first group of 32-bit AI-accelerated microcontrollers (MCUs) powered by the high-performance Arm Cortex-M85 (CM85) with Helium MVE and Ethos-U55 neural processing unit (NPU). With advanced AI, it enables voice, vision, and real-time-analytics AI applications on a single chip. The NPU supports commonly used networks, including DS-CNN, ResNet, Mobilenet, and TinyYolo. Depending on the neural network used, the Ethos-U55 provides up to 35× more inferences per second than the Cortex-M85 processor on its own, according to the company.

The RA8P1, optimized for edge and endpoint AI applications, uses the Ethos-U55 NPU to offload the CPU for compute-intensive operations in convolutional and recurrent neural networks to deliver up to 256 MACs per cycle, delivering 256 GOPS of AI performance at 500 MHz and breakthrough CPU performance of over 7,300 CoreMarks, Renesas said.

The RA8P1 MCUs integrate high-performance CPU cores with large memory, multiple external memory interfaces, and a rich peripheral set optimized for AI applications.

The MCUs, built on the advanced, 22-nm ultra-low-leakage process, are available in single- and dual-core options, with a Cortex-M33 core embedded on the dual-core MCUs. Single- and dual-core devices in 224- and 289-BGA packages address diverse use cases across broad markets. This process also enables the use of embedded magnetoresistive RAM, which offers faster write speeds, in the new MCUs.

The MCUs also provide advanced security. Secure Element–like functionality, along with Arm TrustZone, is built in with advanced cryptographic security IP, immutable storage, and tamper protection to enable secure edge AI and IoT applications.

The RA8P1 MCUs are supported by Renesas’s Flexible Software Package, a comprehensive set of hardware and software development tools, and RUHMI (Renesas Unified Heterogeneous Model Integration), a highly optimized AI software platform providing all necessary tools for AI development, model optimization, and conversion, fully integrated with the company’s e2 studio integrated development environment.

Renesas Electronics’ RA8P1 MCU group (Source: Renesas Electronics Corp.)

Rohde & Schwarz Test & Measurement: FSWX signal and spectrum analyzer

The Rohde & Schwarz FSWX is the first signal and spectrum analyzer with multichannel spectrum analysis, cross-correlation, and I/Q preselection. It features an internal multi-path architecture and high RF performance, with an internal bandwidth of 8 GHz, allowing for comprehensive analysis even of complex waveforms and modulation schemes.

According to Rohde & Schwarz, this represents a fundamental paradigm shift in signal-analysis technology. Cross-correlation cancels the inherent noise of the analyzer and gives a clear view of the device under test, pushing the noise level down to the physical limit for higher dynamic range in noise, phase noise, and EVM measurements.

By eliminating its own noise contribution (a big challenge in measurement science), the FSWX reveals signals 20–30 dB below what was previously measurable, enabling measurements that were impossible with traditional analyzers, the company said.

Addressing critical challenges across multiple industries, the multichannel FSWX offers the ability to measure two signal sources simultaneously with synchronous input ports, each featuring 4-GHz analysis bandwidth, opening phase-coherent measurements of antenna arrays used in beamforming for wireless communications, as well as in radar sensors and electronic warfare systems. For 5G and 6G development, the cross-correlation feature enables accurate EVM measurements below –50 dB that traditional analyzers cannot achieve, according to Rohde & Schwarz.

In radar and electronic warfare applications, the dual channels can simultaneously measure radar signals and potential interference from 5G/Wi-Fi systems. In addition, for RF component makers, the FSWX performs traditional spectrum analyzer measurements, enabling Third Order Intercept measurements near the thermal noise floor without any internal or external amplification.

The FSWX uses broadband ADCs with filter banks spanning the entire operating frequency range, allowing for pre-selected signal analysis while eliminating the need for YIG filters. This solves “a 50-year-old compromise between bandwidth and selectivity in spectrum analyzer design,” according to the company, while providing improved level-measurement accuracy and much faster sweep times.

No other manufacturer offers dual synchronous RF inputs with phase coherence, cross-correlation for general signals, 8-GHz preselected bandwidth, and multi-domain triggering across channels, according to Rohde & Schwarz. This makes it an architectural innovation rather than an incremental improvement.

Rohde & Schwarz’s FSWX signal and spectrum analyzer (Source: Rohde & Schwarz)

Semtech Corp. RF/Microwave: LR2021 RF transceiver

The LR2021 is the first transceiver chip in Semtech’s LoRa Plus family, leveraging its fourth-generation LoRa IP that supports both terrestrial and SATCOM across sub-gigahertz, 2.4-GHz ISM bands, and licensed S-band. The transceiver is designed to be backward-compatible with previous LoRa devices for seamless LoRaWAN compatibility while featuring expanded physical-layer modulations for fast, long-range communication.

The LR2021 is the first transceiver to unify terrestrial (sub-gigahertz, 2.4-GHz ISM) and satellite (licensed S-band) communications on a single chip, eliminating the traditional requirement for separate radio platforms. This enables manufacturers to deploy hybrid terrestrial-satellite IoT solutions with single hardware designs, reducing development complexity and inventory costs for global deployments.

The LR2021 also delivers a high data rate of up to 2.6 Mbits/s, enabling the transmission of higher data-rate content with outstanding link budget and efficiency. The transceiver enables the use of sensor-collected data to train AI models, resulting in better control of industrial applications and support of new applications.

This represents a 13× improvement over Gen 3 LoRa transceivers (Gen 3 SX1262: maximum 200-kbits/s LoRa data rate), opening up new application categories previously impossible with LPWAN technology, including real-time audio classification, high-resolution image recognition, and edge AI model training from battery-powered sensors.

It also offers enhanced sensitivity down to –142 dBm at SF12/125 kHz, representing a 6-dB improvement over Gen 3 devices (Gen 3 SX1262: –148-dBm maximum sensitivity at lower spreading factors, typically –133-dBm operational sensitivity). The enhanced sensitivity extends coverage range and improves deep-indoor penetration for challenging deployment environments.

Simplifying global deployment, the transceiver supports multi-region deployment via a single-SKU design. The integration reduces bill-of-material costs, PCB footprint, and power consumption compared with previous LoRa transceivers. The increased frequency offset tolerance eliminates TCXO requirements and large thermal requirements, eliminating components that traditionally added cost and complexity to multi-region designs.

The device is compatible with various low-power wireless protocols, including Amazon Sidewalk, Meshtastic, W-MBUS, Wi-SUN FSK, and Z-Wave when integrated with third-party stack offerings.

Semtech’s LR2021 RF transceiver (Source: Semtech Corp.)

Sensata Technologies Inc. Electromechanical: High Efficiency Contactor

Sensata claims a breakthrough electromechanical solution with its High Efficiency Contactor (HEC), designed to accelerate the transition to next-generation EVs by enabling seamless compatibility between 400-V and 800-V battery architectures. As the automotive industry moves toward ultra-fast charging and higher efficiency, the HEC targets vehicles that can charge rapidly at both legacy and next-generation charging stations.

By enabling the seamless reconfiguration between 400-V and 800-V battery systems, this capability allows EVs to charge efficiently at both legacy 400-V charging stations and emerging 800-V ultra-fast chargers, ensuring compatibility and eliminating infrastructure barriers for OEMs and end users.

A key differentiator is its ability to dramatically reduce system complexity and cost. By integrating three high-voltage switches into a single, compact device, the HEC achieves up to a 50% reduction in component count compared with traditional battery-switching solutions, according to Sensata, simplifying system integration and lowering costs.

The HEC withstands short-circuit events up to 25 kA and mechanical shocks greater than 90 g while maintaining ultra-low contact resistance (~50 μΩ) for minimal energy loss.

The HEC features a unique mechanical synchronization that ensures safer operation by eliminating the risk of short-circuit events (a critical safety advancement for high-voltage EV systems). It also offers a bi-stable design and ultra-low contact resistance that contribute to greater energy efficiency during both charging and driving.

The bi-stable design eliminates the need for holding power, further improving energy efficiency, Sensata said.

 

The HEC targets automotive, truck, and bus applications including vehicle-to-grid, autonomous driving, and megawatt charging scenarios. It is rated to ASIL-D.

Sensata’s High Efficiency Contactor (Source: Sensata Technologies)

SensiBel Sensors: SBM100B MEMS microphone

SensiBel’s SBM100B optical MEMS digital output microphone delivers 80-dBA signal-to-noise ratio (SNR) and 146-dB SPL acoustic overload point (AOP). Leveraging its patented optical sensing technology, the SBM100B achieves performance significantly surpassing anything that is available on the market today, according to the company. It delivers the same audio recording quality that users experience with professional studio microphones but in a small-form-factor microphone.

The 80-dB SNR delivers cleaner audio, reducing hiss and preserving clarity in quiet recordings. It is a significant achievement in noise and dynamic range performance for MEMS microphones, and it’s a level of audio performance that capacitive and piezo MEMS microphone technologies cannot match, the company said.

The SBM100B is also distortion-proof in high-noise environments. Offering an AOP of up to 146-dB SPL, the SBM100B delivers high performance, even in very loud environments, which often have high transient peaks that easily exceed the overload point of competitive microphones, SensiBel said.

The microphone offers studio-quality performance in a compact MEMS package (6 × 3.8 × 2.5-mm, surface-mount, reflow-solderable, bottom-port). With a dynamic range of 132 dB, it prevents distortion in loud environments while still capturing subtle audio details. It supports standard PDM, I2S, and TDM digital interfaces.

The SBM100B also supports multiple operational modes, which optimizes performance and battery life. This allows designers to choose between the highest performance or optimized power while still operating with exceptional SNR. It also supports sleep mode with very low current consumption. An optional I2C interface is available for customization of built-in microphone functions, including bi-quad filters and digital gain.

Applications include general conferencing systems, industrial sound detection, microphone arrays, over-the-ear and true wireless stereo headsets and earbuds, pro audio devices, and spatial audio, including VR/AR headsets, 3D soundbars, and field recorders.

SensiBel’s SBM100B MEMS microphone (Source: SensiBel)

Stathera Inc. Analog/Mixed-Signal ICs: ST320 DualMode MEMS oscillator

Stathera’s ST320 DualMode MEMS oscillator, in a 2.5 × 2.0 × 0.95-mm package, is a timing solution that generates both kilohertz and megahertz signals from a single resonator. It is claimed to be the first DualMode MEMS timing device capable of replacing two traditional oscillators.

The DualMode capability provides both the kilohertz clock (32.768 kHz) for low-power mode and megahertz (configurable 1–40 MHz) clock for control and communication. This simplifies embedded system design and enhances performance and robustness, along with an extended battery life and a reduction of PCB footprint space and system costs.

Key specifications include a frequency stability of ±20 ppm, a voltage range of 1.62 to 3.63 V, and an operating temperature of –40°C to 85°C. Other features include LVCMOS output and four configurable power modes. This device can be used in consumer, wearables, IoT, edge AI, and industrial applications.

Stathera’s ST320 DualMode MEMS oscillator (Source: Stathera Inc.)

The post EDN announces winners of the 2025 Product of the Year Awards appeared first on EDN.

Short push, long push for sequential operation of multiple power supplies

Tue, 02/03/2026 - 15:00

Industrial systems normally use both analog and digital circuits. Digital circuits, including microcontrollers, typically operate at 5 VDC, while analog circuits generally operate at either 12 or 15 VDC. In some systems, it may be necessary to switch on the power supplies in sequence: first 5 VDC to the digital circuits and then 15 VDC to the analog circuits.


During switch-off, the order reverses: first 15 VDC is removed, then 5 VDC. For such requirements, the circuit in Figure 1 comes in handy.

Figure 1 Single pushbutton switches on or off 5 V and 15 V supplies sequentially. LEDs D1, D2 indicate the presence of 5 V and 15 V supplies. Adequate heat sinks may be provided for Q2 and Q3, depending upon the load currents. Suitable capacitors may be added at the outputs of 5 V and 15 V.

A video explanation of this circuit can be found below:

When you push the button momentarily once, 5 VDC is applied to digital circuits, including microcontroller circuits, and then 15 VDC to analog circuits after a preset delay. When you push the button SW1 for a long time, say 2 seconds, the 15-V supply is withdrawn first, and then the 5-V supply. Hence, one push button does both (sequential) ON and OFF functions.

This Design Idea (DI) is intended for MCU-based projects. No additional components/circuitry are needed to implement this function. When you push SW1 (2-pole push button) momentarily, 5 VDC is extended to the digital circuit through the closure of the first pole of SW1. The microcontroller code should now load HIGH to the output port bit PB0. Due to this, Q1 conducts, pulling the gate of Q2 to LOW. Hence, Q2 now conducts and holds 5 VDC to the digital circuit even after releasing SW1.

Next, the code should load HIGH to output port bit PB1 after a preset delay. This makes Q4 conduct and pull the gate of Q3 LOW. Hence, Q3 conducts, and 15 VDC is extended to the analog circuit. Now, the MCU can carry out its other intended functions.

To switch off the supplies in sequence, push SW1 for a long time, say 2 seconds. Through the second pole of SW1, input port line PB2 is pulled LOW. The microcontroller code must detect this 2+ seconds of LOW, either by interrupt or by polling, and start the switch-off sequence by loading LOW to port bit PB1, which switches off Q4 and hence Q3, removing the 15-V supply from the analog circuit. Next, the code should load LOW to PB0 after a preset delay. This switches off Q1 and hence Q2, so that 5 VDC is disconnected from the digital/microcontroller circuit.
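The short-push/long-push logic the firmware must implement can be sketched as a small host-side simulation. This is a behavioral model only (the actual firmware would be AVR C on the ATMEGA 328P); pin roles follow the article, while the preset-delay value and all method names are illustrative assumptions:

```python
class SupplySequencer:
    """Behavioral sketch of the DI's short-push/long-push logic.

    Pin roles follow the article: PB0 drives Q1 (latches the 5-V rail),
    PB1 drives Q4 (enables the 15-V rail), and PB2 reads the second
    pole of SW1 (pulled LOW while the button is held). The preset
    delay value is illustrative, not from the article.
    """
    LONG_PRESS_S = 2.0    # hold time that triggers switch-off
    PRESET_DELAY_S = 0.1  # assumed gap between the two rails

    def __init__(self, delay=lambda s: None):
        self.delay = delay  # injectable, so tests need no real waiting
        self.pb0 = self.pb1 = 0
        self.events = []

    def on_momentary_push(self):
        # SW1's first pole has already applied 5 V; latch it via PB0/Q1/Q2,
        # then bring up the 15-V rail (PB1/Q4/Q3) after the preset delay.
        self.pb0 = 1
        self.events.append("5V on")
        self.delay(self.PRESET_DELAY_S)
        self.pb1 = 1
        self.events.append("15V on")

    def on_pb2_low(self, held_seconds):
        # A long push (>= 2 s) reverses the sequence: 15 V off, then 5 V off.
        if held_seconds < self.LONG_PRESS_S:
            return
        self.pb1 = 0
        self.events.append("15V off")
        self.delay(self.PRESET_DELAY_S)
        self.pb0 = 0
        self.events.append("5V off")

seq = SupplySequencer()
seq.on_momentary_push()
seq.on_pb2_low(held_seconds=2.5)
print(seq.events)  # ['5V on', '15V on', '15V off', '5V off']
```

The injectable `delay` callable stands in for the MCU's busy-wait or timer; in real firmware the long-press detection would of course be done by timing PB2 in an interrupt or polling loop rather than being passed in as a number.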

Thus, a single push button switches the 5-V and 15-V supplies on and off in sequence, and the idea can be extended to any number of circuits and sequences as needed. In this design, an ATMEGA 328P MCU and IRF4435 P-channel MOSFETs are used. For circuits without an MCU, I will offer a scheme that performs this function in my next DI.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Short push, long push for sequential operation of multiple power supplies appeared first on EDN.

Why power delivery is becoming the limiting factor for AI

Tue, 02/03/2026 - 11:50

The sheer amount of power needed to support the expansion of artificial intelligence (AI) is unprecedented. Goldman Sachs Research suggests that AI alone will drive a 165% increase in data center power demand by 2030. While power demands continue to escalate, delivering power to next-generation AI processors is becoming more difficult.

Today, designers are scaling AI accelerators faster than the power systems that support them. Each new processor generation increases compute density and current demand while decreasing rail voltages and tolerances.

The net result? Power delivery architectures from even five years ago are quickly becoming antiquated. Solutions that once scaled predictably with CPUs and early GPUs are now reaching their physical limits and cannot sustain the industry’s roadmap.

If the industry wants to keep up with the exploding demand for AI, the only way forward is to completely reconsider how we architect power delivery systems.

Conventional lateral power architectures break down

Most AI platforms today still rely on lateral power delivery schemes where designers place power stages at the periphery of the processor and route current across the PCB to reach the load. At modest current levels, this approach works well. At the thousands of amps characteristic of AI workloads, it does not.

As engineers push more current through longer copper traces, distribution losses rise sharply. PCB resistance does not scale down fast enough to offset the increase. Designers therefore lose power to I2R heating before energy ever reaches the die, which forces higher input power and complicates thermal management (Figure 1). As current demands continue to grow, this challenge only compounds.

Figure 1 Conventional lateral power delivery architectures are wasteful of power and area. Source: Empower Semiconductor
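A quick back-of-the-envelope calculation shows why lateral routing becomes untenable at AI current levels. The numbers below are illustrative assumptions, not figures from the article:

```python
def distribution_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Ohmic loss dissipated in the distribution path: P = I^2 * R."""
    return current_a ** 2 * resistance_ohm

# Illustrative: a 1,000-A rail fed through 0.5 mOhm of PCB copper and
# connectors wastes half a kilowatt before energy ever reaches the die.
print(distribution_loss_w(1_000, 0.5e-3), "W")  # 500.0 W
```

Because the loss scales with the square of current, doubling the current quadruples the heat for the same copper, which is exactly why shortening the current path (rather than merely widening traces) becomes the decisive lever.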

Switching speed exacerbates the problem. Conventional regulators operate in the hundreds of kilohertz range, which requires engineers to use large inductors and bulky power stages. While these components are necessary for reliable operation, they impose placement constraints that keep conversion circuitry far from the processor.

Then, to maintain voltage stability during fast load steps, designers must surround the die with dense capacitor networks that occupy the closest real estate to the power ingress point to the processor: the space directly underneath it on the backside of the board. These constraints lock engineers into architectures that scale inadequately in size, efficiency, and layout flexibility.

Bandwidth, not efficiency, sets the ceiling

Engineers often frame power delivery challenges around efficiency. But, in AI systems, control bandwidth is starting to define the real limit.

When a regulator cannot respond fast enough to sudden load changes, voltage droop follows. To ensure reliable performance, designers raise the rail voltage so that transient droop does not drag it below the reliable operating limit. That margin preserves performance but continuously wastes power and erodes thermal headroom that could otherwise support higher compute throughput.
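The cost of that guard band can be estimated directly. Treating the processor as a roughly constant-current load, every millivolt of standing margin is burned continuously (a simplified model with illustrative numbers, not figures from the article):

```python
def margin_waste_w(load_current_a: float, margin_v: float) -> float:
    """Power burned by raising the rail by `margin_v` to ride out droop,
    modeling the processor as a constant-current load: P = I * dV."""
    return load_current_a * margin_v

# Illustrative: 1,000 A with a 50-mV droop guard band.
print(margin_waste_w(1_000, 0.050), "W of continuous overhead")  # 50.0 W
```

Tightening regulation bandwidth shrinks the required `margin_v` directly, which is why faster control loops recover power that no amount of conversion-efficiency tuning can.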

Capacitors are a band-aid for the problem rather than a fix. They act as local energy reservoirs that mask the slow regulator response, but they do so at the cost of space and parasitic complexity. As AI workloads become more dynamic and burst-driven, this trade-off becomes harder to justify, as enormous amounts of capacitance (often tens of millifarads) are required.

Higher control bandwidth changes the relationship and addresses the root-cause. Faster switching allows designers to simultaneously shrink inductors, reduce capacitor dependence, and tighten voltage regulation. At that point, engineers can stop treating power delivery as a static energy problem and start treating it as a high-speed control problem closely tied to signal integrity.

High-frequency conversion reshapes power architecture

Once designers push switching frequencies into the tens or hundreds of megahertz, the geometry of power delivery changes.

For starters, magnetic components shrink dramatically, to the point where engineers can integrate inductors directly into the package or substrate. The same power stages that used to be bulky can now fit into ultra-thin profiles as low as hundreds of microns (µm).

Figure 2 An ultra-high frequency IVR-based PDN results in a near elimination of traditional PCB level bulk capacitors. Source: Empower Semiconductor

At the same time, higher switching frequencies mean control loops can react orders of magnitude faster, achieving nanosecond-scale response times. With such a fast transient response, high-frequency conversion completely removes the need for external capacitor banks, freeing up a significant area on the backside of the board.

Together, these space-saving changes make entirely new architectures possible. With ultra-thin power stages and dramatically reduced peripheral circuitry, engineers no longer need to place power stages beside the processor. Instead, for the first time, they can place them directly underneath it.

Vertical power delivery and system-level impacts

By placing power stages directly beneath the processor, engineers can achieve vertical power-delivery (VPD) architectures with unprecedented technical and economic benefits.

First, VPD shortens the power path, so high current only travels millimeters to reach the load (Figure 3). As power delivery distance drops, parasitic distribution losses fall sharply, often by as much as 3-5x. Lower loss reduces waste heat, which expands the usable thermal envelope of the processor and lowers the burden placed on heatsinks, cold plates, and facility-level cooling infrastructure.

Figure 3 Vertical power delivery unlocks more space and power-efficient power architecture. Source: Empower Semiconductor

At the same time, eliminating large capacitor banks and relocating the complete power stages in their place frees topside board area that designers can repurpose for memory, interconnect, or additional compute resources, thereby increasing performance.

Higher functional density lets engineers extract more compute from the same board footprint, which improves silicon utilization and system-level return on hardware investment. Meanwhile, layout density improves, routing complexity drops, and tighter voltage regulation is achievable.

These combined effects translate directly into usable performance and lower operating cost, or simply put, higher performance-per-watt. Engineers can recover headroom previously consumed by lateral architectures through loss, voltage margining, and cooling overhead. At data-center scale, even incremental gains compound across thousands of processors to save megawatts of power and maximize compute output per rack, per watt, and per dollar.

Hope for the next generation of AI infrastructure

AI roadmaps point toward denser packaging, chiplet-based architectures, and increasing current density. To reach this future, power delivery needs to scale along the same curve as compute.

Architectures built around slow, board-level regulators will struggle to keep up as passive networks grow larger and parasitics dominate behavior. Instead, the future will depend on high-frequency, vertical-power delivery solutions.

Mukund Krishna is senior manager for product marketing at Empower Semiconductor.

Special Section: AI Design

The post Why power delivery is becoming the limiting factor for AI appeared first on EDN.

A hard-life Tile Mate goes under the knife

Mon, 02/02/2026 - 20:34

This engineer was curious to figure out why the Bluetooth tracker for his keys had abruptly died. Then he remembered a mishap from a few years back…

My various Tile trackers—a Mate attached to my keychain (along with several others hidden in vehicles)—and a Slim in my wallet, have “saved my bacon” multiple times over my years of using them, in helping me locate misplaced important items.

But they’ve been irritants as well, specifically in relation to the activation buttons and speakers built into them. Press the button, and the device loudly plays a little ditty…by default, it also rings whatever smartphone(s) it’s currently paired with. All of which is OK, I guess, as long as pressing the button was an intentional action.

However, when the keychain and/or wallet are in my pockets, the buttons sometimes get pressed anyway…by keys or other objects in my front pocket, credit cards in my wallet, or sometimes just my body in combination with the pants or shorts fabric. That this often happens when I’m unable to easily silence the din (while I’m driving, for example) or at an awkward moment (while I’m in the midst of a conversation, for example), is…like I said, irritating.

Silence isn’t always blessed

I eventually figured out how to disable the “Find Your Phone” feature, since I have other ways of determining a misplaced mobile device’s location. So my smartphone doesn’t incessantly ring any more, at least. But the tracker’s own ringtone can’t be disabled, as far as I can tell. And none of the other available options for it are any less annoying than the “Bionic Birdie” default (IMHO):

 

That said, as it turns out, the random activations have at least one unforeseen upside. I realized a while back that I hadn’t heard the tune unintentionally coming from the Tile Mate on my keychain in a while. After an initial sigh of relief, I realized that this likely meant something was wrong. Indeed, in checking the app I saw that the Tile Mate was no longer found.

My first thought (reasonable, I hope you’ll agree) was that I had a dead CR1632 battery on my hands. But to the best of my recollection, I hadn’t gotten the preparatory “low battery” notification beforehand. Indeed, when I pulled the coin cell out of the device and connected it to my multimeter’s leads, it still read a reasonable approximation of the original 3V level. And in fact, when I then dropped the battery into another Tile Mate, it worked fine.

A rough-and-tumble past

So, something inside the tracker had presumably died instead. I’d actually torn down a same-model-year (2020) Tile Mate several years back, that one brand new, so I thought it’d be fun to take this one apart, too, to see if I could discern the failure mechanism via a visual comparison to the earlier device.

At this point, I need to confess to a bout of apparent “senioritis”. This latest Tile Mate teardown candidate has been sitting on my bookshelf, queued up for attention for a number of months now. But it wasn’t until I grabbed it a couple of days ago, in preparation for the dissection, that I remembered/realized what had probably initiated its eventual demise.

Nearly four years back, I documented this very same Tile Mate’s inadvertent travel through the bowels of my snowblower, along with its subsequent ejection into, and overnight slumber within, a pile of moist snow to the side of my driveway. The Tile Mate had seemingly survived intact, as did my keys. My Volvo fob, on the other hand, wasn’t so lucky.

Fast-forward to today, and the Tile Mate (as usual, and as with successive photos, accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes) still looks reasonably robust, at least from the front:

Compromised environmental barriers

Turn it around, on the other hand…see that chip in the case above the battery compartment lid? I’d admittedly not noticed that now-missing piece of plastic before:

Arguably, at least theoretically, the lid’s flipside gasket should still preclude moisture intrusion:

But as I started to separate the two case halves:

I also noticed cracks at both battery compartment ends:

Again, they’re limited to the battery area, not intruding into the glue-reinforced main inner compartment where the PCB is located. But still…

And what’s with that additional sliver of grey plastic that got ejected during the separation?

As you may have already figured out, it originated at the keyring “hole”:

After it initially cracked (again, presumably as a result of the early-2022 snowblower debacle) it remained in place, since the two case halves were still attached. But the resultant fracture provided yet another environmental moisture/dirt/etc. intrusion point, albeit once again still seemingly counteracted by the internal glue barrier (perhaps explaining why it impressively kept working for four more years).

Here’s a reenactment of what the tracker would have looked like if the piece had completely fallen out back then:

See, it fits perfectly!

Non-obvious defects (at least to my eyes)

Here’s what this device’s PCB topside looks like, flush with test points:

Compared to its brand-new, same-model-year predecessor, which I tore down nearly five years ago:

Same goes for this device’s PCB underside, notably showcasing the Nordic Semiconductor nRF52810 Bluetooth 5.2/BLE control SoC, based on an Arm Cortex-M4, and the associated PCB-embedded antenna along one corner:

versus the pristine one I’d dissected previously:

I don’t see a blatant failure point. Do you? I’m therefore guessing that moisture eventually worked its way inside and invisibly did its damage to a component (or few). As always, I welcome your theories (and/or other thoughts) in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A hard-life Tile Mate goes under the knife appeared first on EDN.

Bridging the gap: Being an AI developer in a firmware world

Mon, 02/02/2026 - 14:14

AI model developers—those who create neural networks to power AI features—are a different breed. They think in terms of latent spaces, embeddings, and loss functions. Their tools of the trade are Python, NumPy, and AI frameworks, and the fruits of their efforts are operation graphs capable of learning how to transform an input into an insight.

A typical AI developer spends months, if not years, without ever considering how memory is allocated, whether a loop fits in a cache line, or whether the code even contains loops at all. Such concerns are the domain of software engineers and kernel developers. AI developers generally don’t think about memory footprints, execution times, or energy consumption. Instead, they focus, correctly, on one main goal: ensuring the AI model accurately derives the desired insights from the available data.

This division of labor functions well in the cloud AI space, where machine learning and inference utilize the same frameworks, hardware, storage, and tools. If an AI developer can run one instance of their model, scaling it to millions of instances becomes a matter of MLOps (and money, of course).

 

Firmware in edge AI

In the edge AI domain, especially in the embedded AI space, AI developers have no such luxury. Edge AI models are highly constrained by memory, latency, and power. If a cloud AI developer runs up against these constraints, it’s a matter of cost: they can always throw more servers into the pool. In edge AI, these constraints are existential. If the model doesn’t meet them, it isn’t viable.

Figure 1 Edge AI developers must be keenly aware of firmware-related constraints such as memory space and CPU cycles. Source: Ambiq

Edge AI developers must, therefore, be firmware-adjacent: keenly aware of how much memory their model needs, how many CPU cycles it uses, how quickly it must produce a result, and how much energy it uses. Such questions are usually the domain of firmware engineers, who are known to argue over mega-cycles-per-second (MCPS) budgets, tightly coupled memory (TCM) share, and milliwatts of battery use.

For the AI developer, figuring out the answer to these questions isn’t a simple process; they must convert their Python-based TensorFlow (or PyTorch) model into firmware, flash it onto an embedded device, and then measure its latency, memory requirements, CPU usage, and energy consumption. With this often-overwhelming amount of data, they then modify their model and try again.

Since much of this process requires firmware expertise, the development cycle usually involves the firmware team and a lot of tossing balls over fences, all of which leads to slow iteration.

In tech, slow iteration is a bad thing.

Edge AI development tools

Fortunately, all these steps can be automated. With the right tools, a candidate model can be converted into firmware, flashed onto a development board, profiled and characterized, and the results analyzed in a matter of minutes, all while reducing or eliminating the need to involve the firmware folks.

Take the case of Ambiq’s neuralSPOT AutoDeploy, a tool that takes a TensorFlow Lite model, a widely used standard format for embedded AI, converts it into firmware, fine-tunes that firmware, thoroughly characterizes the performance on real hardware (down to the microscopic detail an AI developer finds useful), compares the output of the firmware model to the Python implementation, and measures latency and power for a variety of AI runtime engines. All automatically, and all in the time it takes to fetch a cup of coffee.

Figure 2 AutoDeploy speeds up the AI/embedded iteration cycle by automating most of the tedious bits. Source: Ambiq

By dramatically shortening the optimization loop, AI development is accelerated. Less time is spent on the mechanics, and more time can be spent getting the model right, making it faster, making it smaller, and making it more efficient.

A recent experience highlights how effective this can be: one of our AI developers was working on a speech synthesis model. The results sounded natural and pleasing, and the model ran smoothly on a laptop. However, when the developer used AutoDeploy to profile the model, he discovered it took two minutes to synthesize just 3 seconds of speech—so slow that he initially thought the model had crashed.

A quick look at the profile data showed that all that time was spent on just two operations—specifically, Transcode Convolutions—out of the 60 or so operations the model used. These two operations were not optimized for the 16-bit integer numeric format required by the model, so they defaulted to a slower, reference version of the code.

The AI developer had two options: either avoid using those operations or optimize the kernel. Ultimately, he opted for both; he rewrote the kernel to use other equivalent operations and asked Ambiq’s kernel team to create an optimized kernel for future runs. All of this was accomplished in about an hour, instead of the week it would normally take.

Edge AI, especially embedded AI, faces its own unique challenges. Bridging the gap between AI developers and firmware engineers is one of those challenges, but it’s a vital one. Here, edge AI system-on-chip (SoC) providers play an essential role by developing tools that connect these two worlds for their customers and partners—making AI development smooth and effortless.

Scott Hanson, founder and CTO of Ambiq, is an expert in ultra-low energy and variation-tolerant circuits.

Special Section: AI Design

The post Bridging the gap: Being an AI developer in a firmware world appeared first on EDN.

Understanding remote sense in today’s power supplies

Mon, 02/02/2026 - 10:16

In today’s power-supply designs, even small wiring and connector resistances can distort the voltage that actually reaches the load. As systems push tighter tolerances and higher currents, these drops become harder to ignore.

Remote sense provides a straightforward way to correct them by letting the supply monitor the voltage at the load itself and adjust its output accordingly. Understanding how this mechanism works—and how to apply it properly—is essential for maintaining stable, accurate rails in modern designs.

Local sense vs remote sense: Where you measure matters

Most power supplies regulate their output using local sense—monitoring voltage at the supply’s own output terminals. This works fine in ideal conditions, but in real systems, the path from supply to load includes resistance from wires, connectors, and circuit-board traces. As current increases, even small resistances can cause significant voltage drop, meaning the load receives less than intended.

Remote sense solves this by relocating the feedback point to the load itself. Instead of trusting the voltage at the supply’s output, it uses a separate pair of sense wires to measure the voltage at the load terminals. The supply then adjusts its output to compensate for any drop along the way, ensuring the load sees the correct voltage—even under dynamic or high-current conditions.

This simple shift in measurement point can dramatically improve regulation accuracy, especially in systems with long cables, high currents, or sensitive loads. Many benchtop and lab-grade power supplies now include this feature, often with a front-panel or software-selectable option to toggle between local and remote sense. When testing precision circuits or powering remote loads, enabling remote sense can make all the difference.
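The effect is easy to quantify. A minimal sketch of what the load sees under local sense, and why remote sense restores the setpoint, using illustrative values (not figures from the article):

```python
def load_voltage_local_sense(v_set: float, i_load: float,
                             r_path_ohm: float) -> float:
    """Local sense regulates at the supply terminals, so the load sees
    the setpoint minus the round-trip drop across both lead wires."""
    return v_set - i_load * r_path_ohm

# Illustrative: 5.00-V setpoint, 4-A load, 25 mOhm total lead resistance.
v_load = load_voltage_local_sense(5.00, 4.0, 0.025)
print(f"{v_load:.2f} V at the load")  # 4.90 V

# With remote sense, the feedback point moves to the load, so the supply
# raises its own terminals by the drop (to 5.10 V here) and the load
# sees the full 5.00 V again.
```

A 100-mV shortfall is a 2% error on a 5-V rail, already outside the tolerance of many precision loads, which is why remote sense matters even at modest bench currents.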

Figure 1 Simplified schematic illustrates a remote-sense setup with external output and sense wires. Source: Author

As a sidenote on what local sense really does: many benchtop power supplies now include a simple switch—or sometimes local-sense jumpers—to select between local and remote sense. In local-sense mode, the supply regulates using the voltage at its own output terminals.

Switching to remote sense hands regulation to the separate sense leads, allowing the supply to track the voltage at the load instead. This selectable mechanism lets you match the regulation method to the setup—local sense for short leads and quick tests and remote sense when wiring losses matter.

Figure 2 Wiring diagram shows a power supply with local-sense jumpers installed. Source: Author

Put simply, for a local-sense configuration, you install the local-sense jumpers so that the Sense + and Sense – terminals are tied directly to the corresponding + and – output terminals on the power supply’s output connector. For a remote-sense configuration, all local sense jumpers are removed, and the Sense + and Sense – terminals are routed externally to the matching + and – points at the load or device under test (DUT).

Note at this point that power supplies with a local/remote sense selector switch don’t require separate local sense jumpers. That is, power supplies equipped with a physical or electronic local/remote sense switch (or a digital configuration setting) utilize internal circuitry to bridge the sense lines to the output terminals. This eliminates the need for the external metal jumpers or wire loops typically found on the barrier strips of older or simpler power supplies.

4-wire sensing: More sensible pointers on remote sense

Starting this section with a cautionary note: always verify the selector switch position and all sensing connections before enabling the output. Setting the switch to Remote without sense wires attached can cause the feedback loop to detect zero voltage and attempt to compensate. This often forces the power supply to its maximum voltage, potentially damaging your equipment even if physical jumpers are absent.

Furthermore, any noise captured by the sense leads will be reflected at the output terminals, potentially degrading load regulation. To minimize electromagnetic interference (EMI) from external sources, use twisted-pair wiring or ribbon cables for the sense connections.

Because these high-impedance leads carry negligible current, thin-gauge wire is sufficient for this purpose. In high-noise environments, shielded cabling may be necessary; if used, ensure the shield is grounded at the power supply end only and never utilized as a current-carrying sensing conductor.

As a quick aside, it appears that many power supplies now implement some form of smart sense detection as a fail-safe. Since a floating sense connection can create a hazardous open-loop state, these systems protect the hardware by shutting down if the leads are disconnected—whether that happens during live use or at initial startup.

In practice, many modern programmable power supplies use auto-sense technology to monitor sense terminals and automatically engage remote sensing when external leads are detected. To ensure stability, these units include internal protection resistors—often called fallback resistors—connecting the output and sense terminals.

These resistors provide a secondary feedback path that allows the supply to default safely to local sensing if leads are missing or accidentally disconnected. This hardware redundancy prevents a dangerous open-loop overvoltage condition, protecting the load from upsurges caused by wiring failure or human error.

On a workbench scattered with piles of discrete electronic components, it’s equally instructive and rewarding to attempt the design of an entry-level remote-sense power supply.

Experimenting with various operational amplifier configurations—specifically differential and error amplifier circuits—alongside voltage references demonstrates how feedback loops maintain precise regulation under dynamic loads.

Such a hands-on approach not only highlights the critical aspects of stability and compensation but also provides valuable insight into the trade-offs between component selection, circuit topology, and overall performance. These complexities are left for the reader to explore intentionally.

Virtual remote sense in practice

Before wrapping up, let us touch on virtual remote sense (VRS). This clever technique emulates the benefits of true remote sensing without the extra wiring, helping designers maintain regulation accuracy while simplifying layouts.

Several well-known ICs in the Analog Devices portfolio—originally developed by Linear Technology—have embraced VRS to make implementation straightforward: the LT4180, LT8697, and LT6110 are prime examples. Each integrates features that reduce voltage drops across traces and connectors, ensuring stable supply rails even in demanding applications.

Because these devices employ different methods to achieve VRS, a thorough review of their datasheets is strongly recommended to understand the nuances and select the right fit for your design. Exploring these solutions hands-on could be the key to unlocking cleaner, more reliable power delivery in your next project.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Understanding remote sense in today’s power supplies appeared first on EDN.

Touch ICs scale across automotive display sizes

Fri, 01/30/2026 - 20:31

Two touchscreen controllers join Microchip’s maXTouch M1 family, expanding support for automotive displays over a wider range of form factors. The ATMXT3072M1-HC and ATMXT288M1 cover free-form widescreen displays up to 42 in., as well as compact screens in the 2- to 5-in. range. Both devices are compatible with display technologies such as OLED and microLED.

The AEC-Q100-qualified controllers leverage Smart Mutual acquisition technology to boost SNR by up to 15 dB compared to previous generations. They deliver reliable touch detection even for on-cell OLEDs, where embedded touch electrodes are subjected to high capacitive loads and increased noise coupling.

The ATMXT3072M1-HC targets large, continuous touch sensor designs that span both the cluster and center information display, enabling a single hardware design for left-hand and right-hand drive vehicles. For smaller screens, the ATMXT288M1 is available in a TFBGA60 package, reducing PCB area by 20% compared to the previous smallest automotive-qualified maXTouch product.

For pricing and sample orders, contact a Microchip sales representative or authorized dealer.

ATMXT3072M1-HC product page 

ATMXT288M1 product page 

Microchip Technology 

The post Touch ICs scale across automotive display sizes appeared first on EDN.

Keysight automates complex coexistence testing

Fri, 01/30/2026 - 20:31

Keysight’s Wireless Coexistence Test Solution (WCTS) is a scalable platform for validating wireless device performance in crowded RF environments. This automated, standards-aligned approach reduces manual setup, improves test repeatability, and enables earlier identification of coexistence risks during development.

To replicate real-world RF conditions, WCTS integrates a wideband vector signal generator. It covers 9 kHz to 8.5 GHz—scalable to 110 GHz—with modulation bandwidths up to 250 MHz (expandable to 2.5 GHz). A single RF port can generate up to eight virtual signals, enabling complex interference scenarios without additional hardware. Nearly 100 predefined, ANSI C63.27-compliant test scenarios are included, covering all three coexistence tiers.

Built on OpenTAP, an open-source, cross-platform test sequencer, WCTS delivers scalable and configurable testing through a user-friendly GUI and open architecture. Engineers can upload custom waveforms and validate test plans offline using simulation mode, accelerating test development and reducing lab time.

More information about the Keysight Wireless Coexistence Test Solution can be found here.

Keysight Technologies 

The post Keysight automates complex coexistence testing appeared first on EDN.

600-V MOSFET enables efficient, reliable power conversion

Fri, 01/30/2026 - 20:31

The first device in AOS’ αMOS E2 high-voltage Super Junction MOSFET platform is the AOTL037V60DE2, a 600-V N-channel MOSFET. It offers high efficiency and power density for mid- to high-power applications such as servers and workstations, telecom rectifiers, solar inverters, motor drives, and other industrial power systems.

Optimized for soft-switching topologies, the AOTL037V60DE2 delivers low switching losses and is well suited for Totem Pole PFC, LLC and PSFB converters, as well as CrCM H-4 and cyclo-inverter applications. The device is available in a TOLL package and features a maximum RDS(on) of 37 mΩ.

AOS engineered the αMOS E2 high-voltage Super Junction MOSFET platform with a robust intrinsic body diode to handle hard commutation events, such as reverse recovery during short-circuits or start-up transients. Evaluations by AOS showed that the body diode can withstand a di/dt of 1300 A/µs under specific forward current conditions at a junction temperature of 150 °C. Testing also confirmed strong Avalanche Unclamped Inductive Switching (UIS) capability and a long Short-Circuit Withstanding Time (SCWT), supporting reliable operation under abnormal conditions.

The AOTL037V60DE2 is available in production quantities at a unit price of $5.58 for 1000-piece orders.

AOTL037V60DE2 product page

Alpha & Omega Semiconductor 

The post 600-V MOSFET enables efficient, reliable power conversion appeared first on EDN.

Stable LDOs use small-output caps

Fri, 01/30/2026 - 20:31

Based on Rohm’s Nano Cap ultra-stable control technology, the BD9xxN5 series of LDO regulator ICs delivers 500 mA of output current. The series is intended for 12-V and 24-V primary power supply applications in automotive, industrial, and communication systems.

The BD9xxN5 series builds on the earlier BD9xxN1 series, increasing the output current from 150 mA to 500 mA while maintaining stability with small output capacitors. The ICs hold output-voltage fluctuation to about 250 mV for load-current steps from 1 mA to 500 mA within 1 µs. Using a typical output capacitance of 470 nF, they enable compact designs and flexible component selection.
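A back-of-envelope calculation shows why that ~250-mV figure is notable with such a small capacitor. The sketch below (my arithmetic, using the numbers quoted above) estimates the droop the 470-nF output capacitor alone would suffer if the control loop did not respond at all during the 1-µs load step:

```python
# Back-of-envelope droop estimate for the BD9xxN5 load step, assuming the
# control loop does not respond during the 1-us transition and the 470-nF
# output capacitor alone supplies the step current.
delta_i = 0.5 - 0.001   # A: load step from 1 mA to 500 mA
dt = 1e-6               # s: step duration
c_out = 470e-9          # F: typical output capacitance
droop = delta_i * dt / c_out
print(f"worst-case droop: {droop * 1000:.0f} mV")  # ~1062 mV vs ~250 mV achieved
```

That the regulators keep the excursion to roughly a quarter of this uncompensated figure is the point of the Nano Cap fast-response control loop.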

All six new variants in the BD9xxN5 series are AEC-Q100 qualified and operate over a temperature range of –40°C to +125°C. Each device provides a single output of 3.3 V, 5 V, or an adjustable voltage from 1 V to 18 V, accurate to within ±2.0%. The absolute maximum input voltage rating is 45 V.

The BD9xxN5 LDO regulators are available now from Rohm’s authorized distributors. Datasheets for each variant can be accessed here.

Rohm Semiconductor 

The post Stable LDOs use small-output caps appeared first on EDN.

1200-V SiC modules enable direct upgrades

Fri, 01/30/2026 - 20:31

Five 1200-V SiC power modules in SOT-227 packages from Vishay serve as drop-in replacements for competing solutions. Based on the company’s latest generation of SiC MOSFETs, the modules deliver higher efficiency in medium- to high-frequency automotive, energy, industrial, and telecom applications.

The VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 power modules are available in single-switch and low-side chopper configurations. Each module’s SiC MOSFET integrates a soft body diode with low reverse recovery. This reduces switching losses and improves efficiency in solar inverters and EV chargers, as well as server, telecom, and industrial power supplies.

The modules support drain currents from 50 A to 200 A. The VS-SF50LA120 is a 50-A low-side chopper with 43-mΩ RDS(on), while the VS-SF50SA120 is a 50-A single-switch device rated at 47 mΩ. Single-switch options scale to 100 A, 150 A, and 200 A with RDS(on) values of 23 mΩ, 16.8 mΩ, and 12.1 mΩ, respectively.
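As a rough illustration of why RDS(on) must fall as rated current rises, conduction loss at full drain current can be estimated as P = I² × RDS(on). A simple sketch using the single-switch values above (ignoring temperature dependence, duty cycle, and switching losses):

```python
# Rough conduction-loss estimate P = I^2 * RDS(on) for the single-switch
# Vishay modules at full rated drain current, ignoring temperature
# dependence, duty cycle, and switching losses.
parts = {
    "VS-SF50SA120":  (50,  0.047),    # (rated current in A, RDS(on) in ohms)
    "VS-SF100SA120": (100, 0.023),
    "VS-SF150SA120": (150, 0.0168),
    "VS-SF200SA120": (200, 0.0121),
}
for name, (i_d, rds_on) in parts.items():
    print(f"{name}: {i_d ** 2 * rds_on:.1f} W at {i_d} A")
```

Because RDS(on) falls only roughly in proportion to 1/I, conduction loss still climbs with rated current; the 200-A part dissipates about four times what the 50-A part does at full load.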

Samples and production quantities of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 are available now, with lead times of 13 weeks.

Vishay Intertechnology 

The post 1200-V SiC modules enable direct upgrades appeared first on EDN.

Chandra X-Ray Mirror

Fri, 01/30/2026 - 15:00

There is a Neil deGrasse Tyson video covering the topic of the Chandra X-ray Observatory. This essay is in part derived from that video. I suggest that you view the discussion. It will be sixty-five minutes well spent.

This device doesn’t look anything like a planar mirror because X-ray photons cannot be reflected by any known surface in the way you see your reflection above your bathroom sink.

If you aim a stream of X-ray photons directly toward any particular surface, either a silvered mirror or some kind of intended lens, those photons will either pass right on through (which is what your medical X-rays do) or they will be absorbed. You will not be able to alter the trajectory of an X-ray photon stream, at least not with any device like that.

However, X-ray photons can be grazed off a reflective surface to achieve a slight trajectory change if their initial angle of approach to the mirror surface is kept very small. With the surface of the Chandra X-ray mirror made extremely smooth, almost down to the atomic level, repeated grazing permits X-ray focus to be achieved. This is the operating principle of the Chandra X-ray Telescope’s mirror, as shown in Figure 1.

Figure 1 The Chandra X-Ray Observatory mirrors showing a perspective view, a cut-away view, and X-ray photon trajectories. (Source: StarTalk Podcast)

The Chandra Observatory was launched on July 23, 1999, and has been doing great things ever since. Regrettably, however, its continued operation is in some jeopardy. Please see the following Google search result.

Figure 2 Google search result of the Chandra Telescope showing science funding budget cuts for the Chandra X-ray Observatory going from $69 million to zero. (Source: Google, 2026)

I’m keeping my fingers crossed.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Chandra X-Ray Mirror appeared first on EDN.

Successive approximation

Thu, 01/29/2026 - 15:00

Analog-to-digital conversion methods abound, but we are going to take a look at a particular approach as shown in Figure 1.

Figure 1 An analog-to-digital converter where an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. (Source: John Dunn)

In this approach, in very simplified language, an analog input signal is compared to a voltage reference that has been scaled via a resistive ladder network. Scaling is adjusted by finding the digital word for which a scaled version of Vref becomes equal to the analog input. The number of bits in the digital word can be chosen pretty much arbitrarily, but sixteen bits is not unusual. For illustrative purposes, however, we will use only seven bits.

Referring to a couple of examples as seen in Figure 2, the process runs something like this.

Figure 2 Two digital word acquisition examples using successive approximation. (Source: John Dunn)

For descriptive purposes, let the analog input be called our “target”. We first set the most significant bit (the MSB) of our digital word to 1 and all of the lower bits to 0. We compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave the MSB at 1, or if the scaled Vref is greater than the target, we return the MSB to 0. If the two are equal, we have completion.

In either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this second bit at 1, or if the scaled Vref is greater than the target, we return this second bit back to 0. If the two are equal, we have completion.

Again, in either case, if we do not have completion, we then set the next lower bit to 1, and again we compare the scaled Vref to the target to see if we have equality. If the scaled Vref is lower than the target, we leave this third bit at 1, or if the scaled Vref is greater than the target, we return this third bit to 0. If the two are equal, we have completion.

Sorry for the monotony, but that is the process. We repeat this process until we achieve equality, which can take as many steps as there are bits, and therein lies the beauty of this method.

We will achieve equality in no more steps than there are bits. For the seven-bit examples shown here, the maximum number of steps to completion will be seven. Of course, it's not that any company actually offers seven-bit converters; the number "seven" simply allows viewable examples to be drawn below. Fewer bits might not make things clear, while more bits could have us squinting at the page with a magnifying glass.

If we did a simple counting process starting from all zeros, the maximum number of steps could be as high as 2⁷, or one-hundred-twenty-eight, which would be really slow.

Slow, straight-out counting would be a “tracking” process, which is sometimes used and which does have its own virtues. However, we can speed things up with what is called “successive approximation”.

Please note that the “1”, the “-1”, and the “0” highlighted in blue are merely indicators of which value is greater than, less than, or equal to the other.

A verbal description of this process for the target value of 101 may help shed some light. We then proceed as follows. (Yes, this is going to be verbose, but please trace it through.)

We first set the most significant bit with its weight value of 64 to a logic 1 and discover that the numerical value of the bit pattern is just that, the value 64. When we compare this to our target number of 101, we find that we’re too low. We will leave that bit where it is and move on.

We set the next lower significant bit with its weight value of 32 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 = 96. When we compare this to our target number of 101, we find that we’re still too low. We will leave the pair of bits where they are and move on.

We set the next lower bit again with its weight value of 16 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 16 = 112. When we compare this to our target number of 101, we find that we are now too high.  We will leave the first two most significant bits where they are, but we will return the third most significant bit to logic 0 and move on.

We set the next lower bit again with its weight value of 8 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 8 = 104.  When we compare this to our target number of 101, we find that we are now again too high.  We will leave the first three most significant bits where they are, but we will return the fourth most significant bit to logic 0 and move on.

We set the next lower bit again with its weight value of 4 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 = 100.  When we compare this to our target number of 101, we find that we’re once again too low. We will leave the quintet of bits where they are and move on.

We set the next lower bit again with its weight value of 2 to a logic 1 and discover that the sum yielding the numerical value is now 64 + 32 + 0 + 0 + 4 + 2 = 102.  When we compare this to our target number of 101, we find that we are now once again too high.  We will leave the first five most significant bits where they are, but we will return the sixth most significant bit to logic 0 and move on.

We set the lowest bit with its weight value of 1 to a logic 1 and discover that the sum yielding the numerical value is now 101; there is no error. We have completed our conversion in only seven counting steps, far fewer than the number that would have been required in a simple, direct counting scheme.
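The whole bit-by-bit procedure above condenses into a short loop. Here is a minimal Python sketch (the names are mine, not from the article) that models the scaled Vref simply as the integer value of the digital word, ignoring the analog comparator and ladder network:

```python
def sar_convert(target, bits=7):
    """Successive approximation: trial-set each bit from MSB down to LSB."""
    word = 0
    for i in range(bits - 1, -1, -1):
        trial = word | (1 << i)   # tentatively set this bit to 1
        if trial <= target:       # scaled Vref not above the target:
            word = trial          # keep the bit at 1
        # otherwise the bit returns to 0 (word is left unchanged)
    return word

# Tracing the article's example, target = 101, produces the same trial
# sequence: 64, 96, 112, 104, 100, 102, 101.
print(sar_convert(101))  # -> 101
```

The loop simply visits all seven bits; if exact equality is reached early, every remaining trial overshoots and those bits stay 0, so the result matches the article's early-completion condition.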

It may be helpful to look at a larger number of digital word acquisition examples, as in Figure 3.


Figure 3 Digital word acquisitions with number paths. (Source: John Dunn)

Remember the old movie “Seven Brides for Seven Brothers”? For these examples, think “Seven Steps for Seven Bits”.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Successive approximation appeared first on EDN.

Apple CarPlay and Google Android Auto: Usage impressions and manufacturer tensions

Thu, 01/29/2026 - 15:00

What happens to manufacturers when your ability to tell whose vehicle you're currently traveling in (never mind piloting) disappears?

My wife’s 2018 Land Rover Discovery:

not only now has upgraded LED headlights courtesy of yours truly, I also persuaded the dealer a while ago to gratis-activate the vehicle's previously latent Apple CarPlay and Google Android Auto facilities for us (gratis in conjunction with a fairly pricey maintenance bill, mind you…). I recently finally got around to trying them both out, and the concept's pretty cool, with the implementation a close second. Here's what CarPlay's UI looks like, courtesy of Wikipedia's topic entry:

And here’s the competitive Android Auto counterpart:

Vehicle-optimized user experiences

As you can see, this is more than just a simple mirroring of the default smartphone user interface; after the mobile device and vehicle successfully complete a bidirectional handshake, the phone switches into an alternative UI that's more amenable to in-vehicle use (specifically: mindful of driver-distraction potential) and tailored for the larger, albeit potentially lower-resolution, dashboard-integrated display.

The baseline support for both protocols in our particular vehicle is wired, which means that you plug the phone into one of the USB-A ports located within the storage console located between the front seats. My wife’s legacy iPhone is still Lightning-based, so I’ve snagged both a set of inexpensive ($4.99 for three) coiled Lightning-to-USB-A cords for her:

and a similarly (albeit not quite as impressively) penny-pinching ($6.67 for two) suite of USB-C-to-USB-A coiled cords for my Google Pixel phones:

The wired approach is convenient because a single cord handles both communication-with-vehicle and phone charging tasks. That said, a lengthy strand of wire, even coiled, spanning the gap from the console to the magnetic mount located at the dashboard vent:

is aesthetically and otherwise unappealing, especially considering that the mount at the phone end also already redundantly supports both MagSafe (iPhone) and Qi (Pixel, in conjunction with a magnet-augmented case) charging functions:

Wireless communications

Therefore, I’ve also pressed into service a couple of inexpensive (~$10 each, sourced from Amazon’s Warehouse-now-Resale section) wireless adapters that mimic the integrated wireless facilities of newer model-year vehicles and even comprehend both the CarPlay and Android Auto protocols. One comes from a retailer called VCARLINKPLAY:

The other is from the “PakWizz Store”:

The approach here is somewhat more complicated. The phone first pairs with the adapter, already plugged into and powered by the car’s USB-A port, over Bluetooth. The adapter then switches both itself and the phone to a common and (understandably, given the aggregate data payload now involved) beefier 5 GHz Wi-Fi Direct link.

Particularly considering the interference potential from other ISM band (both 2.4 GHz for Bluetooth and 5 GHz for Wi-Fi) occupants contending for the same scarce spectrum, I’m pleasantly surprised at how reliable everything is, although initial setup admittedly wasn’t tailored for the masses and even caused techie-me to twitch a bit.

Encroaching on vehicle manufacturers’ turf

As such, I’ve been especially curious to follow recent news trends regarding both CarPlay and Android Auto. Rivian and Tesla, for example, have long resisted adding support for either protocol to their vehicles, although rumors persist that both companies are continuing to develop support internally for potential rollout in the future.

Automotive manufacturers’ broader embrace (public at least) for next-generation CarPlay Ultra has to date been muted at best. And GM is actively phasing out both CarPlay and Android Auto from new vehicle models, in favor of an internally developed entertainment software-and-display stack alternative.

What’s going on? Consider this direct quote from Apple’s May 2025 CarPlay Ultra press release:

CarPlay Ultra builds on the capabilities of CarPlay and provides the ultimate in-car experience by deeply integrating with the vehicle to deliver the best of iPhone and the best of the car. It provides information for all of the driver’s screens, including real-time content and gauges in the instrument cluster.

Granted, Apple has noted that in developing CarPlay Ultra, it’s “reflecting the automaker’s look and feel” (along with “offering drivers a customizable experience”). But given that all Apple showed last May was an Aston Martin logo next to its own:

I’d argue that Apple’s “partnership” claims are dubious, and maybe even specious. And per comments from Ford’s CEO Jim Farley in a recent interview, he seems to agree (the full interview is excellent and well worth a read):

Are you going to allow OEMs to control the vehicles? How far do you want the Apple brand to go? Do you want the Apple brand to start the car? Do you want the Apple brand to limit the speed? Do you want the Apple brand to limit access?

The bottom line, as I see it, is that Apple can pontificate all it wants that:

CarPlay Ultra allows automakers to express their distinct design philosophy with the look and feel their customers expect. Custom themes are crafted in close collaboration between Apple and the automaker’s design team, resulting in experiences that feel tailor-made for each vehicle.

But automakers like Ford and GM are obviously (and understandably so, IMHO) worried that with Apple and Google already taking over key aspects of the visual, touch (and audible; don’t forget about the Siri and Google Assistant-now-Gemini voice) interfaces, not to mention their even more aggressive aspirations (along with historical behavior in other markets as a guide to future behavior here), the manufacturer, brand and model uniqueness currently experienced by vehicle occupants will evaporate in response.

More to come

I’ll be curious to see (and cover) how this situation continues to develop. For now, I welcome your thoughts in the comments on what I’ve shared so far in this post. And FYI, I’ve also got two single-protocol wireless adapter candidates sitting in my teardown pile awaiting attention: a CarPlay-only unit from the “Luckymore Store”:

And an Android Auto-only unit, the v1 AAWireless, which I’d bought several years back in its original Indiegogo crowdfunding form:

Stay tuned for those, as well!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Apple CarPlay and Google Android Auto: Usage impressions and manufacturer tensions appeared first on EDN.

Is this low-inductance power-device package the real deal?

Thu, 01/29/2026 - 15:00

While semiconductor die get so much of the attention due to their ever-shrinking feature size and ever-increasing substrate size, the ability to effectively package them and thus use them in a circuit is also critical. For this reason, considerable effort is devoted to developing and perfecting practical, technically advanced, thermally suitable, cost-effective packages for components ranging from switching power devices to multi-gigahertz RF devices.

Regardless of frequency, package parasitic inductance is a detrimental issue, as it slows the slewing needed for switching crispness in digital devices and responsiveness in analog ones (of course, the reality is that digital switching performance is still constrained by analog principles).

Now, a research team at the US Department of Energy's National Renewable Energy Laboratory (NREL; recently renamed the National Laboratory of the Rockies) has developed a silicon-carbide half-bridge module that uses organic direct-bonded copper in a novel layout to enable a high degree of magnetic-flux cancellation, Figure 1.

Figure 1 (left) 3D CAD drawing of new half-bridge inverter module; (right) Early prototype of polyimide-based half-bridge module. Source: NREL

Their Ultra-Low Inductance Smart (ULIS) package is a 1200-V, 400-A half-bridge silicon carbide (SiC) power module that can be pushed beyond 200-kHz switching frequency at maximum power. The low-cost ULIS also makes the converter easier to manufacture, addressing issues related to both bulkiness and cost.

Preliminary results show that it has approximately seven to nine times lower loop inductances and higher switching speeds at similar voltages/current levels, and five times the energy density of earlier designs — while occupying a smaller footprint, Figure 2.

Figure 2 The complete ULIS package is very different than conventional packages and offers far lower loop inductance compared to existing approaches. Source: NREL

In addition to being powerful and lightweight, the module continuously tracks its own condition and can anticipate component failures before they happen.

In traditional designs, the power modules conduct electricity and dissipate excess heat by bonding copper sheets directly to a ceramic base—an effective, but rigid, solution. ULIS bonds copper to a flexible DuPont Temprion polymer to create a thinner, lighter, more configurable design.

Unlike typical power modules, which assemble semiconductor devices inside a brick-like package, ULIS winds its circuits around a flat, octagonal design, Figure 3. The disk-like shape allows more devices to be housed in a smaller area, making the overall package smaller and lighter.

Figure 3 This “exploded” drawing of the complete half-bridge power module shows the arrangement of the electrical and structural elements. Source: NREL

At the same time, its novel current routing allows for maximum cancellation of magnetic flux, contributing to the power module’s clean, low-loss electrical output, meaning ultrahigh efficiency.

The stacked module layout greatly improves energy density and reduces parasitic inductance (based on simulation data). ULIS loop inductance is 2.2 to 5.5 nanohenries, compared to 20 to 25 nH for existing half-bridge designs. Further, reliability is enhanced, as the compliance of Temprion reduces the strain caused by differences in the coefficient of thermal expansion (CTE) between mated materials.

Since the material bonds easily to copper using just pressure and heat, and because its parts can be machined using widely available equipment, the team maintains that the ULIS can be fabricated quickly and inexpensively, with manufacturing costs in the hundreds of dollars rather than thousands, Figure 4.

Figure 4 The ULIS can be machined using widely available equipment, thus significantly reducing the manufacturing costs for the power module. Source: NREL

Another innovation allows the ULIS to function wirelessly as an isolated unit that can be controlled and monitored without external cables. A patent is pending for this low-latency wireless communication protocol.

The ULIS design is a good example of the challenges and dead ends that innovation can encounter on the way to a successful conclusion. According to the team's report, one of the original layouts looked like a flower with a semiconductor at the tip of each petal. Another idea was to create a hollow cylinder with components wired to the inside.

Every idea the team came up with was either too expensive or too difficult to fabricate—until they stopped thinking in three dimensions and flattened the design into nearly two dimensions, which made it possible to build a module that balances complexity with cost and performance.

The details of the work are in their readable and detailed IEEE APEC paper “Organic Direct Bonded Copper-Based Rapid Prototyping for Silicon Carbide Power Module Packaging” but it is behind a paywall. However, there is a nice “poster” summary of their work posted at the NLR site here.

I wonder if this innovation will catch on and be adopted, but I certainly don't know. What I do know is that some innovations are slow to catch on, and many never do because of real-world problems related to scaling up, volume production, unforeseen technical issues, testability…it's a long list of what can get in the way.

If you don’t think so, just look at batteries: every month, we see news of dramatic advances that will supposedly revolutionize their performance, yet these breakthroughs don’t seem to get traction. Sometimes it is due to technical or implementation problems, but often it is because the actual improvement they provide does not outweigh the disruption they create in getting there.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related content

The post Is this low-inductance power-device package the real deal? appeared first on EDN.

Top 10 edge AI chips

Thu, 01/29/2026 - 15:00
Hailo’s Hailo-10H edge AI accelerator.

As edge devices become increasingly AI-enabled, more and more chips are emerging to fill every application niche. At the extremes, applications such as speech recognition can be done in always-on power envelopes, while tens of watts will be enough for even larger generative AI models today.

Here, in no particular order, are 10 of EDN’s selections for a range of edge AI applications. These devices range from those capable of handling multimodal large language models (LLMs) in edge devices to those designed for vision processing and minimizing power consumption for always-on applications.

Multiple camera streams

For vision applications, Ambarella Inc.’s latest release is the CV7 edge AI vision system-on-chip (SoC) for processing multiple high-quality camera streams simultaneously via convolutional neural networks (CNNs) or transformer networks. The CV7 features the latest generation of Ambarella’s proprietary AI accelerator, plus an in-house image-signal processor (ISP), which uses both traditional ISP algorithms and AI-driven features. This family also includes quad Arm Cortex-A73 cores, hardware video codecs on-chip, and a new, 64-bit DRAM interface.

Ambarella is targeting this family for AI-based 8K consumer products such as action cameras, multicamera security systems, robotics and drones, industrial automation, and video conferencing. It will also be suitable for automotive applications such as telematics and advanced driver-assistance systems.


Ambarella’s CV7 vision SoC (Source: Ambarella Inc.)

Fallback CPU

The MLSoC Modalix from SiMa Technologies Inc. is now available in production quantities, along with its LLiMa software framework for deployment of LLMs and generative AI models on Modalix. Modalix is SiMa’s second-generation architecture, which comes as a family of SoCs designed to host full applications.

Modalix chips have eight Arm A-class CPU cores on-chip alongside the accelerator. These cores are important for running application-level code, but they also allow programs to fall back on the CPU in case a particular math operation isn't supported by the accelerator. Also on the SoC are an on-chip ISP and digital-signal processor (DSP). Modalix will come in 25-, 50-, 100-, and 200-TOPS (INT8) versions. The 50-TOPS version will be first to market and can run Llama2-7B at more than 10 tokens per second, with a power envelope of 8–10 W.

Open-source NPU

Synaptics Inc.’s Astra series of AI-enabled IoT SoCs range from application processors to microcontroller (MCU)-level parts. This family is purpose-built for the IoT.

The SL2610 family of multimodal edge AI processors targets applications ranging from smart appliances and retail point-of-sale terminals to drones. All parts in the family have two Arm Cortex-A55 cores, and some have a neural processing unit (NPU) subsystem. The included Coral NPU, developed at Google, is an open-source RISC-V CPU with scalar instructions; it sits alongside Synaptics' homegrown AI accelerator, the T1, which offers 1-TOPS (INT8) performance for transformers and CNNs.

Synaptics’ SL2610 multimodal edge AI processors (Source: Synaptics Inc.)

Raspberry Pi compatibility

The Hailo-10H edge AI accelerator from Hailo Technologies Ltd. is gaining a large developer base, as it is available in a form factor that plugs into hobbyist platform Raspberry Pi. However, the Hailo-10H is also used by HP in add-on cards for its point-of-sale systems, and it’s also automotive-qualified.

The 10H is the same silicon as the Hailo-10 but runs at a lower power-performance point: The 10H can run 2B-parameter LLMs in about 2.5 W. The architecture of this AI co-processor is based on Hailo’s second-generation architecture, which has improved support for transformer architectures and more flexible number representation. Multiple models can be inferenced concurrently.

Hailo’s Hailo-10H edge AI accelerator (Source: Hailo Technologies Ltd.)

Analog acceleration

Startup EnCharge AI announced its first product, the EN100. This chip is a 200-TOPS (INT8) accelerator targeted squarely at the AI PC, achieving an impressive 40 TOPS/W. The device is based on EnCharge’s capacitance-based analog compute-in-memory technology, which the company says is less temperature-sensitive than resistance-based schemes. The accelerator’s output is a voltage (not a current), meaning transimpedance amplifiers aren’t needed, saving power.

Alongside the analog accelerator on-chip are some digital cores that can be used if higher precision or floating-point math is required. The EN100 will be available on a single-chip M.2 card with 32-GB LPDDR, with a power envelope of 8.25 W. A four-chip, half-height, half-length PCIe card offers up to 1 PetaOPS (INT8) in a 40-W power envelope, with 128-GB LPDDR memory.

EnCharge AI’s EN100 M.2 card (Source: EnCharge AI)

SNNs

For microwatt applications, Innatera Nanosystems B.V. has developed an AI-equipped MCU that can run inference at very, very low power. The Pulsar neuromorphic MCU targets always-on sensor applications: It consumes 600 µW for radar-based presence detection and 400 µW for audio scene classification, for example.

The neural processor uses Innatera’s spiking neural network (SNN) accelerators—there are both analog and digital spiking accelerators on-chip, which can be used for different types of applications and workloads. Innatera says its software stack, Talamo, means developers don’t have to be SNN experts to use the device. Talamo interfaces directly with PyTorch and a PyTorch-based simulator and can enable power consumption estimations at any stage of development.

Innatera’s Pulsar spiking neural processor (Source: Innatera Nanosystems B.V.)

Generative AI

Axelera AI’s second-generation chip, Europa, can support both multi-user generative AI and computer vision applications in endpoint devices or edge servers. This eight-core chip can deliver 629 TOPS (INT8). The accelerator has large vector engines for AI computation alongside two clusters of eight RISC-V CPU cores for pre- and post-processing of data. There is also an H.264/H.265 decoder on-chip, meaning the host CPU can be kept free for application-level software. Given the importance of ensuring compute cores are fed quickly with data from memory, the Europa AI processor unit provides 128 MB of L2 SRAM and a 256-bit LPDDR5 interface.

Axelera’s Voyager software development kit covers both Europa and the company’s first-generation chip, Metis, which targets more classical CNN-based vision tasks. Europa is available either as a chip or on a PCIe card. The cards are intended for edge server applications that need to process multiple 4K video streams.

Butter wouldn’t melt

Most members of the DX-M1 series from South Korean chip company DeepX Co. Ltd. provide 25-TOPS (INT8) performance in the 2- to 5-W power envelope (the exception being the DX-M1M-L, offering 13 TOPS). One of the company’s most memorable demos involves placing a blob of butter directly on its chip while running inference to show that it doesn’t get hot enough for the butter to melt.

Delivering 25 TOPS in this co-processor chip is plenty for vision tasks such as pose estimation or facial recognition in drones, robots, or other camera systems. Under development, the DX-M2 will run generative AI workloads at the edge. Part of the company’s secret sauce is in its quantization scheme, which can run INT8-quantized networks with accuracy comparable to the FP32 original. DeepX sells chips, modules/cards, and small, multichip systems based on its technology for different edge applications.

Voice interface

The latest ultra-low-power edge AI accelerator from Syntiant Corp., the NDP250, offers 5× the tensor throughput of its predecessor. This device is designed for computer vision, speech recognition, and sensor data processing. It can run on as little as a few microwatts, but for full, always-on vision processing, consumption is closer to tens of milliwatts.

As with other parts in Syntiant’s range, the device uses the company’s AI accelerator core (30 GOPS [INT8]) alongside an Arm Cortex-M0 MCU core and an on-chip Tensilica HiFi 3 DSP. On-chip memory can hold up to 6 Mb of model parameters. The NDP250’s DSP supports floating-point math for the first time in the Syntiant range. The company suggests that the ability to run both automatic speech recognition and text-to-speech models will make the NDP250 especially well-suited to voice interfaces.

Multiple power modes

Nvidia Corp.’s Jetson Orin Nano is designed for AI in all kinds of edge devices, targeting robotics in particular. It’s an Ampere-generation GPU module with either 8 GB or 4 GB of LPDDR5. The 8-GB version can do 33 TOPS (dense INT8) or 17 TFLOPS (FP16). It has three power modes: 7-W, 15-W, and a new, 25-W mode, which boosts memory bandwidth to 102 GB/s (from 65 GB/s for the 15-W mode) by increasing GPU, memory, and CPU clocks. The module’s CPU has six Arm Cortex-A78AE 64-bit cores. Jetson Orin Nano will be a good fit for multimodal and generative AI at the edge, including vision transformer and various small language models (in general, those with <7 billion parameters).

Nvidia’s Jetson Orin Nano (Source: Nvidia Corporation)

The post Top 10 edge AI chips appeared first on EDN.

Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs

Thu, 01/29/2026 - 11:07

The saying “round pegs do not fit square holes” persists because it captures a deep engineering reality: inefficiency most often arises not from flawed components, but from misalignment between a system’s assumptions and the problem it is asked to solve. A square hole is not poorly made; it’s simply optimized for square pegs.

Modern large language models (LLMs) now find themselves in exactly this situation. Although they are overwhelmingly executed on general-purpose graphics processing units (GPGPUs), these processors were never shaped around the needs of enormous inference-based matrix multiplications.

GPUs dominate not because they are a perfect match, but because they were already available, massively parallel, and economically scalable when deep learning began to grow, especially for training AI models.

What follows is not an indictment of GPUs, but a careful explanation of why they are extraordinarily effective when the workload is dynamic and unpredictable, as in graphics processing, and disappointingly inefficient when the workload is essentially regular and predictable, as in AI/LLM inference.

The inefficiencies that emerge are not accidental; they are structural, predictable, and increasingly expensive as models continue to evolve.

Execution geometry and the meaning of “square”

When a GPU renders a graphics scene, it deals with a workload that is considerably irregular at the macro level but rather regular at the micro level. A graphics scene changes in real time with significant variations in content, such as changes in triangles and illumination, but within an image there is usually a lot of local regularity.

One frame displays a simple brick wall; the next, an explosion creating thousands of tiny triangles and complex lighting changes. To handle this, the GPU architecture relies on a single-instruction, multiple-thread (SIMT) or wave/warp-based approach in which all threads in a “wave” or “warp,” usually between 16 and 128, receive the same instruction at once.

This works rather efficiently for graphics because, while the whole scene is a mess, local patches of pixels are usually doing the same thing. This allows the GPU to be a “micro-manager,” constantly and dynamically scheduling these tiny waves to react to the scene’s chaos.

However, when applied to AI and LLMs, the workload changes entirely. AI processing is built on tensor math and matrix multiplication, which is fundamentally regular and predictable. Unlike a highly dynamic game scene, matrix math is just an immense but steady flow of numbers. Because AI is so consistent, the GPU’s fancy, high-speed micro-management becomes unnecessary. In this context, that hardware is just “overhead,” consuming power and space for a flexibility that the AI doesn’t actually use.

This leaves the GPGPU in a bit of a paradox: it’s simultaneously too dynamic and not dynamic enough. It’s too dynamic because it wastes energy on micro-level programming and complex scheduling that a steady AI workload doesn’t require. Yet it’s not dynamic enough because it is bound by the rigid size of its “waves.”

If the AI math doesn’t perfectly fit into a warp of 32, the GPU must use “padding,” effectively leaving seats empty on the bus. While the GPU is a perfect match for solving irregular graphics problems, it’s an imperfect fit for the sheer, repetitive scale of modern tensor processing.

Wasted area as a physical quantity

This inefficiency can be understood geometrically. A circle inscribed in a square leaves about 21% of the square’s area unused. In processing hardware terms, the “area” corresponds to execution lanes, cycles, bandwidth, and joules. Any portion of these resources that performs work that does not advance the model’s output is wasted area.
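The geometry is easy to verify: a circle inscribed in a square covers π/4 of its area, leaving the remainder idle. A quick check:

```python
import math

square = 1.0                          # unit square, side = 1
circle = math.pi / 4 * square**2      # inscribed circle: pi * (side/2)^2
wasted = 1 - circle / square**2       # fraction of the square left uncovered

print(f"{wasted:.1%}")                # 21.5%
```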

The utilization gap (MFU)

The primary way to quantify this inefficiency is through Model FLOPs Utilization (MFU). This metric measures how much of the chip’s theoretical peak math power is actually being used for the model’s calculations versus how much is wasted on overhead, data movement, or idling.
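As a rough illustration (the figures below are hypothetical, not measurements from any specific chip), MFU is simply the model FLOPs actually delivered per second divided by the accelerator’s theoretical peak:

```python
def mfu(model_flops_per_token, tokens_per_second, peak_flops):
    """Model FLOPs Utilization: useful math actually done vs. peak capability."""
    achieved = model_flops_per_token * tokens_per_second
    return achieved / peak_flops

# Hypothetical example: a 70B-parameter model needs roughly 2 * 70e9 FLOPs
# per generated token. If an accelerator peaks at 1e15 FLOP/s but serves
# only 100 tokens/s in interactive mode:
print(f"MFU = {mfu(2 * 70e9, 100, 1e15):.1%}")  # MFU = 1.4%
```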

For an LLM like GPT-4 running on GPGPU-based accelerators in interactive mode, MFU drops by an order of magnitude, with the hardware busy with “bookkeeping”: moving data between memory levels, managing thread synchronization, or waiting for the next “wave” of instructions to be decoded.

The energy cost of flexibility

The inefficiency is even more visible in power consumption. A significant portion of that energy is spent powering the “dynamic micromanagement,” namely, the logic gates that handle warp scheduling, branch prediction, and instruction fetching for irregular tasks.

The “padding” penalty

Finally, there is the “padding” inefficiency. Because a GPGPU-based accelerator operates in fixed wave sizes (typically 32 or 64 threads), if the specific calculation doesn’t perfectly align with those multiples, often happening in the “Attention” mechanism of the LLM model, the GPGPU still burns the power for a full wave while some threads sit idle.
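The cost of this padding is straightforward to quantify. A sketch, assuming a warp size of 32:

```python
import math

def warp_utilization(work_items, warp_size=32):
    """Fraction of launched threads doing useful work after padding to warp multiples."""
    warps = math.ceil(work_items / warp_size)
    return work_items / (warps * warp_size)

# 100 useful threads still launch 4 full warps of 32 -- 28 seats ride empty:
print(f"{warp_utilization(100):.1%}")  # 78.1%
```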

These effects multiply rather than add. A GPU may be promoted on its high peak throughput, but once deployed, it may deliver only a fraction of that throughput as useful work for LLM inference while drawing close to peak power.

The memory wall and idle compute

Even if compute utilization were perfect, LLM inference would still collide with the memory wall: the growing disparity between how fast processors can compute and how fast they can access memory. LLM inference has low arithmetic intensity, meaning that relatively few floating-point operations are performed per byte of data fetched. Much of the execution time is spent reading and writing the key-value (KV) cache.
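A back-of-the-envelope calculation makes the point. In autoregressive decoding, each weight is fetched once and used for roughly two operations (a multiply and an add), so a matrix-vector product over FP16 weights yields about one FLOP per byte moved, far below the hundreds of FLOPs per byte a modern accelerator needs to stay compute-bound:

```python
def arithmetic_intensity(rows, cols, bytes_per_weight=2):
    """FLOPs per byte for a matrix-vector product (one decode step, FP16 weights)."""
    flops = 2 * rows * cols                    # one multiply + one add per weight
    bytes_moved = rows * cols * bytes_per_weight
    return flops / bytes_moved

print(arithmetic_intensity(4096, 4096))        # 1.0 FLOP per byte
```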

GPUs attempt to hide memory latency using massive concurrency. Each streaming multiprocessor (SM) holds many warps and switches between them while others wait for memory. This strategy works well when memory accesses are staggered and independent. In LLM inference, however, many warps stall simultaneously while waiting for similar memory accesses.

As a result, SMs spend large fractions of idle time, not because they lack instructions, but because data cannot arrive fast enough. Measurements commonly show that 50–70% of cycles during inference are lost to memory stalls. Importantly, the power draw does not scale down proportionally since clocks continue toggling and control logic remains active, resulting in poor energy efficiency.

Predictable stride assumptions and the cost of generality

To maximize bandwidth, GPUs rely on predictable stride assumptions; that is, the expectation that memory accesses follow regular patterns. This enables techniques such as cache line coalescing and memory swizzling, a remapping of addresses designed to avoid bank conflicts and improve locality.

LLM memory access patterns violate these assumptions. Accesses into the KV cache depend on token position, sequence length, and request interleaving across users. The result is reduced cache effectiveness and increased pressure on address-generation logic. The hardware expends additional cycles and energy rearranging data that cannot be reused.

This is often described as a “generality tax.”

Why GPUs still dominate

Given these inefficiencies, it’s natural to ask why GPUs remain dominant. The answer lies in history rather than optimality. Early deep learning workloads were dominated by dense linear algebra, which mapped reasonably well onto GPU hardware. Training budgets were large enough that inefficiency could be absorbed.

Inference changes priorities. Latency, cost per token, and energy efficiency now matter more than peak throughput. At this stage, structural inefficiencies are no longer abstract; they directly translate into operational cost.

From adapting models to aligning hardware

For years, the industry focused on adapting models to hardware such as larger batches, heavier padding, and more aggressive quantization. These techniques smooth the mismatch but do not remove it.

A growing alternative is architectural alignment: building hardware whose execution model matches the structure of LLMs themselves. Such designs schedule work around tokens rather than warps, and memory systems are optimized for KV locality instead of predictable strides. By eliminating unused execution lanes entirely, these systems reclaim the wasted area rather than hiding it.

The inefficiencies seen in modern AI data centers—idle compute, memory stalls, padding overhead, and excess power draw—are not signs of poor engineering. They are the inevitable result of forcing a smooth, temporal workload into a rigid, geometric execution model.

GPUs remain masterfully engineered square holes. LLMs remain inherently round pegs. As AI becomes a key ingredient in global infrastructure, the cost of this mismatch becomes the problem itself. The next phase of AI computing will belong not to those who shave the peg more cleverly, but to those who reshape the hole to match the true geometry of the workload.

Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.

Special Section: AI Design

The post Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs appeared first on EDN.

Tune 555 frequency over 4 decades

Wed, 01/28/2026 - 15:00

The versatility of the venerable LMC555 CMOS analog timer is so well known it’s virtually a cliché, but sometimes it can still surprise us. The circuit in Figure 1 is an example. In it, a single linear pot in a simple RC network sets the frequency of 555 square-wave oscillation over a range from below 10 Hz to above 100 kHz, exceeding a 10,000:1 (four-decade, thirteen-octave) ratio. Here’s how it works.

Figure 1 R1 sets U1’s frequency from <10 Hz to >100 kHz.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Potentiometer R1 provides variable attenuation of U1’s 0-to-V+ peak-to-peak square-wave output to the R4-R5-C1 divider/integrator. The result is a sum of an abbreviated timing-ramp component developed by C1 sitting on top of an attenuated square-wave component developed by R5. This composite waveshape is input to the Trigger and Threshold pins of U1, resulting in the frequency-vs.-R1-position function plotted on Figure 2’s semi-log graph.

Figure 2 U1 oscillation range vs R1 setting is so wide it needs a log scale to accommodate it.

Curvature of the function does get pretty radical as R1 approaches its limits of travel. Nevertheless, log conformity is fairly decent over the middle 10% to 90% of the pot’s travel and the resulting two decades of frequency range. This is sketched in red in Figure 3.

Figure 3 Reasonably good log conformity is seen over mid-80% of R1’s travel.

Of course, as R1 is dialed near its limits of travel, frequency precision (or lack of it) becomes very sensitive to production tolerances in U1’s internal voltage-divider network and in the circuit’s external resistors.

This is why U1’s frequency output is taken from pin 7 (Discharge) instead of pin 3 (Output) to at least minimize the effects of loading from making further contributions to instability.

Nevertheless, the strong suit of this design is definitely its dynamic range. Precision? Not so much.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


The post Tune 555 frequency over 4 decades appeared first on EDN.

Emerging trends in battery energy storage systems

Wed, 01/28/2026 - 15:00

Battery energy storage systems (BESSes) are increasingly being adopted to improve efficiency and stability in power distribution networks. By storing energy from both renewable sources, such as solar and wind, and the conventional power grid, BESSes balance supply and demand, stabilizing power grids and optimizing energy use.

This article examines emerging trends in BESS applications, including advances in battery technologies, the development of hybrid energy storage systems (HESSes), and the introduction of AI-based solutions for optimization.

Battery technologies

Lithium-ion (Li-ion) is currently the main battery technology used in BESSes. Despite the use of expensive raw materials, such as lithium, cobalt, and nickel, the global average price of Li-ion battery packs has declined in 2025.

BloombergNEF reports that Li-ion battery pack prices have fallen to a new low this year, reaching $108/kWh, an 8% decrease from the previous year. The research firm attributes this decline to excess cell manufacturing capacity, economies of scale, the increasing use of lower-cost lithium-iron-phosphate (LFP) chemistries, and a deceleration in the growth of electric-vehicle sales.

Using iron phosphate as the cathode material, LFP batteries achieve high energy density, long cycle life, and good performance at high temperatures. They are often used in applications in which durability and reliable operation under adverse conditions are important, such as grid energy storage systems. However, their energy density is lower than that of traditional Li-ion batteries.

Although Li-ion batteries will continue to lead the BESS market due to their higher efficiency, longer lifespan, and deeper depth of discharge compared with alternative battery technologies, other chemistries are making progress.

Flow batteries

Long-life storage systems, capable of storing energy for eight to 10 hours or more, are suited for managing electricity demand, reducing peaks, and stabilizing power grids. In this context, “reduction-oxidation [redox] flow batteries” show great promise.

Unlike conventional Li-ion batteries, the liquid electrolytes in flow batteries are stored separately and then flow (hence the name) into the central cell, where they react in the charging and discharging phases.

Flow batteries offer several key advantages, particularly for grid applications with high shares of renewables. They enable long-duration energy storage, covering many hours, such as nighttime, when solar generation is not present. Their raw materials, such as vanadium, are generally abundant and face limited supply constraints. Material concerns are further mitigated by high recyclability and are even less significant for emerging iron-, zinc-, or organic-electrolyte technologies.

Flow batteries are also modular and compact, inherently safe due to the absence of fire risk, and highly durable, with service lifetimes of at least 20 years with minimal performance degradation.

The BESSt Company, a U.S.-based startup founded by a former Tesla engineer, has unveiled a redox flow battery technology that is claimed to achieve an energy density up to 20× higher than that of traditional, vanadium-based flow storage systems.

The novel technology relies on a zinc-polyiodide (ZnI2) electrolyte, originally developed by the U.S. Department of Energy’s Pacific Northwest National Laboratory, as well as a proprietary cell stack architecture that relies on undisclosed, Earth-abundant alloy materials sourced domestically in the U.S.

The company’s residential offering is designed with a nominal power output of 20 kW, paired with an energy storage capacity of 25 kWh, corresponding to an average operational duration of approximately five hours. For commercial and industrial applications, the proposed system is designed to scale to a power rating of 40 kW and an energy capacity of 100 kWh, enabling an average usage time of approximately 6.5 hours.

This technology (Figure 1) is well-suited for integration with solar generation and other renewable energy installations, where it can deliver long-duration energy storage without performance degradation.

Figure 1: The BESSt Company’s ZnI2 redox flow battery system (Source: The BESSt Company)

Sodium-ion batteries

Sodium-ion batteries are a promising alternative to Li-ion batteries, primarily because they rely on more abundant raw materials. Sodium is widely available in nature, whereas lithium is relatively scarce and subject to supply chains that are vulnerable to price volatility and geopolitical constraints. In addition, sodium-ion batteries use aluminum as a current collector instead of copper, further reducing their overall cost.

Blue Current, a California-based company specializing in solid-state batteries, has received an $80 million Series D investment from Amazon to advance the commercialization of its silicon solid-state battery technology for stationary storage and mobility applications. The company aims to establish a pilot line for sodium-ion battery cells by 2026.

Its approach leverages Earth-abundant silicon and elastic polymer anodes, paired with fully dry electrolytes across multiple formulations optimized for both stationary energy storage and mobility. Blue Current said its fully dry chemistry can be manufactured using the same high-volume equipment employed in the production of Li-ion pouch cells.

Sodium-ion batteries can be used in stationary energy storage, solar-powered battery systems, and consumer electronics. They can be transported in a fully discharged state, making them inherently safer than Li-ion batteries, which can suffer degradation when fully discharged.

Aluminum-ion batteries

Project INNOBATT, coordinated by the Fraunhofer Institute for Integrated Systems and Device Technology (IISB), has completed a functional battery system demonstrator based on aluminum-graphite dual-ion batteries (AGDIB).

Rechargeable aluminum-ion batteries represent a low-cost and inherently non-flammable energy storage approach, relying on widely available materials such as aluminum and graphite. When natural graphite is used as the cathode, AGDIB cells reach gravimetric energy densities of up to 160 Wh/kg while delivering power densities above 9 kW/kg. The electrochemical system is optimized for high-power operation, enabling rapid charge and discharge at elevated C rates and making it suitable for applications requiring a fast dynamic response.

In the representative system-level test (Figure 2), the demonstrator combines eight AGDIB pouch cells with a wireless battery management system (BMS) derived from the open-source foxBMS platform. Secure RF communication is employed in conjunction with a high-resolution current sensor based on nitrogen-vacancy centers in diamond, enabling precise current measurement under dynamic operating conditions.

Figure 2: A detailed block diagram of the INNOBATT battery system components (Source: Elisabeth Iglhaut/Fraunhofer IISB)

Li-ion battery recycling

Second-life Li-ion batteries retired from applications such as EVs often maintain a residual storage capacity and can therefore be repurposed for BESSes, supporting circular economy standards. In Europe, the EU Battery Passport—mandatory beginning in 2027 for EV, industrial, BESS (over 2 kWh), and light transport batteries—will digitally track batteries by providing a QR code with verified data on their composition, state of health, performance (efficiency, capacity), and carbon footprint.

This initiative aims to create a circular economy, improving product sustainability, transparency, and recyclability through digital records that detail information about product composition, origin, environmental impact, repair, and recycling.

HESSes

A growing area of innovation is represented by the HESS, which integrates batteries with alternative energy storage technologies, such as supercapacitors or flywheels. Batteries offer high energy density but relatively low power density, whereas flywheels and supercapacitors provide high power density for rapid energy delivery but store less energy overall.

By combining these technologies, HESSes can better balance both energy and power requirements. Such systems are well-suited for applications such as grid and microgrid stabilization, as well as renewable energy installations, particularly solar and wind power systems.

Utility provider Rocky Mountain Power (RMP) and Torus Inc., an energy storage solutions company, are collaborating on a major flywheel and BESS project in Utah. The project integrates Torus’s mechanical flywheel technology with battery systems to support grid stability, demand response, and virtual power plant applications.

Torus will deploy its Nova Spin flywheel-based energy storage system (Figure 3) as part of the project. Flywheels operate using a large, rapidly spinning cylinder enclosed within a vacuum-sealed structure. During charging, electrical energy powers a motor that accelerates the flywheel, while during discharge, the same motor operates as a generator, converting the rotational energy back into electricity. Flywheel systems offer advantages such as longer lifespans compared with most chemical batteries and reduced sensitivity to extreme temperatures.

This collaboration is part of Utah’s Operation Gigawatt initiative, which aims to expand the state’s power generation capacity over the next decade. By combining the rapid response of flywheels with the longer-duration storage of batteries, the project delivers a robust hybrid solution designed for a service life of more than 25 years while leveraging RMP’s Wattsmart Battery program to enhance grid resilience.

Figure 3: Torus Nova Spin flywheel-based energy storage (Source: Torus Inc.)

AI adoption in BESSes

By utilizing its simulation and testing solution Simcenter, Siemens Digital Industries Software demonstrates how AI reinforcement learning (RL) can help develop more efficient, faster, and smarter BESSes.

The primary challenge of managing renewable energy sources, such as wind power, is determining the optimal charge and discharge timing based on dynamic variables such as real-time electricity pricing, grid load conditions, weather forecasts, and historical generation patterns.

Traditional control systems rely on simple, manually entered rules, such as storing energy when prices fall below weekly averages and discharging when prices rise. RL, by contrast, is an AI approach that trains intelligent agents through trial and error in simulated environments using historical data. For BESS applications, the RL agent learns from two years of weather patterns to develop control strategies that outperform manually programmed rules.
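The rule-based baseline described here fits in a few lines (a toy sketch with hypothetical prices; a real controller would add efficiency losses, power limits, and forecasts):

```python
def rule_based_dispatch(price, weekly_avg, soc, capacity_kwh, step_kwh=1.0):
    """Charge below the weekly average price, discharge above it, within capacity."""
    if price < weekly_avg and soc < capacity_kwh:
        return min(step_kwh, capacity_kwh - soc)   # charge: kWh added this step
    if price > weekly_avg and soc > 0:
        return -min(step_kwh, soc)                 # discharge: kWh delivered
    return 0.0                                     # hold

# Hypothetical hourly prices ($/MWh) against a weekly average of 50:
soc = 5.0
for price in [30, 25, 60, 80]:
    soc += rule_based_dispatch(price, 50, soc, capacity_kwh=10)
print(soc)  # 5.0 -- two cheap hours charged, two expensive hours discharged
```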

The RL-powered smart controller continuously processes wind speed forecasts, grid demand levels, and market prices to make informed, real-time decisions. It learns to charge batteries during periods of abundant wind generation and low prices, then discharge during demand spikes and price peaks.

The practical implementation of Siemens’s proposed approach combines system simulation tools to create digital twins of BESS infrastructure with RL training environments. The resulting controller can be deployed directly to hardware systems.

The post Emerging trends in battery energy storage systems appeared first on EDN.

Designing edge AI for industrial applications

Wed, 01/28/2026 - 13:07

Industrial manufacturing systems demand real-time decision-making, adaptive control, and autonomous operation. However, many cloud-dependent architectures can’t deliver the millisecond response required for safety-critical functions such as robotic collision avoidance, in-line quality inspection, and emergency shutdown.

Network latency (typically 50–200 ms round-trip) and bandwidth constraints prevent cloud processing from achieving sub-10 ms response requirements, shifting intelligence to the industrial edge for real-time control.

Edge AI addresses these high-performance, low-latency requirements by embedding intelligence directly into industrial devices and enabling local processing without reliance on the cloud. This edge-based approach supports machine-vision workloads for real-time defect detection, adaptive process control, and responsive human–machine interfaces that react instantly to dynamic conditions.

This article outlines a comprehensive approach to designing edge AI systems for industrial applications, covering everything from requirements analysis to deployment and maintenance. It highlights practical design methodologies and proven hardware platforms needed to bring AI from prototyping to production in demanding environments.

Defining industrial requirements

Designing scalable industrial edge AI systems begins with clearly defining hardware, software, and performance requirements. Manufacturing environments require operation across wide temperature ranges, from –40°C to +85°C, along with resistance to vibration and electromagnetic interference (EMI), and zero tolerance for failure.

Edge AI hardware installed on machinery and production lines must tolerate these conditions in place, unlike cloud servers operating in climate-controlled environments.

Latency constraints are equally demanding: robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control, in-line inspection systems must detect and reject defective parts in real time, and safety interlocks depend on millisecond-level response to protect operators and equipment.

Figure 1 Robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control. Source: Infineon

Accuracy is also critical, with quality control often targeting greater than 99% defect detection, and predictive maintenance typically aiming for high-90s accuracy while minimizing false alarm rates.

Data collection and preprocessing

Meeting these performance standards requires systematic data collection and preprocessing, especially when defect rates fall below 5% of samples. Industrial sensors generate diverse signals such as vibration, thermal images, acoustic traces, and process parameters. These signals demand application-specific workflows to handle missing values, reduce dimensionality, rebalance classes, and normalize inputs for model development.

Continuous streaming of raw high-resolution sensor data can exceed 100 Mbps per device, which is unrealistic for most factory networks. As a result, preprocessing must occur at the industrial edge, where compute resources are located directly on or near the equipment.

Class-balancing techniques such as SMOTE or ADASYN address class imbalance in training data, with the latter adapting to local density variations. Many applications also benefit from domain-specific augmentation, such as rotating thermal images to simulate multiple views or injecting controlled noise into vibration traces to reflect sensor variability.
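The core idea behind SMOTE, synthesizing new minority samples by interpolating between a real sample and one of its nearest minority-class neighbors, can be sketched in pure Python (a minimal illustration; production pipelines should use a maintained implementation such as the one in imbalanced-learn):

```python
import random

def smote_like(minority, n_new, k=3, seed=0):
    """Generate synthetic minority samples by interpolating toward near neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x by squared Euclidean distance
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbors)
        t = rng.random()  # interpolation factor along the segment x -> nb
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Four rare defect samples in a 2D feature space, oversampled to six:
rare = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(smote_like(rare, n_new=2))
```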

Outlier detection is equally important, with clustering-based methods flagging and correcting anomalous readings before they distort model training. Synthetic data generation can introduce rare events such as thermal hotspots or sudden vibration spikes, improving anomaly detection when real-world samples are limited.

With cleaner inputs established, focus shifts to model design. Convolutional neural networks (CNNs) handle visual inspection, while recurrent neural networks (RNNs) process time-series data. Transformers, though still resource-intensive, increasingly perform industrial time-series analysis. Efficient execution of these architectures necessitates careful optimization and specialized hardware support.

Hardware-accelerated processing

Efficient edge inference requires optimized machine learning models supported by hardware that accelerates computation within strict power and memory budgets. These local computations must stay within typical power envelopes below 5 W and operate without network dependency, which cloud-connected systems can’t guarantee in production environments.

Training neural networks for industrial applications can be challenging, especially when processing vibration signals, acoustic traces, or thermal images. Traditional workflows require data science expertise to select model architectures, tune hyperparameters, and manage preprocessing steps.

Even with specialized hardware, deploying deep learning models at the industrial edge demands additional optimization. Compression techniques can shrink models by 80–95% while retaining over 95% of baseline accuracy, cutting memory footprint and accelerating inference to meet edge constraints. These include:

  • Quantization converts 32-bit floating-point models into 8- or 16-bit integer formats, reducing memory use and accelerating inference. Post-training quantization meets most industrial needs, while quantization-aware training maintains accuracy in safety-critical cases.
  • Pruning removes redundant neural connections, typically reducing parameters by 70–90% with minimal accuracy loss. Overparameterized models, especially those trained on smaller industrial datasets, benefit significantly from pruning.
  • Knowledge distillation trains a smaller student model to replicate the behavior of a larger teacher model, retaining accuracy while achieving the efficiency required for edge deployment.
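To make the first of these techniques concrete, the following is a minimal sketch of affine int8 quantization applied to a list of float weights. Function names are hypothetical, and real toolchains (per-channel scales, calibration data) add considerable detail beyond this:

```python
def quantize_int8(weights):
    """Affine (asymmetric) post-training quantization: map float
    weights onto int8 values plus a scale and zero point."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against zero range
    zero_point = round(-128 - w_min / scale)
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from int8 values."""
    return [(qi - zero_point) * scale for qi in q]

q, scale, zp = quantize_int8([-1.0, -0.25, 0.0, 0.5, 1.0])
restored = dequantize(q, scale, zp)
```

The round trip introduces an error of at most one quantization step per weight, which is why post-training quantization usually preserves accuracy; quantization-aware training simulates this rounding during training for the safety-critical cases noted above.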

Deployment frameworks and tools

After compression and optimization, engineers deploy machine learning models using inference frameworks such as TensorFlow Lite Micro and ExecuTorch, the current industry standards. TensorFlow Lite Micro offers hardware acceleration through its delegate system, which is especially useful on platforms with supported specialized processors.

While these frameworks handle model execution, scaling from prototype to production also requires integration with development environments, control interfaces, and connectivity options. Beyond toolchains, dedicated development platforms further streamline edge AI workflows.

Once engineers develop and deploy models, they test them under real-world industrial conditions. Validation must account for environmental variation, EMI, and long-term stability under continuous operation. Stress testing should replicate production factors such as varying line speeds, material types, and ambient conditions to confirm consistent performance and response times across operational states.

Industrial applications also require metrics beyond accuracy. Quality inspection systems must balance false positives against false negatives, where the geometric mean (GM) provides a balanced measure on imbalanced datasets common in manufacturing. Predictive maintenance workloads rely on indicators such as mean time between false positives (MTBFP) and detection latency.
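The geometric-mean metric mentioned above is straightforward to compute from confusion-matrix counts. This is a generic sketch with hypothetical function and variable names, not a specific vendor API:

```python
import math

def inspection_metrics(tp, fp, tn, fn):
    """Sensitivity (defect recall), specificity (good-part recall),
    and their geometric mean from confusion-matrix counts."""
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    # GM penalizes a collapse in either class, unlike plain accuracy
    gm = math.sqrt(sensitivity * specificity)
    return sensitivity, specificity, gm

# Hypothetical run: 90 defects caught, 10 missed,
# 980 good parts passed, 20 falsely rejected
sens, spec, gm = inspection_metrics(tp=90, fp=20, tn=980, fn=10)
```

On such an imbalanced line (100 defects among 1,100 parts), plain accuracy would look deceptively high even if most defects were missed; the geometric mean drops toward zero whenever either recall does.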

Figure 2 Quality inspection systems must balance false positives against false negatives. Source: Infineon

Validated MCU-based deployments demonstrate that optimized inference—even under resource constraints—can maintain near-baseline accuracy with minimal loss.

Monitoring and maintenance strategies

Validation confirms performance before deployment, yet real-world operation requires continuous monitoring and proactive maintenance. Edge deployments demand distributed monitoring architectures that continue functioning offline, while hybrid edge-to-cloud models provide centralized telemetry and management without compromising local autonomy.

A key focus of monitoring is data drift detection, as input distributions can shift with tool wear, process changes, or seasonal variation. Monitoring drift at both device and fleet levels enables early alerts without requiring constant cloud connectivity. Secure over-the-air (OTA) updates extend this framework, supporting safe model improvements, updates, and bug fixes.
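A lightweight device-level drift check can compare a recent window of inputs against a baseline distribution. The sketch below uses a standardized mean-shift score as one simple drift signal among many; the function name and the alert threshold of 3 are illustrative assumptions:

```python
def drift_score(baseline, window):
    """Standardized shift of the window mean relative to the baseline
    distribution; scores above ~3 suggest input drift worth flagging."""
    n = len(baseline)
    mean_b = sum(baseline) / n
    var_b = sum((x - mean_b) ** 2 for x in baseline) / (n - 1)
    std_b = var_b ** 0.5 or 1e-9  # guard against a constant baseline
    mean_w = sum(window) / len(window)
    # z-score of the window mean under the baseline distribution
    return abs(mean_w - mean_b) / (std_b / len(window) ** 0.5)

baseline = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95]
stable = drift_score(baseline, [1.0, 1.02, 0.98])   # low score
shifted = drift_score(baseline, [1.6, 1.7, 1.65])   # high score
```

Running such a check on-device keeps drift alerts available offline, with fleet-level aggregation layered on top when connectivity allows.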

Features such as secure boot, signed updates, isolated execution, and secure storage ensure only authenticated models run in production, helping manufacturers comply with regulatory frameworks such as the EU Cyber Resilience Act.

Take, for instance, an industrial edge AI case study about predictive maintenance. A logistics operator piloted edge AI silicon on a fleet of forklifts, enabling real-time navigation assistance and collision avoidance in busy warehouse environments.

The deployment reduced safety incidents and improved route efficiency, delivering a measurable return on investment. The system proved scalable across multiple facilities, highlighting how edge AI delivers measurable performance, reliability, and efficiency gains in demanding industrial settings.

The upgraded forklifts highlighted key lessons for AI at the edge: systematic data preprocessing, balanced model training, and early stress testing were essential for reliability, while underestimating data drift remained a common pitfall.

Best practices included integrating navigation AI with existing fleet management systems, leveraging multimodal sensing to improve accuracy, and optimizing inference for low latency in real-time safety applications.

Sam Al-Attiyah is head of machine learning at Infineon Technologies.

Special Section: AI Design

The post Designing edge AI for industrial applications appeared first on EDN.
