Delay lines demystified: Theory into practice

Delay lines are more than passive timing tricks—they are deliberate design elements that shape how signals align, synchronize, and stabilize across systems. From their theoretical roots in controlled propagation to their practical role in high-speed communication, test equipment, and signal conditioning, delay lines bridge abstract timing concepts with hands-on engineering solutions.
This article unpacks their principles, highlights key applications, and shows how understanding delay lines can sharpen both design insight and performance outcomes.
Delay lines: Fundamentals and classifications
Delay lines remain a fundamental building block in circuit design, offering engineers a straightforward means of controlling signal timing. From acoustic propagation experiments to precision imaging in optical coherence tomography, these elements underpin a wide spectrum of applications where accurate delay management is critical.
Although delay lines are ubiquitous, many engineers rarely encounter their underlying principles. At its core, a delay line is a device that shifts a signal in time, a deceptively simple function with wide-ranging utility. Depending on the application, this capability finds its way into countless systems. Broadly, delay lines fall into three physical categories—electrical, optical, and mechanical—and, from a signal-processing perspective, into two functional classes: analog and digital.
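To make the time-shift idea concrete, here is a minimal sketch of a digital delay line built from a circular buffer; the 48-sample length and 48-kHz sample rate are arbitrary values chosen only for illustration.

```python
# Minimal integer-sample digital delay line using a circular buffer.
# Buffer length and sample rate are illustrative, not tied to any device.

class DelayLine:
    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples   # holds the last N input samples
        self.idx = 0

    def process(self, x):
        y = self.buf[self.idx]             # oldest sample comes out...
        self.buf[self.idx] = x             # ...newest sample goes in
        self.idx = (self.idx + 1) % len(self.buf)
        return y

# Example: 48 samples of delay at a 48-kHz sample rate is a 1-ms time shift.
fs = 48_000
dly = DelayLine(delay_samples=48)
impulse = [1.0] + [0.0] * 99
out = [dly.process(s) for s in impulse]
print(out.index(1.0) / fs)                 # -> 0.001 s
```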
Analog delay lines (ADLs), often referred to as passive delay lines, are built from fundamental electrical components such as capacitors and inductors. They can process both analog and digital signals, but their passive construction means the signal is somewhat attenuated between the input and output terminals.
In contrast, digital delay lines (DDLs), commonly described as active delay lines, operate exclusively on digital signals. Constructed entirely from digital logic, they introduce no attenuation between terminals. Among DDL implementations, CMOS remains by far the most widely adopted logic family.
When classified by time control, delay lines fall into two categories: fixed and variable. Fixed delay lines provide a preset delay period determined by the manufacturer, which cannot be altered by the circuit designer. While generally less expensive, they are often less flexible in practical use.
Variable delay lines, by contrast, allow designers to adjust the magnitude of the delay. However, this tunability is bounded—the delay can only be varied within limits specified by the manufacturer, rather than across an unlimited range.
As a quick aside, bucket-brigade delay lines (BBDs) represent a distinctive form of analog delay. Implemented as a chain of capacitors clocked in sequence, they pass the signal step-by-step much like a line of workers handing buckets of water. The result is a time-shifted output whose delay depends on both the number of stages and the clock frequency.
While limited in bandwidth and prone to noise, BBDs became iconic in audio processing—powering classic chorus, flanger, and delay effects—and remain valued today for their warm, characterful sound despite the dominance of digital alternatives.
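For a rough feel of the numbers, the widely used BBD approximation t ≈ N/(2·f_clk) relates delay to stage count and clock frequency; the sketch below uses an illustrative 512-stage device and a plausible clock range rather than figures from any particular datasheet.

```python
# Classic BBD delay estimate: the two-phase clock moves each sample two
# stages per clock period, so delay = stages / (2 * f_clock).

def bbd_delay_s(stages, f_clock_hz):
    return stages / (2.0 * f_clock_hz)

stages = 512                            # illustrative stage count
for f_clk in (10e3, 50e3, 100e3):       # plausible clock range, Hz
    print(f"{f_clk / 1e3:5.0f} kHz clock -> {bbd_delay_s(stages, f_clk) * 1e3:6.2f} ms delay")
# A slower clock gives a longer delay but less audio bandwidth: the core BBD trade-off.
```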
Other specialized forms of delay lines include acoustic devices (often ultrasonic), magnetostrictive implementations, surface acoustic wave (SAW) structures, and electromagnetic bandgap (EBG) delay lines. These advanced designs exploit material properties or engineered periodic structures to achieve controlled signal delay in niche applications ranging from ultrasonic sensing to microwave phased arrays.
There are more delay line types, but I deliberately omitted them here to keep the focus on the most widely used and practically relevant categories for designers.

Figure 1 The nostalgic MN3004 BBD showcases its classic package and vintage analog heritage. Source: Panasonic
Retro Note: Many grey-bearded veterans can recall the era when memory was not etched in silicon but rippled through wire. In magnetostrictive delay line memories, bits were stored as acoustic pulses traveling through nickel wire. A magnetic coil would twist the wire to launch a pulse, which propagated mechanically, was sensed at the far end, and was then amplified and recirculated.
These memories were sequential, rhythmic, and beautifully analog, echoing the pulse logic of early radar and computing systems. Mercury delay line memories offered a similar acoustic storage medium in liquid form, prized for its stable acoustic properties. Though long obsolete, they remain a tactile reminder of a time when data moved not as electrons, but as vibrations.
And from my recollection of color television delay lines, a delay line keeps the faster, high-definition luminance signal (Y) in step with the slower, low-definition chrominance signal (C). Because the narrow-band chrominance requires more processing than the wide-band luminance, a brief but significant delay is introduced. The delay line compensates for this difference, ensuring that both signals begin scanning across the television screen in perfect synchrony.
Selecting the right delay line
It’s now time to focus on choosing a delay line that will function effectively in your circuit. To ensure compatibility with your electrical network, you should pay close attention to three key specifications. The first is line type, which determines whether you need a fixed or variable delay line and whether it must handle analog or digital signals.
The second is rise time, generally defined as the interval required for a signal’s magnitude to increase from 10% to 90% of its final amplitude. The third is time delay, the actual duration by which the delay line slows down the signal, expressed in units of time. Considering these parameters together will guide you toward a delay line that matches both the functional and performance requirements of your design.
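As a quick screening aid, the sketch below ties the rise-time spec to the familiar first-order relation t_r ≈ 0.35/BW and checks it against the design's delay and edge-rate needs; all numbers are placeholders to be replaced with your own requirements.

```python
# Quick delay-line screening check using the first-order relation
# t_rise ~ 0.35 / BW (10% to 90% rise time versus -3 dB bandwidth).

def bandwidth_hz(t_rise_s):
    return 0.35 / t_rise_s

# Placeholder requirements and candidate specs; substitute your own values.
required_delay_ns  = 10.0    # delay the design needs
signal_rise_ns     = 2.0     # fastest edge the line must pass
candidate_delay_ns = 10.0    # vendor "time delay" spec
candidate_rise_ns  = 1.5     # vendor "rise time" spec

delay_ok = abs(candidate_delay_ns - required_delay_ns) < 0.5
edges_ok = candidate_rise_ns < signal_rise_ns   # line must be faster than the signal
print(f"Implied line bandwidth: {bandwidth_hz(candidate_rise_ns * 1e-9) / 1e6:.0f} MHz")
print("delay OK:", delay_ok, "| edge fidelity OK:", edges_ok)
```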

Figure 2 A retouched snip from the legacy DS1021 datasheet shows its key specifications. Source: Analog Devices
Keep in mind that the DS1021 device, once a staple programmable delay line, is now obsolete. Comparable functionality is available in the DS1023 or in modern timing ICs such as the LTC6994, which offer finer programmability and ongoing support.
Digital-to-time converters: Modern descendants of delay lines
Digital-to-time converters (DTCs) represent the contemporary evolution of delay line concepts. Whereas early delay lines stored bits as acoustic pulses traveling through wire or mercury, a DTC instead maps a digital input word directly into a precise time delay or phase shift.
This enables designers to control timing edges with sub-nanosecond accuracy, a capability central to modern frequency synthesizers, clock generation, and high-speed signal processing. In effect, DTCs carry forward the spirit of delay lines—transforming digital code into controlled timing—but with the precision, programmability, and integration demanded by today’s systems.
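Conceptually, a DTC is little more than a code-to-time mapping. The toy model below assumes an 8-bit code, a 10-ps LSB, and a fixed offset purely for illustration; real devices add calibration and nonlinearity correction on top.

```python
# Idealized DTC model: output delay = offset + code * LSB.
# The 8-bit width, 10-ps LSB, and offset are assumptions for illustration only.

BITS      = 8
LSB_PS    = 10.0     # delay resolution per code step, ps
OFFSET_PS = 150.0    # fixed propagation delay through the cell, ps

def dtc_delay_ps(code):
    code = max(0, min(code, 2**BITS - 1))   # clamp to the valid code range
    return OFFSET_PS + code * LSB_PS

print(dtc_delay_ps(0))      # 150.0 ps (minimum delay)
print(dtc_delay_ps(128))    # 1430.0 ps
print(dtc_delay_ps(255))    # 2700.0 ps (full scale)
```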
On the practical side, unlike classic delay-line ICs that were sold as standalone parts, DTCs are typically embedded within larger timing devices such as fractional-N PLLs and clock-generation ICs, or implemented in FPGAs and ASICs. Designers will not usually find a catalog chip labeled “DTC,” but they will encounter the function inside modern frequency synthesizers and RF transceivers.
This integration reflects the shift from discrete delay elements to highly integrated timing blocks, where DTCs deliver picosecond-level resolution, built-in calibration, and jitter control as part of a broader system-on-chip (SoC) solution.
Wrap-up: Delay lines for makers
For hobbyists and makers, the PT2399 IC has become a refreshing antidote to the fog of complexity.

Figure 3 PT2399’s block diagram illustrates internal functional blocks. Source: PTC
Originally designed as a digital echo processor, it integrates a simple delay line engine that can be coaxed into audio experiments without the steep learning curve of PLLs or custom DTC blocks. With just a handful of passive components, PT2399 lets enthusiasts explore echoes, reverbs, and time-domain tricks, inspiring them to get their hands dirty with audio and delay line projects.
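Under the hood, any sampled delay such as the PT2399's obeys delay = stored samples / sampling rate, so stretching the delay means lowering the sampling rate. The sketch below illustrates that scaling with made-up numbers, not PT2399 datasheet figures.

```python
# Generic sampled-delay scaling: delay = stored samples / sampling rate.
# RAM depth and sample rates below are made-up placeholders, not PT2399
# datasheet figures; they only illustrate how the scaling works.

def sampled_delay_ms(ram_samples, f_sample_hz):
    return 1e3 * ram_samples / f_sample_hz

ram_samples = 20_000                     # hypothetical RAM depth
for f_s in (200e3, 100e3, 50e3):         # hypothetical sample rates, Hz
    print(f"{f_s / 1e3:5.0f} kHz sampling -> {sampled_delay_ms(ram_samples, f_s):6.1f} ms delay")
# Slower sampling stretches the delay but shrinks audio bandwidth, which is
# why very long settings on such chips sound darker and noisier.
```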
In many ways, it democratizes the spirit of delay lines, bringing timing control out of the lab and into the workshop, where curiosity and soldering irons meet. And yes, I will add some complex design pointers in the seasoned landscape—but after some lines of delay.
Well, delay lines may have shifted from acoustic pulses to embedded timing blocks, but they still invite engineers to explore timing hands‑on.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Trip points for IC-timing analysis
- Timing is everything in SOC design
- On-chip variation and timing closure
- Timing semiconductors get software aid
- Deriving design margins for successful timing closure
The post Delay lines demystified: Theory into practice appeared first on EDN.
CES 2026: AI, automotive, and robotics dominate

If the Consumer Electronics Show (CES) is a benchmark for what’s next in the electronic component industry, you’ll find that artificial intelligence permeates all industries, from consumer electronics and wearables to automotive and robotics. Many chipmakers are placing big bets on edge AI as a key growth area, along with robotics and IoT.
Here’s a sampling of the latest devices and technologies launched at CES 2026, covering AI advances for automotive, robotics, and wearables applications.
AI SoCs, chiplets, and development
Ambarella Inc. announced its CV7 edge AI vision system-on-chip (SoC), optimized for a wide range of AI perception applications, such as advanced AI-based 8K consumer products (action and 360° cameras), multi-imager enterprise security cameras, robotics (aerial drones), industrial automation, and high-performance video conferencing devices. The 4-nm SoC provides simultaneous multi-stream video and advanced on-device edge AI processing while consuming very low power.
The CV7 may also be used for multi-stream automotive designs, particularly for those running convolutional neural networks (CNNs) and transformer-based networks at the edge, such as AI vision gateways and hubs in fleet video telematics, 360° surround-view and video-recording applications, and passive advanced driver-assistance systems (ADAS).
Compared with its predecessor, the CV7 consumes 20% less power, thanks in part to Samsung’s 4-nm process technology, which is Ambarella’s first on this node, the company said. It incorporates Ambarella’s proprietary AI accelerator, image-signal processor (ISP), and video encoding, together with Arm cores, I/Os, and other functions for an efficient AI vision SoC.
The high AI performance is powered by Ambarella’s proprietary, third-generation CVflow AI accelerator, with more than 2.5× AI performance over the previous-generation CV5 SoC. This allows the CV7 to support a combination of CNNs and transformer networks, running in tandem.
In addition, the CV7 provides higher-performance ISP, including high dynamic range (HDR), dewarping for fisheye cameras, and 3D motion-compensated temporal filtering with better image quality than its predecessor, thanks to both traditional ISP techniques and AI enhancements. It provides high image quality in low light, down to 0.01 lux, as well as improved HDR for video and images.
Other upgrades include hardware-accelerated video encoding (H.264, H.265, MJPEG), which boosts encode performance by 2× over the CV5, and an on-chip general-purpose processing upgrade to a quad-core Arm Cortex-A73, offering 2× higher CPU performance than the previous SoC. It also provides a 64-bit DRAM interface, delivering a significant improvement in available DRAM bandwidth compared with the CV5, Ambarella said. CV7 SoC samples are available now.
Ambiq Micro Inc. delivers the industry’s first ultra-low-power neural processing unit (NPU) built on its Subthreshold Power Optimized Technology (SPOT) platform. It is designed for real-time, always-on AI at the edge.
Delivering both performance and low power consumption, the SPOT-optimized NPU is claimed as the first to leverage sub- and near-threshold voltage operation for AI acceleration to deliver leading power efficiency for complex edge AI workloads. It leverages the Arm Ethos-U85 NPU, which supports sparsity and on-the-fly decompression, enabling compute-intensive workloads directly on-device, with 200 GOPS of on-device AI performance.
It also incorporates SPOT-based ultra-wide-range dynamic voltage and frequency scaling that enables operation at lower voltage and lower power than previously possible, Ambiq said, making room in the power budget for higher levels of intelligence.
Ambiq said the Atomiq SoC enables a new class of high-performance, battery-powered devices that were previously impractical due to power and thermal constraints. One example is smart cameras and security for always-on, high-resolution object recognition and tracking without frequent recharging or active cooling.
For development, Ambiq offers the Helia AI platform, together with its AI development kits and the modular neuralSPOT software development kit.
Ambiq’s Atomiq SoC (Source: Ambiq Micro Inc.)
On the development side, Cadence Design Systems Inc. and its IP partners are delivering pre-validated chiplets, targeting physical AI, data center, and high-performance computing (HPC) applications. Cadence announced at CES a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. Initial IP partners include Arm, Arteris, eMemory, M31 Technology, Silicon Creations, and Trilinear Technologies, as well as silicon analytics partner proteanTecs.
The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets. To help reduce risk, Cadence is also collaborating with Samsung Foundry to build out a silicon prototype demonstration of the Cadence physical AI chiplet platform. This includes pre-integrated partner IP on the Samsung Foundry SF5A process.
Extending its close collaboration with Arm, Cadence will use Arm’s advanced Zena Compute Subsystem and other essential IP for the physical AI chiplet platform and chiplet framework. The solutions will meet edge AI processing requirements for automobiles, robotics, and drones, as well as standards-based I/O and memory chiplets for data center, cloud, and HPC applications.
These chiplet architectures are standards-compliant for broad interoperability across the chiplet ecosystem, including the Arm Chiplet System Architecture and future OCP Foundational Chiplet System Architecture. Cadence’s Universal Chiplet Interconnect Express (UCIe) IP provides industry-standard die-to-die connectivity, with a protocol IP portfolio that enables fast integration of interfaces such as LPDDR6/5X, DDR5-MRDIMM, PCI Express 7.0, and HBM4.
Cadence’s physical AI chiplet platform (Source: Cadence Design Systems Inc.)
NXP Semiconductors N.V. launched its eIQ Agentic AI Framework at CES 2026, which simplifies agentic AI development and deployment for both expert and novice device makers. It is one of the first solutions to enable agentic AI development at the edge, according to the company. The framework works together with NXP’s secure edge AI hardware to help simplify agentic AI development and deployment for autonomous AI systems at the edge and eliminate development bottlenecks with deterministic real-time decision-making and multi-model coordination.
Offering low latency and built-in security, the eIQ Agentic AI Framework is designed for real-time, multi-model agentic workloads, including applications in robotics, industrial control, smart buildings, and transportation. A few examples cited include instantly controlling factory equipment to mitigate safety risks, alerting medical staff to urgent conditions, updating patient data in real time, and autonomously adjusting HVAC systems, without cloud connectivity.
Expert developers can integrate sophisticated, multi-agent workflows into existing toolchains, while novice developers can quickly build functional edge-native agentic systems without deep technical experience.
The framework integrates hardware-aware model preparation and automated tuning workflows. It enables developers to run multiple models in parallel, including vision, audio, time series, and control, while maintaining deterministic performance in constrained environments, NXP said. Workloads are distributed across CPU, NPU, and integrated accelerators using an intelligent scheduling engine.
The eIQ Agentic AI Framework supports the i.MX 8 and i.MX 9 families of application processors and Ara discrete NPUs. It aligns with open agentic standards, including Agent to Agent and Model Context Protocol.
NXP has also introduced its eIQ AI Hub, a cloud-based developer platform that gives users access to edge AI development tools for faster prototyping. Developers can deploy on cloud-connected hardware boards but still have the option for on-premise deployments.
NXP’s Agentic AI framework (Source: NXP Semiconductors N.V.)
Sensing solutions
Bosch Sensortec launched its BMI5 motion sensor platform at CES 2026, targeting high-precision performance for a range of applications, including immersive XR systems, advanced robotics, and wearables. The new generation of inertial sensors—BMI560, BMI563, and BMI570—is built on the same hardware and is adapted through intelligent software.
Based on Bosch’s latest MEMS architecture, these inertial sensors, housed in an LGA package, claim ultra-low noise and exceptional vibration robustness. They offer twice the full-scale range of the previous generation. Key specifications include a latency of less than 0.5 ms, combined with a time increment of approximately 0.6 µs, and a timing resolution of 1 ns, which can deliver responsive motion tracking in highly dynamic environments.
The sensors also leverage a programmable edge AI classification engine that supports always-on functionality by analyzing motion patterns directly on the sensor. This reduces system power consumption and accelerates customer-specific use cases, the company said.
The BMI560, optimized for XR headsets and glasses, delivers low noise, low latency, and precise time synchronization. Its advanced OIS+ performance helps capture high-quality footage even in dynamic environments for smartphones and action cameras.
Targeting robotics and XR controllers, the BMI563 offers an extended full-scale range with the platform’s vibration robustness. It supports simultaneous localization and mapping, high dynamic XR motion tracking, and motion-based automatic scene tagging in action cameras.
The BMI570, optimized for wearables and hearables, delivers activity tracking, advanced gesture recognition, and accurate head-orientation data for spatial audio. Thanks to its robustness, it is suited for next-generation wearables and hearables.
Samples are now available for direct customers. High-volume production is expected to start in the third quarter of 2026.
Bosch also announced the BMI423 inertial measurement unit (IMU) at CES. The BMI423 IMU offers an extended measurement range of ±32 g (accelerometer) and ±4,000 dps (gyroscope), which enable precise tracking of fast, dynamic motion, making it suited for wearables, hearables, and robotics applications.
The BMI423 delivers low current consumption of 25 µA for always-on, acceleration-based applications in small devices. Other key specifications include low noise levels of 5.5 mdps/√Hz (gyro) and 90 µg/√Hz (≤ 8 g) or 120 µg/√Hz (≥ 16 g) (accelerometer), along with several interface options including I3C, I2C, and serial peripheral interface (SPI).
For wearables and hearables, the BMI423 integrates voice activity detection based on bone-conduction sensing, which helps save power while enhancing privacy, Bosch said. The sensor detects when a user is speaking and activates the microphone only when required. Other on-board functions include wrist-gesture recognition, multi-tap detection, and step counting, allowing the main processor to remain in sleep mode until needed and extending battery life in compact devices such as smartwatches, earbuds, and fitness bands.
The BMI423 is housed in a compact, 2.5 × 3 × 0.8-mm LGA package for space-constrained devices. It will be available through Bosch Sensortec’s distribution partners starting in the third quarter of 2026.
Bosch Sensortec’s BMI563 IMU for robotics (Source: Bosch Sensortec)
Also targeting hearables and wearables, TDK Corp. launched a suite of InvenSense SmartMotion custom sensing solutions for true wireless stereo (TWS) earbuds, AI glasses, augmented-reality eyewear, smartwatches, fitness bands, and other IoT devices. The three newest IMUs are based on TDK’s latest ultra-low-power, high-performance ICM-456xx family that offers edge intelligence for consumer devices at the highest motion-tracking accuracy, according to the company.
Instead of relying on central processors, SmartMotion on-chip software offloads motion-tracking computation to the sensor itself so that intelligent decisions can be made locally, allowing other parts of the system to remain in low-power mode, TDK said. In addition, the sensor fusion algorithm and machine-learning capability are reported to deliver seamless motion sensing with minimal software effort by the customer.
The SmartMotion solutions, based on the ICM-456xx family of six-axis IMUs, include the SmartMotion ICM-45606 for TWS applications including earbuds, headphones, and other hearable products; the SmartMotion ICM-45687 for wearable and IoT technology; and the SmartMotion for Smart Glasses ICM-45685, which now enables new features, including sensing whether users are putting glasses on or taking glasses off (wear detection) and vocal vibration detection for identifying the source of the speech through its on-chip sensor fusion algorithms. The ICM-45685 also enables high-precision head-orientation tracking, optical/electronic image stabilization, intuitive UI control, posture recognition, and real-time translation.
TDK’s SmartMotion ICM-45685 (Source: TDK Corp.)
TDK also announced a new group company, TDK AIsight, to address technologies needed for AI glasses. The company will focus on the development of custom chips, cameras, and AI algorithms enabling end-to-end system solutions. This includes combining software technologies such as eye intent/tracking and multiple TDK technologies, such as sensors, batteries, and passive components.
As part of the launch, TDK AIsight introduced the SED0112 microprocessor for AI glasses. The next-generation, ultra-low-power digital-signal processor (DSP) platform integrates a microcontroller (MCU), state machine, and hardware CNN engine. The built-in hardware CNN architecture is optimized for eye intent. The MCU features ultra-low-power DSP processing, eyeGenI sensors, and connection to a host processor.
The SED0112, housed in a 4.6 × 4.6-mm package, supports the TDK AIsight eyeGI software and multiple vision sensors at different resolutions. Commercial samples are available now.
SDV devices and development
Infineon Technologies AG and Flex launched their Zone Controller Development Kit. The modular design for zone control units (ZCUs) is aimed at accelerating the development of software-defined-vehicle (SDV)-ready electrical/electronic architectures. Delivering a scalable solution, the development kit combines about 30 unique building blocks.
With the building block approach, developers can right-size their designs for different implementations while preserving feature headroom for future models, the company said. The design platform enables over 50 power distribution, 40 connectivity, and 10 load control channels for evaluation and early application development. A dual MCU plug-on module is available for high-end ZCU implementations that need high I/O density and computational power.
The development kit enables all essential zone control functions, including I2t (ampere-squared seconds), overcurrent protection, overvoltage protection, capacitive load switching, reverse-polarity protection, secure data routing with hardware accelerators, A/B swap for over-the-air software updates, and cybersecurity. The pre-validated hardware combines automotive semiconductor components from Infineon, including AURIX MCUs, OPTIREG power supply, PROFET and SPOC smart power switches, and MOTIX motor control solutions with Flex’s design, integration, and industrialization expertise. Pre-orders for the Zone Controller Development Kit are open now.
Infineon and Flex’s Zone Controller Development Kit (Source: Infineon Technologies AG)
Infineon also announced a deeper collaboration with HL Klemove to advance technologies in vehicle electronic architectures for SDVs and autonomous driving. This strategic partnership will leverage Infineon’s semiconductor and system expertise with HL Klemove’s capabilities in advanced autonomous-driving systems.
The three key areas of collaboration are ZCUs, vehicle Ethernet-based ADAS and camera solutions, and radar technologies.
The companies will jointly develop zone controller applications using Infineon’s MCUs and power semiconductors, with HL Klemove as the lead in application development. Enabling high-speed in-vehicle network solutions, the partnership will also develop front camera modules and ADAS parking control units, leveraging Infineon’s Ethernet technology, while HL Klemove handles system and product development.
Lastly, HL Klemove will use Infineon’s radar semiconductor solutions to develop high-resolution and short-range satellite radar. They will also develop high-resolution imaging radar for precise object recognition.
NXP introduced its S32N7 super-integration processor series, designed to centralize core vehicle functions, including propulsion, vehicle dynamics, body, gateway, and safety domains. Targeting SDVs, the S32N7 series, with access to core vehicle data and high compute performance, becomes the central AI control point.
Enabling scalable hardware and software across models and brands, the S32N7 simplifies vehicle architectures and reduces total cost of ownership by as much as 20%, according to NXP, by eliminating dozens of hardware modules and delivering enhanced efficiencies in wiring, electronics, and software.
NXP said that by centralizing intelligence, automakers can scale intelligent features, such as personalized driving, predictive maintenance, and virtual sensors. In addition, the high-performance data backbone on the S32N7 series provides a future-proof path for upgrading to the latest AI silicon without re-architecting the vehicle.
The S32N7 series, part of NXP’s S32 automotive processing platform, offers 32 compatible variants that provide application and real-time compute with high-performance networking, hardware isolation technology, AI, and data acceleration on an SoC. They also meet the strict timing, safety, and security requirements of the vehicle core.
Bosch announced that it is the first to deploy the S32N7 in its vehicle integration platform. NXP and Bosch have co-developed reference designs, safety frameworks, hardware integration, and an expert enablement program.
The S32N79, the superset of the series, is sampling now with customers.
NXP’s S32N7 super-integration processor series (Source: NXP Semiconductors N.V.)
Texas Instruments Inc. (TI) expanded its automotive portfolio for ADAS and SDVs with a range of automotive semiconductors and development resources for automotive safety and autonomy across vehicle models. The devices include the scalable TDA5 HPC SoC family, which offers power- and safety-optimized processing and edge AI; the single-chip AWR2188 8 × 8 4D imaging radar transceiver, designed to simplify high-resolution radar systems; and the DP83TD555J-Q1 10BASE-T1S Ethernet physical layer (PHY).
The TDA5 SoC family offers edge AI acceleration from 10 TOPS to 1,200 TOPS, with power efficiency beyond 24 TOPS/W. This scalability is enabled by its chiplet-ready design with UCIe interface technology, TI said, enabling designers to implement different feature sets.
The TDA5 SoCs provide up to 12× the AI computing of previous generations with similar power consumption, thanks to the integration of TI’s C7 NPU, eliminating the need for thermal solutions. This performance supports billions of parameters within language models and transformer networks, which increases in-vehicle intelligence while maintaining cross-domain functionality, the company said. It also features the latest Arm Cortex-A720AE cores, enabling the integration of more safety, security, and computing applications.
Supporting up to SAE Level 3 vehicle autonomy, the TDA5 SoCs target cross-domain fusion of ADAS, in-vehicle infotainment, and gateway systems within a single chip and help automakers meet ASIL-D safety standards without external components.
TI is partnering with Synopsys to provide a virtual development kit for TDA5 SoCs. The digital-twin capabilities help engineers accelerate time to market for their SDVs by up to 12 months, TI said.
The AWR2188 4D imaging radar transceiver integrates eight transmitters and eight receivers into a single launch-on-package chip for both satellite and edge architectures. This integration simplifies higher-resolution radar systems because 8 × 8 configurations do not require cascading, TI said, while scaling up to higher channel counts requires fewer devices.
The AWR2188 offers enhanced analog-to-digital converter data processing and a radar chirp signal slope engine, both supporting 30% faster performance than currently available solutions, according to the company. It supports advanced radar use cases such as detecting lost cargo, distinguishing between closely positioned vehicles, and identifying objects in HDR scenarios. The transceiver can detect objects with greater accuracy at distances greater than 350 meters.
With Ethernet an enabler of SDVs and higher levels of autonomy, the DP83TD555J-Q1 10BASE-T1S Ethernet SPI PHY with an integrated media access controller offers nanosecond time synchronization, as well as high reliability and Power over Data Line capabilities. This brings high-performance Ethernet to vehicle edge nodes and reduces cable design complexity and costs, TI said.
The TDA54 software development kit is now available on TI.com. The TDA54-Q1 SoC, the first device in the family, will be sampling to select automotive customers by the end of 2026. Pre-production quantities of the AWR2188 transceiver, AWR2188 evaluation module, DP83TD555J-Q1 10BASE-T1S Ethernet PHY, and evaluation module are now available on request at TI.com.
Robotics: processors and modules
Qualcomm Technologies Inc. introduced a next-generation robotics comprehensive-stack architecture that integrates hardware, software, and compound AI. As part of the launch, Qualcomm also introduced its latest, high-performance robotics processor, the Dragonwing IQ10 Series, for industrial autonomous mobile robots and advanced full-sized humanoids.
The Dragonwing industrial processor roadmap supports a range of general-purpose robotics form factors, including humanoid robots from Booster, VinMotion, and other global robotics providers. The architecture supports advanced perception and motion planning with end-to-end AI models such as VLAs and VMAs, enabling generalized manipulation capabilities and human-robot interaction.
Qualcomm’s general-purpose robotics architecture with the Dragonwing IQ10 combines heterogeneous edge computing, edge AI, mixed-criticality systems, software, machine-learning operations, and an AI data flywheel, along with a partner ecosystem and a suite of developer tools. This portfolio enables robots to reason and adapt to the spatial and temporal environments intelligently, Qualcomm said, and is optimized to scale across various form factors with industrial-grade reliability.
Qualcomm’s growing partner ecosystem for its robotics platforms includes Advantech, APLUX, AutoCore, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion.
Qualcomm’s Dragonwing IQ10 industrial processor (Source: Qualcomm Technologies Inc.)
Quectel Wireless Solutions released its SH602HA-AP smart robotic computing module. Based on the D-Robotics Sunrise 5 (X5M) chip platform and with an integrated Ubuntu operating system, the module features up to 10 TOPS of brain-processing-unit computing power. The robotic computing modules target demanding robotic workloads, supporting advanced large-scale models such as Transformer, Bird’s-Eye View, and Occupancy.
The module works seamlessly with Quectel’s independent LTE Cat 1, LTE Cat 4, 5G, Wi-Fi 6, and GNSS modules, offering expanded connectivity options and a broader range of robotics use cases. These include smart displays, express lockers, electricity equipment, industrial control terminals, and smart home appliances.
The module, measuring 40.5 × 40.5 × 2.9 mm, operates over the –25°C to 85°C temperature range. It ships with a default memory configuration of 4 GB plus 32 GB, with numerous other memory options available. It supports data input and fusion processing for multiple sensors, including LiDAR, structured light, time-of-flight, and voice, meeting the AI and vision requirements of robotic applications.
The module supports 4k video at 60 fps with video encoding and decoding, binocular depth processing, AI and visual simultaneous localization and mapping, speech recognition, 3D point-cloud computing, and other mainstream robot perception algorithms. It provides Bluetooth, DSI, RGMII, USB 3.0, USB 2.0, SDIO, QSPI, seven UART, seven I2C, and two I2S interfaces.
The module integrates easily with additional Quectel modules, such as the KG200Z LoRa and the FCS950 Wi-Fi and Bluetooth module for more connectivity options.
Quectel’s SH602HA-AP smart robotic computing module (Source: Quectel Wireless Solutions)
The post CES 2026: AI, automotive, and robotics dominate appeared first on EDN.
Power Tips #149: Boosting EV charger efficiency and density with single-stage matrix converters

An onboard charger converts power between the power grid and electric vehicles or hybrid electric vehicles. Traditional systems use two stages of power conversion: a boost converter to implement unity power factor, and an isolated DC/DC converter to charge the batteries with isolation. Obviously, these two stages require additional components that decrease power density and increase costs.
Matrix converters use a single stage of conversion without a boost inductor and bulky electrolytic capacitors. When using bidirectional gallium nitride (GaN) power switches, the converters further reduce component count and increase power density.
Comparing two-stage power converters with single-stage matrix converters
A two-stage power converter, as shown in Figure 1, requires a boost inductor (LB) and a DC-link electrolytic capacitor (CB), as well as four metal-oxide semiconductor field-effect transistors (MOSFETs) for totem-pole power factor correction (PFC).
Figure 1 Two-stage power converter diagram with LB, CB, and four MOSFETs for totem-pole PFC. Source: Texas Instruments
A single-stage matrix converter, as shown in Figure 2, does not require a boost inductor nor a DC-link capacitor but does require bidirectional switches (S11 and S12). Connecting common drains or common sources of two individual MOSFETs forms the bidirectional switches. Alternatively, when adopting bidirectional GaN devices in matrix converters, the number of switches decreases. Table 1 compares the two types of converters.
Figure 2 Single-stage matrix converter diagram that does not require LB or CB, but necessitates the use of two bidirectional switches: S11 and S12. Source: Texas Instruments
| | Two-stage power converter (totem pole power factor correction plus DC/DC) | Single-stage matrix converter |
|---|---|---|
| Boost inductor | Yes | No |
| DC-link electrolytic capacitor | Yes | No |
| Fast unidirectional switches | 10 | 4 |
| Bidirectional switches | 0 | 4 |
| Slow switches | 2 | 0 |
| Electromagnetic interference filter | Smaller | Larger |
| Input/output ripple current | Smaller | Larger |
| Power density | Lower | Higher |
| Power efficiency | Lower | Higher |
| Control algorithm | Simple | Complicated |
Table 1 A two-stage AC/DC and single-stage matrix converter comparison. Source: Texas Instruments
Single-stage matrix converter topologies
There are three major topologies applied to EV onboard charger applications.
Topology No. 1: The LLC topology
Figure 3 shows the inductor-inductor-capacitor (LLC) topology. The LLC converter regulates current or voltage by modulating switching frequencies. Lr and Cr form a resonant tank to shape the resonant current. Selecting the proper control algorithms will achieve a unity power factor.
With a three-phase AC input, the voltage ripple on the primary side is much smaller compared to a single-phase AC input. Therefore, the LLC topology is more suitable for three-phase applications. LLC converters operate at a higher frequency and realize a wider range of zero voltage switching (ZVS) than other topologies.
Figure 3 An LLC-based matrix converter with a three-phase AC input. Source: Texas Instruments
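For orientation, the Lr-Cr tank's series resonant frequency, f_r = 1/(2π√(LrCr)), anchors the frequency-modulation control described above; the component values in the sketch below are arbitrary examples.

```python
import math

# Series resonant frequency of the Lr-Cr tank: f_r = 1 / (2*pi*sqrt(Lr*Cr)).
# Component values are arbitrary examples, not from a specific design.

def resonant_freq_hz(Lr_h, Cr_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(Lr_h * Cr_f))

Lr = 12e-6   # 12 uH resonant inductor (example)
Cr = 47e-9   # 47 nF resonant capacitor (example)
print(f"f_r = {resonant_freq_hz(Lr, Cr) / 1e3:.0f} kHz")
# Running above f_r generally reduces gain and running below increases it,
# which is the lever the controller uses to regulate charging current or voltage.
```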
Topology No. 2: The DAB topology
Figure 4 shows a dual active bridge (DAB)-based matrix converter. The DAB topology can apply to a three-phase or single-phase AC input. Controlling the inductor current will realize unity power factor naturally. The goal of a control algorithm is to realize a wide ZVS range to reduce switching losses, reduce root-mean-square (RMS) current to reduce conduction losses, and achieve low current total harmonic distortion and unity power factor.
Triple-phase shift is necessary to achieve these goals, including primary-side internal phase shift, secondary-side internal phase shift, and external phase shift between the primary side and secondary side. Additionally, modulating the switching frequency will extend the ZVS range.
Figure 4 A DAB-based matrix converter with a single-phase AC input. Source: Texas Instruments
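For intuition, the well-known single-phase-shift DAB power relation shows how the external phase shift alone steers power; TPS then adds the two internal shifts to widen ZVS and trim RMS current. The sketch below evaluates that baseline relation with assumed example values.

```python
import math

# Single-phase-shift DAB power flow, the baseline that TPS control builds on:
#   P = n*V1*V2 / (2*pi*fs*L) * phi * (1 - |phi|/pi),  phi in radians.
# All numeric values are illustrative assumptions.

def dab_power_w(V1, V2, n, fs_hz, L_h, phi_rad):
    return (n * V1 * V2) / (2 * math.pi * fs_hz * L_h) * phi_rad * (1 - abs(phi_rad) / math.pi)

V1, V2 = 325.0, 400.0           # primary and secondary bridge voltages (examples)
n, fs, L = 1.0, 100e3, 30e-6    # turns ratio, switching frequency, series inductance
for deg in (15, 30, 45, 60, 90):
    phi = math.radians(deg)
    print(f"phi = {deg:2d} deg -> P = {dab_power_w(V1, V2, n, fs, L, phi) / 1e3:5.2f} kW")
# Power peaks at phi = 90 degrees; small shifts give fine control at light load.
```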
Topology No. 3: The SR-based topology
Figure 5 shows a series resonant (SR) matrix converter. The resonant tank formed by Lr and Cr shapes the transformer current to reduce turnoff current and turnoff losses. Meanwhile, the reactive power is reduced, as are conduction and switching losses. Compared to the LLC topology, the switching frequency of SR matrix converters is fixed, but higher than the resonant frequency.
Figure 5 An SR-based matrix converter with a single-phase AC input. Source: Texas Instruments
The control algorithm of single-stage matrix converters
In an LLC topology-based onboard charger with a three-phase AC input, switching frequency modulation regulates the charging current or voltage and uses space vector control based on grid polarity. The voltage ripple applied to the resonant tank is small. The resonant tank determines gain variations and affects the converter’s operation.
A DAB or SR DAB-based onboard charger usually adopts triple-phase shift (TPS) control to naturally achieve unity power factor, a wide ZVS range, and low RMS current. Optimizing switching frequencies further reduces both conduction and switching losses.
Figure 6 illustrates pulse width modulation (PWM) waveforms of TPS control of matrix converters for a half AC cycle (for example, Vac > 0). Figure 4 shows where PWMs connect to the power switches: d1 denotes the internal phase shift between PWM1A and PWM4A, d2 denotes the internal phase shift between PWM5A and PWM6A, and d3 denotes the external phase shift between the middle point of d1 and d2. PWM1B and PWM4B are gate drives for the second pair of bidirectional switches.
Figure 6 TPS PWM waveforms for a single-stage matrix converter for a half AC cycle. Source: Texas Instruments
Regardless of the topology selected, matrix converters require bidirectional switches, formed by connecting two GaN or silicon carbide (SiC) switches with a common drain or common source. Bidirectional GaN switches are emerging devices, integrating two GaN devices with common drains and providing bidirectional control with a single device.
Matrix converters
Matrix converters use single-stage power conversion to achieve a unity power factor and DC/DC power conversion. They provide two major advantages in onboard charger applications:
- High power density through the use of single-stage conversion, while eliminating large boost inductors and bulky DC-link electrolytic capacitors.
- High power efficiency through reduced switching and conduction losses, and a single power-conversion stage.
There are still many challenges to overcome to expand the use of single-stage matrix converters to other applications. High ripple current is a concern for batteries that require a low ripple charging current. Matrix converters are also more susceptible to surge conditions given the lack of DC-link capacitors. Overall, however, matrix converters are gaining popularity, especially with the emergence of wide-band-gap switches and advanced control algorithms.
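To see why the ripple concern arises, recall that a unity-power-factor single-phase input delivers instantaneous power P(1 − cos 2ωt), which swings between zero and twice the average at double the line frequency; without a DC-link capacitor, that ripple appears in the battery current. The sketch below puts example numbers on it.

```python
# Why single-stage, single-phase chargers see large battery ripple: with
# unity power factor, instantaneous AC power is P * (1 - cos(2*w*t)), so it
# swings between 0 and 2*P at twice the line frequency. Without a DC-link
# capacitor that ripple flows into the battery. Values are examples only.

P_avg  = 6.6e3    # average charging power, W
V_batt = 400.0    # battery voltage, V
f_line = 50.0     # line frequency, Hz

i_avg  = P_avg / V_batt
i_peak = 2.0 * P_avg / V_batt
print(f"average battery current: {i_avg:5.1f} A")
print(f"peak battery current   : {i_peak:5.1f} A, rippling at {2 * f_line:.0f} Hz")
```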
Sean Xu currently works as a system engineer in Texas Instruments’ Power Design Services team to develop power solutions using advanced technologies for automotive applications. Previously, he was a system and application engineer working on digital control solutions for enterprise, data center, and telecom power. He earned a Ph.D. from North Dakota State University and a Master’s degree from Beijing University of Technology.
Related Content
- Power Tips #122: Overview of a planar transformer used in a 1-kW high-density LLC power module
- Power Tips #145: EIS applications for EV batteries
- Power Tips #102: CLLLC vs. DAB for EV onboard chargers
- Extreme Fast Charging Architectures From 350 kW to 3.75 MW
- The Power of Bidirectional Bipolar Junction Technology
The post Power Tips #149: Boosting EV charger efficiency and density with single-stage matrix converters appeared first on EDN.
Procurement tool aims to bolster semiconductor supply chain

An AI-enabled electronic components procurement tool claims to boost OEM productivity by leveraging a software platform that negotiates prices, tracks spending, and monitors savings in real time. Users upload their bill of materials (BOM) to the system, which leverages AI agents to discover form-, fit-, and function-compatible parts and more.
ChipHub, founded in 2023, is a components procurement tool that aims to optimize operations and savings for OEMs by addressing the supply chain issues at the system level.

Figure 1 A lack of control over component pricing, availability, and spending metrics makes supply chain operations challenging. Source: ChipHub
A standard components procurement tool
Envision a procurement platform that empowers OEMs to engage directly with suppliers, enhancing control over annual expenditures ranging from millions to billions of dollars. Such a platform streamlines supplier interactions, fostering efficient negotiations and monitoring of cost-saving metrics.
At a very high level, such a tool enables OEMs to negotiate commercial terms directly with suppliers, all on the platform with no emails or spreadsheets. It can support millions of SKUs and thousands of suppliers, built on four fundamental procurement premises:
- A scalable platform that facilitates supplier negotiations.
- It offers risk reduction because the component supplier knows who the end customer is.
- It employs generative AI to allow technical teams to evaluate devices or specs while extracting information from the datasheet and performing cross-part analysis.
- It provides record-keeping features to monitor savings for procurement staff.
Enter ChipHub, an AI-driven procurement tool tailored for hardware OEMs. Its agentic system leverages Model Context Protocol (MCP) to enable collaboration between multiple AI agents and humans to deliver the information supply chain professionals need. Features like this help reform component sourcing by offering time and cost efficiencies irrespective of the OEM’s scale.
Next, ChipHub offers the unified marketplace framework (UMF), which helps procurement teams across diverse sectors such as data centers, computing, networking, storage, power, consumer goods, industrial, and automotive. Users can implement UMF in a single day and start monitoring their spending and savings in real time.

Figure 2 The procurement tool enables OEMs to negotiate commercial terms directly with component suppliers and do it right on the platform. Source: ChipHub
Users such as procurement managers use the platform to search for specific parts, and the system conducts cross-part analysis to find compatible options, including real-time pricing and inventory data from various ecosystem partners. As a result, they don’t have to spend hours manually searching for data and building comparison matrices.
The platform uses a system of multiple AI agents, with human oversight, to navigate the supply chain and provide insights into part availability and sourcing options. “We don’t house any parts; we are just enabling supply-based management,” said Aftab Farooqi, founder and CEO of ChipHub.
Do I really know my supply chain? According to Farooqi, that’s the fundamental question for procurement managers. “If they don’t have control and visibility of their supply chain, they could be vulnerable,” he added. He also acknowledged that ChipHub isn’t a solution for all OEMs.
“They could keep doing things the way they are doing,” Farooqi said. “But they can still subscribe to this platform and have it as a validation tool.” For example, OEMs can cross-check the signal integrity analysis of a particular component.
Farooqi added that the platform can also be used by contract manufacturers (CMs) as a key tool for risk reduction because it enables spend tracking and collaboration features on the platform.
Related Content
- Three steps to better procurement
- Software tools manage selection of components
- Semiconductor Supply Chain: The Role of Lean & Muda
- E-Commerce Is the Future of Components Procurement
- Europe’s Semiconductor Supply Chain Unleashes the Power of Diamond
The post Procurement tool aims to bolster semiconductor supply chain appeared first on EDN.
PolarFire FPGA ecosystem targets embedded imaging

Microchip Technology has expanded its PolarFire FPGA–based smart embedded video ecosystem to enable low-power, high-bandwidth video connectivity. The offering consists of integrated development stacks that combine hardware evaluation kits, development tools, IP cores, and reference designs to deliver complete video pipelines for medical, industrial, and robotic vision applications. The latest additions include Serial Digital Interface (SDI) receive and transmit IP cores and a quad CoaXPress (CXP) bridge kit.

The ecosystem supports SMPTE-compliant SDI video transport at 1.5G, 3G, 6G, and 12G, along with HDMI-to-SDI and SDI-to-HDMI bridging for 4K and 8K video formats. PolarFire FPGAs enable direct SLVS-EC (up to 5 Gbps per lane) and CoaXPress 2.0 (up to 12.5 Gbps per lane) bridging without third-party IP. The nonvolatile, low-power architecture supports compact, fanless system designs with integrated hardware-based security features.
Native support for Sony SLVS-EC sensors provides an upgrade path for designs impacted by component discontinuations. Development is supported through Microchip’s Libero Design Suite and SmartHLS tools to simplify design workflows and reduce development time.
The following links provide additional information on PolarFire smart embedded vision, the CoaXPress bridge kit, and FPGA solution stacks.
The post PolarFire FPGA ecosystem targets embedded imaging appeared first on EDN.
Controllers accelerate USB 2.0 throughput

Infineon’s EZ-USB FX2G3 USB 2.0 peripheral controllers provide DMA data transfers from LVCMOS inputs to USB outputs at speeds of up to 480 Mbps. Designed for USB Hi-Speed host systems, the devices also support Full-Speed (12 Mbps) and Low-Speed (1.5 Mbps) operation.

Built on the company’s MXS40-LP platform, EZ-USB FX2G3 controllers integrate up to six serial communication blocks (SCBs), a crypto accelerator supporting AES, DES, SHA, and RSA algorithms for enhanced security, and a high-bandwidth data subsystem with up to 1024 KB of SRAM for USB data buffering. Additional on-chip memory includes up to 512 KB of flash, 128 KB of SRAM, and 128 KB of ROM.
The family includes four variants, ranging from basic to advanced, all featuring a 100-MHz Arm Cortex-M0+ CPU, while the top-end device adds a 150-MHz Cortex-M4F. The peripheral I/O subsystem accommodates QSPI configurable in single, dual, quad, dual-quad, and octal modes. SCBs can be configured as I2C, UART, or SPI interfaces. The devices provide up to 32 configurable USB endpoints, making them suitable for a wide range of consumer, industrial, and healthcare applications.
EZ-USB FX2G3 controllers are now available in 104-pin, 8×8-mm LGA packages.
The post Controllers accelerate USB 2.0 throughput appeared first on EDN.
Digital isolators enhance signal integrity

Diodes’ API772x RobustISO series of dual-channel digital isolators protects sensitive components in high-voltage systems. The devices provide reliable, robust isolation for digital control and communication signals in industrial automation, power systems, and data center power supplies.

Comprising six variants, the API772x series meets reinforced and basic isolation requirements across various standards, including VDE, UL, and CQC. The parts have a 5-kVRMS isolation rating for 1 minute per UL 1577 and an 8-kVPK rating per DIN EN IEC 60747-17 (VDE 0884-17). Maximum surge isolation voltage is 12.8 kVPK. According to Diodes’ isolation reliability calculations, the devices achieve a predicted operational lifetime exceeding 40 years, based on a capacitive isolation barrier more than 25 µm thick.
RobustISO digital isolators support a range of transmission protocols at data rates up to 100 Mbps. They feature a minimum common-mode transient immunity of 150 kV/µs, ensuring reliable signal transmission in noisy environments. Operating from a 2.5-V to 5.5-V supply, the devices typically draw 2.1 mA per channel at 100 Mbps. The series offers flexible digital channel-direction configurations and default output levels to accommodate diverse design requirements.
Prices for the API772x devices start at $0.46 each in lots of 1000 units.
RobustISO API772x product page
The post Digital isolators enhance signal integrity appeared first on EDN.
MOSFET ensures reliable AI server power

The RS7P200BM, a 100-V, 200-A MOSFET from Rohm, achieves a wide safe operating area (SOA) in a compact DFN5060-8S (5×6-mm) package. The device safely handles inrush current and overload conditions, ensuring stable operation in hot-swap circuits for AI servers using 48-V power supplies.

The RS7P200BM features RDS(on) of 4.0 mΩ (VGS = 10 V, Ta = 25 °C) while maintaining a wide SOA—7.5 A for a 10‑ms pulse width and 25 A for 1 ms at VDS = 48 V. This combination of low on-resistance and wide SOA, typically a trade-off, helps suppress heat generation. As a result, server power supply efficiency improves, while cooling requirements and overall electricity costs are reduced.
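As a rough sanity check on those figures, steady-state conduction loss follows I²·R_DS(on), while inrush stress is bounded by the SOA points rather than on-resistance; the load currents below are assumed examples.

```python
# Rough checks for a hot-swap pass FET (values are illustrative assumptions).
# Steady-state conduction loss: P = I^2 * Rds(on).

def conduction_loss_w(i_a, rds_on_ohm):
    return i_a ** 2 * rds_on_ohm

rds_on = 4.0e-3                  # 4.0 mOhm, per the figure quoted above
for i_load in (20, 50, 100):     # example steady-state load currents, A
    print(f"{i_load:3d} A -> {conduction_loss_w(i_load, rds_on):5.1f} W dissipated")
# During inrush the FET operates in its linear region, so the SOA points
# (for example 7.5 A at 48 V for 10 ms) set the limit, not Rds(on).
```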
Housed in a DFN5060-8S package, the RS7P200BM enables higher-density mounting than the previous DFN8080-8S design. It is now available in production quantities through online distributors including DigiKey and Mouser.
The post MOSFET ensures reliable AI server power appeared first on EDN.
Sensor drives accurate downhole drilling

The Tronics AXO315T1 MEMS accelerometer from TDK is designed for oil and gas downhole navigation in extreme environments. It features a ±14‑g input range and a 24‑bit digital SPI interface for measurement-while-drilling (MWD) applications exposed to temperatures up to 175°C.

Powered by a unique closed-loop architecture, this single-axis device achieves a tenfold improvement in vibration rejection compared with conventional open-loop MEMS accelerometers. It offers vibration rejection of 20 µg/g², noise density of 10 µg/√Hz, and a bias residual error of 1.7 mg over a temperature range of –30 °C to +175 °C.
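To put those specs in context, total error can be estimated by integrating noise density over an assumed bandwidth and multiplying the vibration-rejection coefficient by the square of an assumed vibration level; the sketch below uses example values for both.

```python
import math

# Back-of-envelope use of the noise and vibration-rejection specs quoted
# above; the bandwidth and vibration level are assumed example values.

noise_density_ug = 10.0    # ug/sqrt(Hz), from the brief
vre_ug_per_g2    = 20.0    # ug/g^2, from the brief
bandwidth_hz     = 100.0   # assumed measurement bandwidth
vib_g_rms        = 5.0     # assumed downhole vibration level, g rms

noise_rms_ug = noise_density_ug * math.sqrt(bandwidth_hz)
vre_bias_ug  = vre_ug_per_g2 * vib_g_rms ** 2

print(f"noise over {bandwidth_hz:.0f} Hz bandwidth : {noise_rms_ug:6.0f} ug rms")
print(f"vibration-induced bias       : {vre_bias_ug:6.0f} ug at {vib_g_rms:.0f} g rms")
# Near horizontal, roughly 1 mg of accelerometer bias corresponds to about
# 0.06 degrees of inclination error.
```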
The AXO315T1 provides a cost-effective, digital, and low-SWaP alternative to quartz accelerometers for inclination measurement in directional drilling tools. It is rated for more than 1000 hours of operation at 175°C and is housed in a hermetically sealed, ceramic surface-mount package.
AXO315T1 sensors and evaluation boards are available for sampling and customer trials.
The post Sensor drives accurate downhole drilling appeared first on EDN.
Peeking inside a moving magnet phono cartridge and stylii

How does a wiggling groove on a rotating record transform into two-channel sonic excellence? It all starts with the turntable cartridge, mated to one of several possible needle types.
Mid-last year, I confessed that I’d headed back down the analog “vinyl” record rabbit hole after several decades of sole dedication to various digital audio media sources (physical, downloaded, and streamed). All three turntables now in my possession employ moving magnet cartridge technology; here’s what I wrote back in July in comparing it against the moving coil alternative:
Two main cartridge options exist: moving magnet and higher-end moving coil. They work similarly, at least in concept: in conjunction with the paired stylus, they transform physical info encoded onto a record via groove variations into electrical signals for eventual reproduction over headphones or a set of speakers. Differences between the two types reflect construction sequence variance of the cartridge’s two primary subsystems—the magnets and coils—and are reflected (additionally influenced by other factors such as cantilever constituent material and design) not only in perceived output quality but also in other cartridge characteristics such as output signal strength and ruggedness.
Miny-but-mighty magnets
And here’s more on moving magnet cartridges from Audio-Technica’s website:
Audio-Technica brand moving magnet-type cartridges carry a pair of small, permanent magnets on their stylus assembly’s cantilever. The cantilever is the tiny suspended “arm” that extends at an angle away from the cartridge body. The cantilever holds the diamond tip that traces the record groove on one end and transfers the vibrations from the tip to the other end where the magnets are located. These tiny magnets are positioned between two sets of fixed coils of wire located inside the cartridge body via pole pieces that extend outward from the coils. This arrangement forms the electromagnetic generator.
The magnets are the heaviest part of the moving assembly, but by mounting the magnets near the fulcrum, or pivot point, of the assembly the amount of mass the stylus is required to move is minimized, allowing it to respond quickly and accurately to the motion created by the record groove. In addition to enhancing response, the low effective tip mass reduces the force applied to the delicate record groove, reducing the possibility of groove wall wear and damage. The moving magnet-type cartridge produces moderate to high output levels, works easily into standard phono inputs on a stereo amplifier or receiver and has a user-replaceable stylus assembly. These cartridges have a robust design, making them an excellent choice for demanding applications such as live DJ, radio broadcasts and archiving.
The associated photo is unfortunately low-res and otherwise blurry:

Here’s a larger, clearer one, which I’d found within a tutorial published by retailer Crutchfield:
Inexpensively assuaging curiosity
Ever since I started dabbling with vinyl again, I’d been curious to take a moving magnet cartridge apart and see what was inside. I got my chance when I found a brand new one, complete with a conical stylus, on sale for $18.04 on eBay. It’s the AT3600L, the standalone version of the cartridge that comes pre-integrated with my Audio-Technica AT-LP60XBT turntable’s tonearm:
Here are some “stock” images of the AT3600L mated to the standard ATN3600LC conical stylus (with the protective plastic sleeve still over the needle):


This next set of shots accompanied the eBay post which had caught my eye (and wallet):

And, last but not least, here are some snaps of our dissection patient, first bagged as initially received:

then unbagged but still encased, and as usual (as well as with photos that follow) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

along with mounting hardware at the bottom:

and finally, free from plastic captivity:






Next, let’s pop off the stylus and take a gander at its conical needle tip:
along with the cantilever and pivot assembly:
If you’ve already read my July coverage, you know that I’d also picked up an easily swappable:

elliptical stylus, the Pfanstiehl 4211-DE, which promised enhanced sonic quality:



but ended up being notably less tolerant than its conical sibling of any groove defects. Some of this functional variance, I noted back in July, is inherent to the needles’ structural deviations:
Because conical styli only ride partway down in the record groove, they supposedly don’t capture all the available fidelity potential with pristine records. But that same characteristic turns out to be a good thing with non-pristine records, for which all manner of gunk has accumulated over time in the bottom of the groove. By riding above the dross, the conical needle head doesn’t suffer from its deleterious effects.
But, as it turns out, the Pfanstiehl 4211-DE itself was also partly to “blame”. It reportedly works best with turntables based on the standalone AT3600L cartridge, whose tracking force and anti-skating settings are user-adjustable and lighter than the fixed, non-adjustable settings of the fully integrated AT-LP60XBT turntable.
I resold the barely used Pfanstiehl 4211-DE on eBay and went with Audio-Technica’s (modestly) more pricey ATN3600LE elliptical stylus instead, which explicitly documented its compatibility with the AT-LP60 turntable series and indeed worked notably better with my setup:


Back to the ATN3600LC conical stylus. Two interior views showcase the magnets called out in the earlier concept image:
And here’s where they mate with the cartridge itself (with associated coils presumably inside, to be seen shortly):


Next, let’s remove the screw that holds the top black plastic mounting assembly in place:



One more look at the connections at the back, with markings now visible:

And now, let’s peel away the metal casing, focusing attention on the top-side seam:
With that, the insides come right out:
That was a fun and informative, not to mention inexpensive, project that satisfied my curiosity. I hope it did the same for you. Sound off with your thoughts in the comments, please!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Hardware alterations: Unintended, apparent advantageous adaptations
- Mega-cool USB-based turntable
- Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization
The post Peeking inside a moving magnet phono cartridge and stylii appeared first on EDN.
Combine two TL431 regulators to make versatile current mirror

Various designs for current mirror circuits have been an active topic recently here in Design Ideas (DIs). Usually, the mirror designer’s aim is to make the mirror’s input and output currents accurately equal, but Figure 1 shows one that takes a tangent. Being immune to traditional current mirror bugaboos (Early effect, etc.), it can achieve the equality criterion quite well, but it also has particular versatility in applications where the input and output currents deliberately differ.
Figure 1 The R1/R2 resistance ratio sets the I2/I1 current ratio.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Here’s the backstory: A while back, I published a DI that used the venerable family of TLx431 shunt voltage regulators as programmable current regulators: “Precision programmable current sink.”
Figure 1 demonstrates their versatility again, this time combining two of the beasties to make a programmable gain current mirror.
The choice between the 2.5-V reference voltage TL431 and the 1.24-V TLV431 can be based on their different current and voltage ratings. For current: 1 mA to 100 mA for the TL versus 100 µA to 15 mA for the TLV. For voltage: 2.5 V to 36 V for the TL versus 1.24 V to 6 V for the TLV.
Note that both I1 and I2 must fall within those respective current numbers for useful regulation (and reflection!) to occur. Minimum mirror input voltage = Vref + I2R2.
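To make these constraints concrete, here is a minimal Python sketch that applies the relationships stated above. It assumes the mirror gain is I2 = I1 × R1/R2 (the natural reading of the Figure 1 caption) and uses the minimum-input-voltage expression from the text; the example component values are illustrative, not taken from the schematic.

```python
# Sketch: check a TL431-based mirror design against the stated limits.
# Assumes I2 = I1 * R1 / R2 (the natural reading of the Figure 1 caption)
# and Vin(min) = Vref + I2 * R2, per the text. Values are illustrative.

LIMITS = {
    "TL431":  {"vref": 2.5,  "i_min": 1e-3,   "i_max": 100e-3, "v_max": 36.0},
    "TLV431": {"vref": 1.24, "i_min": 100e-6, "i_max": 15e-3,  "v_max": 6.0},
}

def check_mirror(part, i1, r1, r2):
    lim = LIMITS[part]
    i2 = i1 * r1 / r2                    # mirror output current
    vin_min = lim["vref"] + i2 * r2      # minimum mirror input voltage
    ok = all(lim["i_min"] <= i <= lim["i_max"] for i in (i1, i2))
    return i2, vin_min, ok

# Example: a 2:1 gain mirror with a TL431, 5 mA in for 10 mA out (illustrative)
i2, vin_min, ok = check_mirror("TL431", i1=5e-3, r1=1000.0, r2=500.0)
print(f"I2 = {i2*1e3:.1f} mA, Vin(min) = {vin_min:.2f} V, within limits: {ok}")
```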
Of course, you must also accommodate the modest heat dissipation limits of these small devices. However, the maximum current (and power) capabilities can be extended virtually without limit by the simple ploy shown in Figure 2.

Figure 2 Booster transistor Q1 can handle current and power beyond 431 max Ic and dissipation limits.
And one more thing.
You might reasonably accuse Z1 of basically loafing since its only job is to provide bias voltage for R1 and Z2. But we can give it more interesting work to do with the trick shown in Figure 3. Not only can this scheme accommodate arbitrary I1/I2 ratios, but we can also add a fixed offset current! Here’s how.

Figure 3 Add six resistors and one transistor to two TL431s to make this 0/20 mA to 4/20 mA current loop converter. Z2 sums the 500-mV offset provided by Z1 with the 0 to 2 V made by current sensor R1, then scales that with R2 to output the 4 to 20 mA with a boost from Q1 that can accommodate loop voltages up to 36 V. Note R1, R2, R4, and R6 need to be precision types.
What results here is a (somewhat simpler) solution to an application borrowed from a previous DI by frequent contributor R Jayapal in: “A 0-20mA source current to 4-20mA loop current converter.”
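As a quick sanity check of the Figure 3 numbers, the following Python snippet reproduces the caption's mapping. R1 = 100 Ω follows from the caption (0 to 2 V for 0 to 20 mA in), while the 8-mA/V output scale factor is inferred from the 4-mA and 20-mA endpoints rather than read off the schematic.

```python
# Sketch: sanity-check the Figure 3 transfer function from its caption.
# 0-20 mA in develops 0-2 V across R1 (so R1 = 100 ohms), Z1 adds a fixed
# 500 mV offset, and the sum is scaled to 4-20 mA out. The 8 mA/V scale
# factor below is inferred from the endpoints, not read off the schematic.

R1 = 100.0            # ohms, from "0 to 2 V made by current sensor R1" at 20 mA
V_OFFSET = 0.5        # volts, fixed offset from Z1
SCALE_MA_PER_V = 8.0  # inferred: 0.5 V -> 4 mA, 2.5 V -> 20 mA

def loop_out_ma(i_in_ma):
    v_sum = V_OFFSET + (i_in_ma * 1e-3) * R1   # summed control voltage
    return v_sum * SCALE_MA_PER_V

for i_in in (0.0, 10.0, 20.0):
    print(f"{i_in:5.1f} mA in -> {loop_out_ma(i_in):5.1f} mA out")
# Expected: 0 -> 4 mA, 10 -> 12 mA, 20 -> 20 mA
```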
In electronic design, it seems there’s always more than one way to defrock a feline.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Precision programmable current sink
- A 0-20mA source current to 4-20mA loop current converter
- Silly simple precision 0/20mA to 4/20mA converter
The post Combine two TL431 regulators to make versatile current mirror appeared first on EDN.
The AI-tuned DRAM solutions for edge AI workloads

As high-performance computing (HPC) workloads become increasingly complex, generative artificial intelligence (AI) is being progressively integrated into modern systems, thereby driving the demand for advanced memory solutions. To meet these evolving requirements, the industry is developing next-generation memory architectures that maximize bandwidth, minimize latency, and enhance power efficiency.
Technology advances in DRAM, LPDDR, and specialized memory solutions are redefining computing performance, with AI-optimized memory playing a pivotal role in driving efficiency and scalability. This article examines the latest breakthroughs in memory technology and the growing impact of AI applications on memory designs.
Advanced memory architectures
Memory technology is advancing to meet the stringent performance requirements of AI, AIoT, and 5G systems. The industry is witnessing a paradigm shift with the widespread adoption of DDR5 and HBM3E, offering higher bandwidth and improved energy efficiency.
DDR5, with a per-pin data rate of up to 6.4 Gbps, delivers 51.2 GB/s per module, nearly doubling DDR4’s performance while reducing the voltage from 1.2 V to 1.1 V for improved power efficiency. HBM3E extends bandwidth scaling, exceeding 1.2 TB/s per stack, making it a compelling solution for data-intensive AI training models. However, it’s impractical for mobile and edge deployments due to excessive power requirements.
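For readers who want to see where the per-module figure comes from, the arithmetic is simply the per-pin data rate multiplied by the 64-bit module data bus, divided by eight bits per byte; the short Python check below also shows the DDR4-3200 figure it nearly doubles.

```python
# Sketch: where the per-module bandwidth figures come from.
# Peak bandwidth = per-pin data rate x bus width / 8 bits per byte.
# A standard (non-ECC) DIMM data bus is 64 bits wide.

def module_bandwidth_gbs(data_rate_gbps_per_pin, bus_width_bits=64):
    return data_rate_gbps_per_pin * bus_width_bits / 8.0

print("DDR5-6400:", module_bandwidth_gbs(6.4), "GB/s")   # 51.2 GB/s per module
print("DDR4-3200:", module_bandwidth_gbs(3.2), "GB/s")   # 25.6 GB/s, ~half of DDR5
```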

Figure 1 The above diagram chronicles memory scaling from MCU-based embedded systems to AI accelerators serving high-end applications. Source: Winbond
With LPDDR6 projected to exceed 150 GB/s by 2026, low-power DRAM is evolving toward higher throughput and energy efficiency, addressing the challenges of AI smartphones and embedded AI accelerators. Winbond is actively developing small-capacity DDR5 and LPDDR4 solutions optimized for power-sensitive applications around its CUBE memory platform, which achieves over 1 TB/s bandwidth with a significant reduction in thermal dissipation.
With anticipated capacity scaling to 8 GB per set or even higher (for example, a 4-Hi wafer-on-wafer, or WoW, stack based on one reticle size, which can achieve >70 GB of density and 40 TB/s of bandwidth), CUBE is positioned as a viable alternative to traditional DRAM architectures for AI-driven edge computing.
In addition, the CUBE sub-series, known as CUBE-Lite, offers bandwidth ranging from 8 to 16 GB/s (equivalent to LPDDR4x x16/x32), while operating at only 30% of the power consumption of LPDDR4x. Without requiring an LPDDR4 PHY, system-on-chips (SoCs) only need to integrate the CUBE-Lite controller to achieve bandwidth performance comparable to full-speed LPDDR4x. This not only eliminates the high cost of PHY licensing but also allows the use of mature process nodes such as 28 nm or even 40 nm, achieving performance levels of 12-nm node.
This architecture is particularly suitable for AI SoCs or AI MCUs that come integrated with NPUs, enabling battery-powered TinyML edge devices. Combined with Micro Linux operating systems and AI model execution, it can be applied to low-power AI image sensor processor (ISP) edge scenarios such as IP cameras, AI glasses, and wearable devices, effectively achieving both system power optimization and chip area reduction.
Furthermore, because such SoCs omit the LPDDR4 PHY and need only the CUBE-Lite controller, they can achieve smaller die sizes and improved system power efficiency. The target applications are the same as above: battery-operated AI SoCs, MCUs, MPUs, and NPUs running Micro Linux with an AI model, in end products such as AI ISPs for IP cameras, AI glasses, and wearable devices.

Figure 2 The above diagram chronicles the evolution of memory bandwidth with DRAM power usage. Source: Winbond
Memory bottlenecks in generative AI deployment
The exponential growth of generative AI models has created unprecedented constraints on memory bandwidth and latency. AI workloads, particularly those relying on transformer-based architectures, require extensive computational throughput and high-speed data retrieval.
For instance, deploying Llama 2 7B in INT8 mode requires at least 7 GB of DRAM, or 3.5 GB in INT4 mode, which highlights the limitations of conventional mobile memory capacities. Current AI smartphones utilizing LPDDR5 (68 GB/s bandwidth) face significant bottlenecks, necessitating a transition to LPDDR6. However, interim solutions are required to bridge the bandwidth gap until LPDDR6 commercialization.
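The footprint figures follow directly from the parameter count multiplied by the bytes stored per weight, and the same numbers show why bandwidth, not just capacity, is the constraint. The rough Python estimate below ignores the KV cache and activations and treats the token rate purely as a memory-streaming bound, so it is illustrative only.

```python
# Sketch: weight-storage footprint and a rough bandwidth bound for a 7B model.
# Footprint = parameters x bytes per weight (ignores KV cache and activations).

PARAMS = 7e9                      # Llama 2 7B
BYTES_PER_WEIGHT = {"INT8": 1.0, "INT4": 0.5}
LPDDR5_BW_GBS = 68.0              # per the text, current AI-smartphone LPDDR5

for fmt, bpw in BYTES_PER_WEIGHT.items():
    gb = PARAMS * bpw / 1e9
    # Memory-bound decode must stream the weights once per token, so the
    # bandwidth sets a rough upper limit on token rate (illustrative only).
    tokens_per_s = LPDDR5_BW_GBS / gb
    print(f"{fmt}: ~{gb:.1f} GB of weights, <= ~{tokens_per_s:.0f} tokens/s at {LPDDR5_BW_GBS} GB/s")
```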
At the system level, AI edge applications in robotics, autonomous vehicles, and smart sensors impose additional constraints on power efficiency and heat dissipation. While JEDEC standards continue to evolve toward DDR6 and HBM4 to improve bandwidth utilization, custom memory architectures provide scalable, high-performance alternatives that align with AI SoC requirements.
Thermal management and energy efficiency constraints
Deploying large-scale AI models on end devices introduces significant thermal management and energy efficiency challenges. AI-driven workloads inherently consume substantial power, generating excessive heat that can degrade system stability and performance.
- On-device memory expansion: Mobile devices must integrate higher-capacity memory solutions to minimize reliance on cloud-based AI processing and reduce latency. Traditional DRAM scaling is approaching physical limits, necessitating hybrid architectures integrating high-bandwidth and low-power memory.
- HBM3E vs CUBE for AI SoCs: While HBM3E achieves high throughput, its power requirements exceed 30 W per stack, making it unsuitable for mobile and edge applications. Here, memory solutions like CUBE can serve as an alternative last level cache (LLC), reducing on-chip SRAM dependency while maintaining high-speed data access. The shift toward sub-7-nm logic processes exacerbates SRAM scaling limitations, emphasizing the need for new cache solutions.
- Thermal optimization strategies: As AI processing generates heat loads exceeding 15 W per chip, effective power distribution and dissipation mechanisms are critical. Custom DRAM solutions that optimize refresh cycles and employ TSV-based packaging techniques contribute to power-efficient AI execution in compact form factors.
DDR5 and DDR6: Accelerating AI compute performance
The evolution of DDR5 and DDR6 represents a significant inflexion point in AI system architecture, delivering enhanced memory bandwidth, lower latency, and greater scalability.
DDR5, with 8-bank group architecture and on-die error correction code (ECC), provides superior data integrity and efficiency, making it well-suited for AI-enhanced PCs and high-performance laptops. With an effective peak transfer rate of 51.2 GB/s per module, DDR5 enables real-time AI inference, seamless multitasking, and high-speed data processing.
DDR6, still in development, is expected to introduce bandwidth exceeding 200 GB/s per module, a 20% reduction in power consumption along with optimized AI accelerator support, further pushing AI compute capabilities to new limits.

Figure 3 CUBE, an AI-optimized memory solution, leverages through-silicon via (TSV) interconnects to integrate high-bandwidth memory characteristics with a low-power profile. Source: Winbond
The convergence of AI-driven workloads, performance scaling constraints, and the need for power-efficient memory solutions is shaping the transformation of the memory market. Generative AI continues to accelerate the demand for low-latency, high-bandwidth memory architectures, leading to innovation across DRAM and custom memory solutions.
As AI models become increasingly complex, the need for optimized, power-efficient memory architectures will only grow more critical. Here, technological innovation will ensure the commercial realization of cutting-edge AI memory solutions, bridging the gap between high-performance computing and sustainable, scalable memory devices.
Jacky Tseng is deputy director of CMS CUBE product line at Winbond. Prior to joining Winbond in 2011, he served as a senior engineer at Hon-Hai.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
The post The AI-tuned DRAM solutions for edge AI workloads appeared first on EDN.
Sensing and power-generation circuits for a batteryless mobile PM2.5 monitoring system

Editor’s note:
In this DI, high school student Tommy Liu builds a vehicle-mounted particulate matter monitoring system powered by wind energy harvested from vehicle motion and buffered by an integrated supercapacitor.
Particulate matter (PM2.5) monitoring is a key public-health metric. Vehicle- and drone-mounted sensors can expand coverage, but many existing systems are too costly for broad deployment. This Design Idea (DI) presents a prototype PM2.5 sensing and power-generation front end for a low-cost, batteryless, vehicle-mounted node.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Two constraints drive the circuit design:
- Minimizing power to enable batteryless operation
- Harvesting and regulating power from a variable source
Beyond choosing a low-power sensor and MCU, the firmware duty-cycles aggressively: the PM2.5 sensor is fully powered down between samples, and the MCU enters deep sleep. A high-side MOSFET switch disconnects the sensor supply and avoids the ground bounce risk of low-side switching.
Low-cost micro wind turbines can harvest energy from vehicle motion, but available power is limited at typical road speeds, and the output voltage varies with airflow. A supercapacitor provides energy buffering, while a DC-DC buck converter clamps and regulates the rail for reliable sensor/MCU operation.
The circuits were built and tested, and the results highlight current limitations and next steps for improvement.
PM2.5 Sensor and MCU Circuit
Figure 1 shows the sensing schematic: a PM2105 PM2.5 sensor, an ESP32-C3 module, and an FQP27P06 high-side PMOS switch.

Figure 1 Sensing circuit schematic with a PM2105 PM2.5 sensor, an ESP32-C3 module, and an FQP27P06 high-side PMOS switch.
Calculating the power budget
A PM2105 (Cubic Sensor and Instrument) was chosen for low operating current (53 mA) and fast data acquisition (4 s). To size the batteryless budget, we measured total sensing-circuit power (PM2105 plus ESP32-C3) using an alternating on-and-standby test pattern (Figure 2).

Figure 2 Sensing circuit power consumption in operating and standby mode.
Power peaks during the first ~4 s after sensor power-up and during sensor operation. This startup transient occurs as the sensor ramps the laser intensity and fan speed to stabilize readings. With a 5-V supply, the measured average power is ~650 mW for the first 4 s and ~500 mW for the remaining on interval. In standby, power drops to ~260 mW, with most consumption from the MCU.
Because the PM2105 settles in ~4 s, the firmware samples for ~4 s, then switches the sensor off and puts the MCU into deep sleep until the next sample time.
Operating and deep sleep modes
The MCU is based on Espressif Systems’ ESP32-C3, a low-power SoC. It controls the sensor, acquires PM2.5 data, and transmits it to the vehicle gateway, router, or portable hotspot. Both devices support I2C and UART, but UART was used to tolerate longer cable runs in a vehicle.
To fully remove PM2105 power between samples, an FQP27P06 PMOS high-side switch disconnects VCC (Figure 1). A low-side switch would also cut power, but digital switching currents can create ground IR drop and ground bounce. In sensing systems, ground noise is typically more damaging than supply ripple. FQP27P06 was selected for low on-resistance and high current capability.
In deep sleep mode, the MCU GPIOs float (high impedance). A 33 kΩ pull-down and an inverter force the PMOS gate to a defined OFF state during sleep. Because the ESP32-C3 uses 3.3 V GPIO, the high-side gate drive needs level shifting. A TI SN74LV1T04 provides both inversion and level shifting in one device.
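The duty cycle described above reduces to a short control loop. The MicroPython-flavored sketch below is an illustration of that loop, not the project's actual firmware: the pin numbers, UART wiring, and raw-frame read are placeholder assumptions, and PM2105 frame parsing is omitted.

```python
# MicroPython-style sketch of the sample/sleep duty cycle (illustrative only:
# pin numbers and UART settings are placeholders, not the article's firmware).
from machine import Pin, UART, deepsleep
import time

SENSOR_EN = Pin(4, Pin.OUT, value=0)   # drives the SN74LV1T04 inverter input;
                                       # high -> inverter output low -> PMOS on
uart = UART(1, baudrate=9600, tx=21, rx=20)  # UART link to the PM2105

SAMPLE_TIME_S = 4           # PM2105 settles in ~4 s
SLEEP_INTERVAL_MS = 26_000  # 4 s awake + 26 s sleep = ~30 s sample period

SENSOR_EN.value(1)          # power up the sensor via the high-side PMOS
time.sleep(SAMPLE_TIME_S)   # wait for laser/fan startup transient to settle
frame = uart.read()         # grab a raw measurement frame (parsing omitted)
SENSOR_EN.value(0)          # cut sensor power again

# ... log or transmit 'frame' here ...

# Deep sleep resets the MCU; on wake, the script reruns from the top.
# GPIOs float during sleep, and the 33 kOhm pull-down plus inverter keep
# the PMOS gate in a defined OFF state.
deepsleep(SLEEP_INTERVAL_MS)
```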
Batteryless power generation
Wind turbine
Vehicle motion provides airflow, making a micro wind turbine a convenient harvester. A small brushed DC motor and rotor act as the turbine (Figure 3). Assuming vehicle speeds of ~15 to 65 mph, a representative average headwind speed is ~30 to 40 mph.
Figure 3 Micro wind turbine comprising a DC motor and rotor.
At 35 mph, the turbine under test delivered ~3.2 V and ~135 mW into 41 Ω, selected to approximate the average MCU and sensor load. That output is insufficient for a regulated 5-V rail and the ~650-mW startup peak.
Supercapacitor
To bridge this gap, a 10-F supercapacitor stores energy and buffers the turbine from the sensing load. Because turbine output varies with speed and the MCU and sensor maximum voltage must remain below 5.5 V, the turbine cannot be connected directly to the sensing circuit. We used an LM2596 adjustable buck-converter module set to 5 V to keep the voltage within limits.
Figure 4 shows the power-generation schematic. A series Schottky diode (D1) protects the buck stage if the turbine reverses polarity during reverse rotation.

Figure 4 Power-generation system where a series Schottky diode (D1) protects the buck stage if the turbine reverses polarity during reverse rotation.
During sensor operation, the supercapacitor supplies load current. The supercapacitor droop per sample is:
ΔV = (I × T) / C
where I is the average operating current, T is the operating time per sample, and C is the supercapacitor capacitance.
When the sensing circuit is on, the turbine voltage can fall below 5 V, for example, ~3.2 V at 35 mph, and the LM2596 output correspondingly drops. Because LM2596 is an asynchronous (diode-rectified) buck converter, reverse current is blocked when the converter output falls below the supercapacitor voltage, preventing the supercapacitor from discharging back into the converter.
After sampling, the sensor is powered down, and the MCU enters deep sleep. With the load reduced, the turbine voltage rises. At 35 mph, the turbine produces ~9 V while charging a 10 F supercapacitor through the LM2596 with no additional load.
The buck output regulates at 5 V and charges the supercapacitor. Near 5 V, the measured charge rate is ~2.3 mV/s. Therefore, the time to recover the ~50 mV droop from a sample is:
t = 50 mV ÷ 2.3 mV/s ≈ 22 s
This supports ~30 s sampling at ~35 mph. Vehicle speed variation will affect the achievable sampling rate, but for public health PM2.5 monitoring, update intervals on the order of 1 minute are often sufficient.
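A short Python cross-check ties these numbers together, approximating the operating current from the measured ~650 mW at 5 V and using the measured 2.3 mV/s charge rate:

```python
# Sketch: cross-check the supercapacitor droop and recovery numbers.
C = 10.0                 # F, supercapacitor
V_RAIL = 5.0             # V, regulated rail
P_ON = 0.65              # W, measured average during the ~4 s sample
T_ON = 4.0               # s, sample duration
CHARGE_RATE = 2.3e-3     # V/s, measured charge rate near 5 V

i_on = P_ON / V_RAIL                 # ~0.13 A average operating current
droop = i_on * T_ON / C              # dV = I*T/C, ~52 mV (matches the ~50 mV)
recovery = droop / CHARGE_RATE       # ~23 s, within the 30 s sample period

print(f"droop ~{droop*1e3:.0f} mV, recovery ~{recovery:.0f} s")
```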
Results and future work
Figure 5 shows the prototype sensing PCB with the PM2105, ESP32-C3 circuitry, and a 10-F supercapacitor on the same board. Figure 6 shows the LM2596 buck module configured for a 5-V output.

Figure 5 Prototype sensing circuit board with the PM2105, ESP32-C3 circuitry, and a 10-F supercapacitor.

Figure 6 LM2596 DC-DC down-converter configured for a 5-V output.
A steady wind supply provided continuous airflow at ~35 mph, verified by an anemometer, directed at the turbine blade. The MCU powered up the sensor and acquired a PM2.5 sample every 30 s. Before the test, the supercapacitor was precharged to 5 V using USB power. During the run, the system was powered only by the supercapacitor and the wind turbine.
Over a 1-hour run, the system reported PM2.5 data at a 30-s sampling interval. Figure 7 shows an excerpt of the collected PM data.

Figure 7 Excerpt of the collected PM data (sensor not calibrated).
Next, the system will be mounted on a test vehicle for road testing. One limitation is the micro wind turbine’s low output power. Once the supercapacitor is charged to 5 V, the system can sustain operation, but initial charging using only the turbine is slow. With a 10-F supercapacitor, the initial charge time can be on the order of ~30 minutes. Reducing capacitance shortens charge time, but larger capacitance helps ride through low-speed driving and stops.
In this prototype, PM data were logged locally and downloaded over USB after the test was completed. In deployment, Wi-Fi transmission typically increases MCU energy per sample. The connection and transmission can add up to ~1 s of active time. These factors increase the required harvested power. Future work focuses on increasing harvested power using a higher-power motor, an improved rotor, or multiple turbines in parallel. The goal is a self-starting system that charges the supercapacitor within a few minutes at typical road speeds.
Acknowledgement
I gratefully acknowledge Professor Shijia Pan, the founder of the PANS Lab (Pervasive Autonomous Networked Systems Lab) at the University of California, Merced, and my Ph.D. mentor Shubham Rohal for their mentorship, guidance, and technical feedback throughout this project. In addition, I gratefully acknowledge Philip for the generous donation of the test equipment used in this work.
Tommy Liu is currently a senior at Monta Vista High School (MVHS) with a passion for electronics. A dedicated hobbyist since middle school, Tommy has designed and built various projects ranging from FM radios to simple oscilloscopes and signal generators for school use. He aims to pursue Electrical Engineering in college and aspires to become a professional engineer, continuing his exploration in the field of electronics.
Related Content
- Building a low-cost, precision digital oscilloscope—Part 1
- Building a low-cost, precision digital oscilloscope – Part 2
- Let’s clear the air: analog and power management of environmental sensor networks
- A groovy apparatus for calibrating miniature high sensitivity anemometers
References
- Espressif Systems, “ESP32-C3 Series Datasheet,” Version 2.2, September 4, 2025. https://documentation.espressif.com/esp32-c3_datasheet_en.html
- Cubic Sensor and Instrument Co., Ltd., “PM2105L Laser Particle Sensor Module Specification,” Version 0.1, March 21, 2022. https://www.en.gassensor.com.cn/Uploads/Blocks/Cubic-PM2105L-Laser-Particle-Sensor-Module-Specification.pdf
- Texas Instruments, “LM2596 SIMPLE SWITCHER® Power Converter 150-kHz 3-A Step-Down Voltage Regulator,” datasheet, Rev. G, March 2023. https://www.ti.com/lit/gpn/lm2596
- Rohal, S., Zhang, J., Montgomery-Yale, F., Lee, D. Y., Schertz, S., and Pan, S., “Self-Adaptive Structure Enabled Energy-Efficient PM2.5 Sensing,” 13th International Workshop on Energy Harvesting and Energy-Neutral Sensing Systems (ENSsys ’25), May 6–9, 2025. https://doi.org/10.1145/3722572.3727928
The post Sensing and power-generation circuits for a batteryless mobile PM2.5 monitoring system appeared first on EDN.
How to implement MQTT on a microcontroller

One of the original and most important reasons Message Queuing Telemetry Transport (MQTT) became the de facto protocol for the Internet of Things (IoT) is its ability to connect and control devices that are not directly reachable over the Internet.
In this article, we’ll discuss MQTT in an unconventional way. Why does it exist at all? Why is it popular? If you’re about to implement a device management system, is MQTT the best fit, or are there better alternatives?

Figure 1 This is how incoming connections are blocked. Source: Cesanta Software
In real networks—homes, offices, factories, and cellular networks—devices typically sit behind routers, network address translation (NAT) gateways, or firewalls. These barriers block incoming connections, which makes traditional client/server communication impractical (Figure 1).
However, as shown in the figure below, even the most restrictive firewalls usually allow outgoing TCP connections.

Figure 2 Even the most restrictive firewalls usually allow outgoing TCP connections. Source: Cesanta Software
MQTT takes advantage of this: instead of requiring the cloud or the user to initiate a connection into the device, the device initiates an outbound connection to a publicly visible MQTT broker. Once this outbound connection is established, the broker becomes a communication hub, enabling control, telemetry, and messaging in both directions.

Figure 3 This is how devices connect out but servers never connect in. Source: Cesanta Software
This simple idea—devices connect out, servers never connect in—solves one of the hardest networking problems in IoT: how to reach devices that you cannot address directly.
To summarize:
- The device opens a long-lived outbound TCP connection to the broker.
- Firewalls/NAT allow outbound connections, and they maintain the state.
- The broker becomes the “rendezvous point” accessible to all.
- The server or user publishes messages to the broker; the device receives them over its already-open connection.
Publish/subscribe
Every MQTT message is carried inside a binary frame with a very small header, typically only a few bytes. These headers contain a command code—called a control packet type—that defines the semantic meaning of the frame. MQTT defines only a handful of these commands, including:
- CONNECT: The client initiates a session with the broker.
- PUBLISH: The client sends a message to a named topic.
- SUBSCRIBE: The client registers interest in one or more topics.
- PINGREQ/PINGRESP: Keep-alive messages that maintain the connection.
- DISCONNECT: The client ends the session cleanly.
Because the headers are small and fixed in structure, parsing them on a microcontroller (MCU) is fast and predictable. The payload that follows these headers can be arbitrary data, from sensor readings to structured messages.
So, the publish/subscribe pattern works like this: a device publishes a message to a topic (a string such as factory/line1/temp). Other devices subscribe to topics they care about. The broker delivers messages to all subscribers of each topic.

Figure 4 The model shows decoupling of senders and receivers. Source: Cesanta Software
As shown above, the model decouples senders and receivers in three important ways:
- In time: Publishers and subscribers do not need to be online simultaneously.
- In space: Devices never need to know each other’s IP addresses.
- In message flow: Many-to-many communication is natural and scalable.
For small IoT devices, the publish/subscribe model removes networking complexity while enabling structured, flexible communication. Combined with MQTT’s minimal framing overhead, it achieves reliable messaging even on low-bandwidth or intermittent links.
Request/response over MQTT
MQTT was originally designed as a broadcast-style protocol, where devices publish telemetry to shared topics and any number of subscribers can listen. This publish/subscribe model is ideal for sensor networks, dashboards, and large-scale IoT systems where data fan-out is needed. However, MQTT can also support more traditional request/response interactions—similar to calling an API—by using a simple topic-based convention.
To implement request/response, each device is assigned two unique topics, typically embedding the device ID:
Request topic (RX): devices/DEVICE_ID/rx, used by the server or controller to send a command to the device.
Response topic (TX): devices/DEVICE_ID/tx, used by the device to send results back to the requester.
When the device receives a message on its RX topic, it interprets the payload as a command, performs the corresponding action, and publishes the response on its TX topic. Because MQTT connections are persistent and outbound from the device, this pattern works even for devices behind NAT or firewalls.
This structure effectively recreates a lightweight RPC-style workflow over MQTT. The controller sends a request to a specific device’s RX topic; the device executes the task and publishes a response to its TX topic. The simplicity of topic naming allows the system to scale cleanly to thousands or millions of devices while maintaining separation and addressing.
With it, it’s easy to implement remote device control using MQTT. One of the practical choices is to use JSON-RPC for the request/response.
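A minimal sketch of this convention, written in MicroPython against the umqtt.simple client, is shown below. The broker address, device ID, and the "toggle_led" method are placeholders for illustration; they are not taken from the article.

```python
# MicroPython-style sketch of the RX/TX request/response convention using the
# umqtt.simple client. Broker address, device ID, and the "toggle_led" method
# are placeholders for illustration.
from umqtt.simple import MQTTClient
import ujson

DEVICE_ID = "dev123"
RX = b"devices/" + DEVICE_ID.encode() + b"/rx"   # commands arrive here
TX = b"devices/" + DEVICE_ID.encode() + b"/tx"   # responses go back here

client = MQTTClient(DEVICE_ID, "broker.example.com")

def on_msg(topic, msg):
    req = ujson.loads(msg)                       # JSON-RPC-style request
    if req.get("method") == "toggle_led":
        result = {"jsonrpc": "2.0", "id": req.get("id"), "result": "ok"}
    else:
        result = {"jsonrpc": "2.0", "id": req.get("id"),
                  "error": {"code": -32601, "message": "method not found"}}
    client.publish(TX, ujson.dumps(result).encode())  # reply on the TX topic

client.set_callback(on_msg)
client.connect()            # outbound connection: works behind NAT/firewalls
client.subscribe(RX)
while True:
    client.wait_msg()       # block until the broker delivers the next command
```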
Secure connectivity
MQTT includes basic authentication features such as username/password and transport layer security (TLS) encryption, but the protocol itself offers very limited isolation between clients. Once a client is authenticated, it can typically subscribe to wildcard topics and receive all messages published on the broker. Also, it can publish to any topic, potentially interfering with other devices.
Because MQTT does not define fine-grained access control in its standard, many vendors implement non-standard extensions to ensure proper security boundaries. For example, AWS IoT attaches per-client access control lists (ACLs) tied to X.509 certificates, restricting exactly which topics a device may publish or subscribe to. Similar policy frameworks exist in EMQX, HiveMQ, and other enterprise brokers.
In practice, production systems must rely on these vendor-specific mechanisms to enforce strong authorization and prevent devices from accessing each other’s data.
MQTT implementation on a microcontroller
MCUs are ideal MQTT clients because the protocol is lightweight and designed for low-bandwidth, low-RAM environments. Implementing MQTT on an MCU typically involves integrating three components: a TCP/IP stack (Wi-Fi, Ethernet, or cellular), an MQTT library, and application logic that handles commands and telemetry.
After establishing a network connection, the device opens a persistent outbound TCP session to an MQTT broker and exchanges MQTT frames—CONNECT, PUBLISH, and SUBSCRIBE—using only a few kilobytes of memory. Most implementations follow an event-driven model: the device subscribes to its command topic, publishes telemetry periodically, and maintains the connection with periodic ping messages. With this structure, even small MCUs can participate reliably in large-scale IoT systems.
An example of a fully functional but tiny MQTT client can be found in the Mongoose repository: mqtt-client.
WebSocket server: An alternative
If all you need is a clean way for your devices to talk to your back-end, MQTT can feel like bringing a whole toolbox just to tighten one screw. JSON-RPC over WebSocket keeps things minimal: devices open a WebSocket, send tiny JSON-RPC method calls, and get direct responses. No brokers, no topic trees, and no QoS semantics to wrangle.
The nice part is how naturally it fits into a modern back-end. The same service handling the WebSocket connections can also expose a familiar REST API. That REST layer becomes the human- and script-friendly interface, while JSON-RPC over WebSocket stays as the fast “device side” protocol.
The back-end basically acts as a bridge: REST in, RPC out. This gives you all the advantages of REST—a massive ecosystem of tools, gateways, authentication systems, monitoring, and automation—without forcing your devices to speak it.
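For illustration, here is a minimal sketch of the device side of that pattern, written with the CPython websockets package; the endpoint URL and the "report_temp" method are placeholder assumptions, not a real back-end API. On an MCU, the same flow would typically be implemented with the platform's embedded WebSocket client instead.

```python
# Sketch of the device side of JSON-RPC over a persistent WebSocket, using the
# CPython "websockets" package for illustration (the endpoint URL and the
# "report_temp" method are placeholders, not a real back-end API).
import asyncio, json, websockets

async def run_device():
    async with websockets.connect("wss://backend.example.com/device") as ws:
        # Send one JSON-RPC call and wait for its direct response.
        await ws.send(json.dumps(
            {"jsonrpc": "2.0", "id": 1, "method": "report_temp", "params": {"c": 21.5}}))
        reply = json.loads(await ws.recv())
        print("response:", reply)

        # The same open socket also carries calls *from* the back-end:
        async for msg in ws:
            req = json.loads(msg)
            await ws.send(json.dumps(
                {"jsonrpc": "2.0", "id": req.get("id"), "result": "ok"}))

asyncio.run(run_device())
```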

Figure 5 This is what the REST to JSON-RPC over WebSocket bridge architecture looks like. Source: Cesanta Software
This setup also avoids one of MQTT’s classic security footguns, where a single authenticated client can accidentally gain visibility or access to messages from the entire fleet just by subscribing to the wrong topic pattern.
With a REST/WebSocket bridge, every device connection is isolated, and authentication happens through well-understood web mechanisms like JWTs, mTLS, API keys, OAuth, or whatever your infrastructure already supports. It’s a much more natural fit for modern access control models.
Beyond typical MQTT setup
This article offers a fresh look at IoT communication, going beyond the typical MQTT setup. It explains why MQTT is great for devices behind NAT/firewalls (devices only connect out to the broker) and highlights that the protocol’s lack of fine-grained access control can create security headaches. It also outlines an alternative solution: JSON-RPC over a single persistent WebSocket connection.
For a practical application demo of these MQTT principles, see the video tutorial that explains how to implement an MQTT client on an MCU and build a web UI that displays MQTT connection status, provides connect/disconnect control, and lets you publish MQTT messages to any topic.
In this step-by-step tutorial, we use an STM32 Nucleo-F756ZG development board with Mongoose Wizard—though the same method applies to virtually any other MCU platform—and a free HiveMQ Public Broker. This tutorial is suitable for anyone working with embedded systems, IoT devices, or the STM32 development stack who is looking to integrate MQTT networking and a lightweight web UI dashboard into their firmware.
Sergey Lyubka is co-founder and technical director of Cesanta Software Ltd. He is known as the author of the open-source Mongoose Embedded Web Server and Networking Library (https://mongoose.ws), which has been on the market since 2004 and has over 12k stars on GitHub.
Related Content
- Avoiding MQTT pitfalls
- Connecting correctly in MQTT
- MQTT essentials – Scenarios and the pub-sub pattern
- Device to Cloud: MQTT and the power of topic notation
- How to Control a Servo Motor Using Your Smartphone with the MQTT Protocol and the Raspberry Pi
The post How to implement MQTT on a microcontroller appeared first on EDN.
Windows 10: Support hasn’t yet ended after all, but Microsoft’s still a fickle-at-best friend

Bowing to user backlash, Microsoft eventually relented and implemented a one-year Windows 10 support-extension scheme. But (limited duration) lifelines are meaningless if they’re DOA.
Back in November, within my yearly “Holiday Shopping Guide for Engineers”, the first suggestion in my list was that you buy you and yours Windows 11-compatible (or alternative O/S-based) computers to replace existing Windows 10-based ones (specifically ones that aren’t officially Windows 11-upgradable, that is). Unsanctioned hacks to alternatively upgrade such devices to Windows 11 do exist, but echoing what I first wrote last June (where I experimented for myself, but only “for science”, mind you), I don’t recommend relying on them for long-term use, even assuming the hardware-hack attempt is successful at all, that is:
The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.
A mostly compatible computing stable
Fortunately, all of my Windows-based computers are Windows 11-compatible (and already upgraded, in fact), save for two small form factor systems, one (Foxconn’s nT-i2847, along with its companion optical drive), a dedicated-function Windows 7 Media Center server:

(mine are white, and no, the banana’s not normally a part of the stack):

and the other, an XCY X30, largely retired but still hanging around to run software that didn’t functionally survive the Windows 10-to-11 transition:
And as far as I can recall, all of the CPUs, memory DIMMs, SSDs, motherboards, GPUs and other PC building blocks still lying around here waiting to be assembled are Windows 11-compliant, too.
One key exception to the rule
My wife’s laptop, a Dell Inspiron 5570 originally acquired in late 2019, is a different matter:
Dell’s documentation initially indicated that the Inspiron 5570 was a valid Windows 11 upgrade candidate, but the company later backtracked due to partner Microsoft’s CPU and TPM requirements, which grew ever more stringent over time. Our secondary strategy was to delay its demise by a year by taking advantage of one of Microsoft’s Windows 10 Extended Security Updates (ESU) options. For consumers, there initially were two paths, both paid: spending $30 or redeeming 1,000 Microsoft Rewards points, although both ESU options covered up to 10 devices (presumably associated with a common Microsoft account). But in spite of my repeated launching of the Windows Update utility over a several-month span, it stubbornly refused to display the ESU enrollment section necessary to actualize my extension aspirations for the system:
My theory at the time was that although the system was registered under my wife’s personal Microsoft account, she’d also associated it with a Microsoft 365 for Business account for work email and such, and it was therefore getting caught by the more complicated corporate ESU license “net”. So, I bailed on the ESU aspiration and bought her a Dell 16 Plus as a replacement, instead:
That I’d done (and to be precise, seemingly had to do) this became an even more bitter already-swallowed pill when Microsoft subsequently added a third, free consumer ESU option, involving backup of PC settings in prep for the delayed Windows 11 migration to still come a year later:
Belated success, and a “tinfoil hat”-theorized root cause-and-effect
And then the final insult to injury arrived. At the beginning of October, a few weeks prior to the Windows 10 baseline end-of-support date, I again checked Windows Update on a lark…and lo and behold, the long-missing ESU section was finally there (and I then successfully activated it on the Inspiron 5570). Nothing had changed with the system, although I had done a settings backup a few weeks earlier in a then-fruitless attempt to coax the ESU to reactively appear. That said, come to think of it, we also had just activated the new system…were I a conspiracy theorist (which I’m not, but just sayin’), I might conclude that Microsoft had just been waiting to squeeze another Windows license fee out of us (a year earlier than otherwise necessary) first.
To that last point, and in closing, a reality check. At the end of the day, “all” we did was to a) buy a new system a year earlier than I otherwise likely would have done, and b) delay the inevitable transition to that new system by a year. And given how DRAM and SSD prices are trending, delaying the purchase by a year might have resulted in an increased cash outlay, anyway. On the other hand, the CPU would have likely been a more advanced model than the one we ended up with, too. So…
A “First World”, albeit baffling, problem, I’m blessed to be able to say in summary. How did your ESU activation attempts go? Let me (and your fellow readers) know in the comments: thanks as always in advance!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Updating an unsanctioned PC to Windows 11
- A holiday shopping guide for engineers: 2025 edition
- Microsoft embraces obsolescence by design with Windows 11
- Microsoft’s Build 2024: Silicon and associated systems come to the fore
The post Windows 10: Support hasn’t yet ended after all, but Microsoft’s still a fickle-at-best friend appeared first on EDN.
Handheld enclosures target harsh environments

Rolec’s handCASE (IP 66/IP 67) handheld enclosures for machine control, robotics, and defense electronics can now be specified with a choice of lids and battery options.
These rugged diecast aluminum enclosures are ideal for industrial and military applications in which devices must survive challenging environments but also be comfortable to hold for long periods.
(Source: Rolec USA)
Robust handCASE can be specified with or without a battery compartment (4 × AA or 2 × 9 V). Two versions are available: S with an ergonomically bevelled lid, and R with a narrow-edged lid to maximize space. Both tops are recessed to protect a membrane keypad or front plate. Inside there are threaded screw bosses for PCBs or mounting plates.
The enclosures are available in three sizes: 3.15″ × 7.09″ × 1.67″, 3.94″ × 8.66″ × 1.67″ and 3.94″ × 8.66″ × 2.46″. As standard, Version S features a black (RAL 9005) base with a silver metallic top, while Version R is fully painted in light gray (RAL 7035).
Custom colors are available on request. They include weather-resistant powder coatings (F9) with WIWeB approvals and camouflage colors for military applications. These coatings are also available in a wet painted finish. They meet all military requirements, including the defense equipment standard VG 95211.
Options and accessories include a shoulder strap, a holding clip and wall bracket, and a corrosion-proof coating in azure blue (RAL 5009).
Rolec can supply handCASE fully customized. Services include CNC machining, engraving, RFI/EMI shielding, screen and digital printing, and assembly of accessories.
For more information, view the Rolec website: https://Rolec-usa.com/en/products/handcase#top
The post Handheld enclosures target harsh environments appeared first on EDN.
AI’s insatiable appetite for memory

The term “memory wall” was first coined in the mid-1990s when researchers from the University of Virginia, William Wulf and Sally McKee, co-authored “Hitting the Memory Wall: Implications of the Obvious.” The research presented the critical bottleneck of memory bandwidth caused by the disparity between processor speed and the performance of dynamic random-access memory (DRAM) architecture.
These findings introduced the fundamental obstacle that engineers have spent the last three decades trying to overcome. The rise of AI, graphics, and high-performance computing (HPC) has only served to increase the magnitude of the challenge.
Modern large language models (LLMs) are being trained with over a trillion parameters, requiring continuous access to data and petabytes per second of bandwidth. Newer LLMs in particular demand extremely high memory bandwidth for training and for fast inference, and the growth rate shows no signs of slowing, with the LLM market size expected to increase from roughly $5 billion in 2024 to over $80 billion by 2033. And the growing gap between CPU/GPU compute performance on one hand and memory bandwidth and latency on the other is unmistakable.
The biggest challenge posed by AI training is in moving these massive datasets between the memory and processor, and here, the memory system itself is the biggest bottleneck. As compute performance has increased, memory architectures have had to evolve and innovate to keep pace. Today, high-bandwidth memory (HBM) is the most efficient solution for the industry’s most demanding applications like AI and HPC.
History of memory architecture
In the 1940s, the von Neumann architecture was developed, and it became the basis for computing systems. This control-centric design stored a program’s instructions and data in the computer’s memory. The CPU fetched instructions and data sequentially, creating idle time while the processor waited for these instructions and data to return from memory. The rapid evolution of processors and the relatively slower improvement of memory eventually created the first system memory bottlenecks.

Figure 1 Here is a basic arrangement showing how processor and memory work together. Source: Wikipedia
As memory systems evolved, memory bus widths and data rates increased, enabling higher memory bandwidths that improved this bottleneck. The rise of graphics processing units (GPUs) and HPC in the early 2000s accelerated the compute capabilities of systems and brought with them a new level of pressure on memory systems to keep compute and memory systems in balance.
This led to the development of new DRAMs, including graphics double data rate (GDDR) DRAMs, which prioritized bandwidth. GDDR was the dominant high-performance memory until AI and HPC applications went mainstream in the 2000s and 2010s, when a newer type of DRAM was required in the form of HBM.

Figure 2 The above chart highlights the evolution of memory in more than two decades. Source: Amir Gholami
The rise of HBM for AI
HBM is the solution of choice to meet the demands of AI’s most challenging workloads, with industry giants like Nvidia, AMD, Intel, and Google utilizing HBM for their largest AI training and inference work. Compared to standard double-data rate (DDR) or GDDR DRAMs, HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint.
It combines vertically stacked DRAM chips with wide data paths and a new physical implementation where the processor and memory are mounted together on a silicon interposer. This silicon interposer allows thousands of wires to connect the processor to each HBM DRAM.
The much wider data bus enables more data to be moved efficiently, boosting bandwidth, reducing latency, and improving energy efficiency. While this newer physical implementation comes at a greater system complexity and cost, the trade-off is often well worth it for the improved performance and power efficiency it provides.
The HBM4 standard, which JEDEC released in April of 2025, marked a critical leap forward for the HBM architecture. It increases bandwidth by doubling the number of independent channels per device, which in turn allows more flexibility in accessing data in the DRAM. The physical implementation remains the same, with the DRAM and processor packaged together on an interposer that allows more wires to transport data compared to HBM3.
While HBM memory systems remain more complex and costlier to implement than other DRAM technologies, the HBM4 architecture offers a good balance between capacity and bandwidth that offers a path forward for sustaining AI’s rapid growth.
AI’s future memory need
With LLMs growing at a rate between 30% and 50% year over year, memory technology will continue to be challenged to keep up with the industry’s performance, capacity, and power-efficiency demands. As AI continues to evolve and find applications at the edge, power-constrained applications like advanced AI agents and multimodal models will bring new challenges such as thermal management, cost, and hardware security.
The future of AI will continue to depend as much on memory innovation as it will on compute power itself. The semiconductor industry has a long history of innovation, and the opportunity that AI presents provides compelling motivation for the industry to continue investing and innovating for the foreseeable future.
Steve Woo is a memory system architect at Rambus. He is a distinguished inventor and a Rambus fellow.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
The post AI’s insatiable appetite for memory appeared first on EDN.
Zero maintenance asset tracking via energy harvesting
Real-time tracking of assets has enabled both supply chain digitalization and operational efficiency leaps. These benefits, driven by IoT advances, have proved transformational. As a result, the market for asset-tracking systems for transportation and logistics firms is set to triple, reaching USD 22.5 billion by 2034¹. And, if we look across all sectors, the asset tracking market is forecasted to grow at a CAGR of 15%, reaching USD 51.2 billion by 2030².
However, the ability for firms to maximize the benefits of asset tracking is being constrained by the finite power limitations of a single component, the battery. Reliance on batteries has a number of disadvantages. In addition to the battery cost, battery replacement across multiple locations increases operational costs and demands considerable time and effort.
At the same time, batteries can cause system-wide vulnerabilities. When a tag’s battery unexpectedly fails, for example, a tracked item can effectively disappear from the network and the corresponding data is no longer collected. This, in turn, leads to supply chain disruptions and bottlenecks, sometimes even production line downtime, and reduces the very efficiencies the IoT-based system was designed to deliver (Figure 1).
Figure 1 Real-time tracking of assets is transforming logistics operations, enabling supply chain digitalization and unlocking major efficiency gains.
Battery maintenance
A “typical” asset tracking tag will implement two core functions: location and communications. For long-distance shipping, GPS will primarily be used as the location identifier. In a logistics warehouse, GPS coverage can be poor, but Wi-Fi scanning remains an option. Other efficient systems include FSK or BLE beacons, Wirepas mesh, or Quuppa’s angle of arrival (AoA).
For data communication, several protocols are possible:
- BLE if the assets remain indoors
- LTE-M if global coverage is a key requirement, and the assets are outdoors
- LoRaWAN if seamless indoor and outdoor coverage is needed, as this can use private, public, community, and satellite networks, with some of them offering native multi-country coverage.
Sensors can also improve functionality and efficiency. For example, an accelerometer can be added to identify when a tag moves and then initiate a wake-up. Other sensors can determine a package’s status and condition. In the case of energy harvesting, the power management chip can indicate the amount of energy that is available. Therefore, the behavior of the device can also be adapted to this information. The final important component on the board of an asset tracker will be an energy-efficient MCU.
The stated battery life of a 15-dollar tag will often be overestimated. This will mainly be due to the radio protocol behaviors. But even if the battery cost itself is limited, the replacement cost can be estimated at around 50 dollars once man-hours are factored into this.
An alternative tag based on the latest energy harvesting technology might have an initial cost of around 25 dollars, but with no batteries to replace, its total cost over a decade remains essentially the same, whereas even a single battery replacement already pushes a 15-dollar tag above that level.
For example, in the automotive industry, manufacturers transport parts using large reusable metal racks. Each manufacturer will use tens of thousands of these, each valued at around 500 dollars. We have been told that, because of scanning errors and mismanagement, up to 10 percent go missing each year.
By equipping racks with tags powered from harvested energy, companies can create an automated inventory system. This results in annual OPEX savings that can be in the order of millions of dollars, a return on investment within months, and lower CAPEX since fewer racks are required for the same production volume.
Self-powered tracking
Unlike battery-powered asset trackers, Ambient IoT tags use three core blocks to supply energy to the system: the harvester, an energy storage element, and a power management IC. Together, these enable energy to be harvested as efficiently as possible.
Energy sources can range from RF through thermoelectric to vibration, but for many logistics and transport applications, the most readily available and most commonly used source is light. And this will be natural (solar) or ambient, depending on whether the asset being tracked spends most of its life outdoors (e.g., a container) or indoors (e.g., a warehouse environment).
For outdoor asset trackers on containers or vehicles, significant energy can be harvested from direct sunlight using traditional photovoltaic (PV) amorphous silicon panels. When space is limited, monocrystalline silicon technology provides a higher power density and still works well indoors. For indoor light levels, in addition to the traditional amorphous silicon, there are three additional technologies that become available and cost-effective for these use cases.
- Organic photovoltaic (OPV) cells can provide up to twice the power density of amorphous silicon. Furthermore, the flexibility of these PV cells allows for easy mechanical implementation on the end device.
- Dye-sensitized solar cells bring even higher power densities and exhibit low degradation levels over time, but they are sometimes limited by the requirement for a glass substrate, which prevents flexibility.
- Perovskite PV cells also reach similar power densities as dye-sensitized solar cells, with the possibility of a flexible substrate. However, these have challenges related to lead content and aging.
Before selecting a harvester, an evaluation of the PV cell should be undertaken. This should combine both laboratory measurements and real-world performance tests, along with an assessment of aging characteristics (to ensure that the lifetime of the PV cell exceeds the expected end-of-life of the tracker) and mechanical integration into the casing. The manufacturer chosen to supply the technology should also be able to support large-scale deployments.
When it comes to energy storage, such a system may require either a small, rechargeable chemical-based battery or a supercapacitor. Alternatively, there is the lithium capacitor (a hybrid of the two). Each has distinct characteristics regarding energy density and self-discharge. The right choice will depend on a number of factors, including the application’s required operating temperature and longevity.
Finally, a power management IC (PMIC) must be chosen. This provides the interface between the PV cell and the storage element, and manages the energy flow between the two, something that needs to be done with minimal losses. The PMIC should be optimized to maximize the lifespan of the energy storage element, protecting it from overcharging and overdischarging, while delivering a stable, regulated power output to the tag’s application electronics (Figure 2).
For an indoor industrial environment, where ambient light levels can be low, there is the risk of the storage element becoming fully depleted. It is therefore crucial that the PMIC can perform a cold start in these conditions, when only a small amount of energy is available.
In developing the most appropriate system for a given asset tracking application, it will be important to undertake a power budget analysis. This will consider both the energy consumed by the application and the energy available for harvesting. With the size of the device and its power consumption, it is relatively straightforward to determine the number of hours per day and the luminosity (lux level) for any given PV cell technology to make the device capable of autonomously running by harvesting more energy over a 24-hour period than it consumes.
The storage element size is also critical as it determines how long the device can operate without any power at the source. And even if power consumption is too high to make it fully autonomous, the application of energy harvesting can be used to significantly extend battery life.
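A power-budget analysis of this kind can be reduced to a few lines. In the Python sketch below, every number (the cell area, the indoor harvest density at a given lux level, the lit hours, and the average tag consumption) is an illustrative assumption rather than a measured value or a vendor specification; the structure of the comparison is the point.

```python
# Sketch of the power-budget analysis described above. The indoor PV power
# density (uW/cm^2 at a given lux level) and the device figures are
# illustrative assumptions, not measured values or vendor specifications.

PV_AREA_CM2 = 20.0          # assumed cell area on the tag
PV_UW_PER_CM2 = 10.0        # assumed harvest density at ~500 lux, indoors
LIGHT_HOURS_PER_DAY = 10.0  # assumed lit hours in the warehouse

AVG_LOAD_UW = 60.0          # assumed average tag consumption (sleep + beacons)

harvested_uwh = PV_AREA_CM2 * PV_UW_PER_CM2 * LIGHT_HOURS_PER_DAY  # per day
consumed_uwh = AVG_LOAD_UW * 24.0                                  # per day

print(f"harvested ~{harvested_uwh:.0f} uWh/day vs consumed ~{consumed_uwh:.0f} uWh/day")
print("energy-autonomous" if harvested_uwh >= consumed_uwh else "needs battery assist")
```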
Figure 2 e-peas has worked with several leading tracking system developers, including MOKO SMART (top), Minew (left), inVirtus (center), and Jeng IoT (right), to implement energy harvesting in asset trackers. Source: e-peas
Examples of energy-harvested tracking systems
Companies such as inVirtus, Jeng IoT, Minew, and MOKO SMART, all leaders in developing logistics and transportation tracking systems, have already started transitioning to energy-harvesting-powered asset trackers. And notably, these devices are delivering significant returns in complex logistical environments.
Minew’s device, for example, uses Epishine’s ultra-thin solar cells to create a credit card-sized asset tracker. MOKO SMART’s L01A-EH is a BLE-based tracker with a three-axis accelerometer plus temperature and humidity sensors. Placed on crates, these tags track each crate’s journey through a production process, giving precise data on lead times and dwell times at each station so that efficiency can be monitored and bottlenecks highlighted.
A good example of these benefits can be found at Thales, where the inVirtus EOSFlex Beacon battery-free tag is in use. After switching to a system in which each work order is digitally linked to a tagged box, the company reported saving 30 minutes of tracking time per part movement when monitoring work orders. Because each area of the factory corresponds to a specific task, the tag’s indoor location provides accurate manufacturing process monitoring.
Additionally, the system saves time by selecting the highest priority task and activating a blinking LED on the corresponding box. It also improves both lead time prediction accuracy and scheduling adherence—the alignment between the planned schedule and actual work progress.
The tags have also been used to locate measurement equipment shared by multiple divisions, and Thales has reported savings of up to two hours when locating these pieces of equipment. This is a critical difference as each instance of downtime represents a major cost, and without this tracking, the company would incur significant maintenance delays that could stop the production line.
Additionally, one aviation manufacturer using the same approach to track work orders has improved scheduling adherence from 30% to 90%.
Ultimately, energy harvesting in logistics is not simply about eliminating batteries, but about building more resilient, predictable, and cost-effective supply chains. Perpetually powered tracking systems provide constant, reliable visibility, enable more accurate lead-time predictions and better resource planning, and significantly reduce the operational friction caused by lost or untraceable assets.
Pierre Gelpi graduated from École Polytechnique in Paris and obtained a Master’s degree from the University of Montreal in Canada. He has 25 years of experience in the telecommunications industry. He began his career at Orange Labs, where he spent eight years working on radio technologies and international standardization. He then served for five years as Technical Director for large accounts at Orange Business Services. After Orange, he joined Siradel, where he led sales and customer operations for wireless network planning and smart city projects, notably in Chile. He subsequently co-founded the first SaaS-based radio planning tool dedicated to IoT.
In 2016, he joined Semtech, where he was responsible for LoRa business development in the EMEA region, driving demand creation to accelerate market growth, particularly in the track-and-trace segment. He joined e-peas in 2024 to lead Sales in EMEA and to promote the vision of unlimited battery life.
Related Content
- Energy harvesting gets really personal
- Circuits for RF Energy Harvesting
- Lightning as an energy harvesting source?
- 5 key considerations in IoT asset tracking design
The post Zero maintenance asset tracking via energy harvesting appeared first on EDN.
AI workloads demand smarter SoC interconnect design

Artificial intelligence (AI) is transforming the semiconductor industry from the inside out, redefining not only what chips can do but how they are created. This impacts designs from data centers to the edge, including endpoint devices for autonomous driving, drones, gaming systems, robotics, and smart homes. As complexity pushes beyond the limits of conventional engineering, a new generation of automation is reshaping how systems come together.
Instead of manually placing every switch, buffer, and timing pipeline stage, engineers can now use automation algorithms to generate optimal network-on-chip (NoC) configurations directly from their design specifications. The result is faster integration and shorter wirelengths, driving lower power consumption and latency, reduced congestion and area, and a more predictable outcome.
Below are the key takeaways of this article about AI workload demands in chip design:
- AI workloads have made conventional, manually crafted SoC interconnect design impractical.
- Intelligent automation applies engineering heuristics to generate and optimize NoC architectures.
- Physically aware algorithms enhance timing closure, reduce power consumption, and shorten design cycles.
- Network topology automation is enabling a new class of AI system-on-chips (SoCs).
Machine learning guides smarter design decisions
As SoCs become central to AI systems, spanning high-performance computing (HPC) to low-power devices, the scale of on-chip communication now exceeds what traditional methods can manage effectively. Integrating thousands of interconnect paths has created data-movement demands that make automation essential.
Engineering heuristics analyze SoC specifications, performance targets, and connectivity requirements to make design decisions. This automation optimizes the resulting interconnect for throughput and latency within the physical constraints of the device floorplan. While engineers still set objectives such as bandwidth limits and timing margins, the automation engine ensures the implementation meets those goals with optimized wirelengths, resulting in lower latency and power consumption and reduced area.
This shift marks a new phase in automation. Decades of accumulated engineering heuristics are now captured in algorithms that design the silicon enabling AI itself. By automatically exploring thousands of variations, NoC automation determines optimal topology configurations that meet bandwidth goals within the physical constraints of the design. This front-end intelligence enables earlier architectural convergence and provides the stability needed to manage the growing complexity of SoCs for AI applications.
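The search loop behind such exploration can be illustrated with a deliberately simplified sketch: enumerate candidate switch placements on a toy floorplan, discard candidates that cannot carry the aggregate traffic, and keep the one with the shortest total wirelength. The IP blocks, coordinates, bandwidth figures, and switch capacity below are invented for illustration; this is a conceptual example of automated topology exploration, not Arteris’ algorithm or tooling.

```python
# Toy illustration of automated NoC topology exploration: place one switch on a
# grid floorplan to minimize total Manhattan wirelength while meeting an
# aggregate bandwidth constraint. Purely conceptual; not any vendor's algorithm.
from itertools import product

# Hypothetical IP blocks: (x, y) placement in mm and required bandwidth in GB/s
agents = {
    "cpu":  ((1.0, 1.0), 32),
    "npu":  ((6.0, 1.5), 128),
    "dma":  ((2.0, 5.0), 16),
    "hbm0": ((6.5, 5.5), 128),
}
SWITCH_CAPACITY_GBPS = 512   # assumed aggregate throughput of one switch

def wirelength(switch_xy):
    """Sum of Manhattan distances from every agent to the candidate switch."""
    sx, sy = switch_xy
    return sum(abs(sx - x) + abs(sy - y) for (x, y), _ in agents.values())

def feasible(_switch_xy):
    """Keep candidates whose total traffic fits within the switch capacity."""
    return sum(bw for _, bw in agents.values()) <= SWITCH_CAPACITY_GBPS

# Exhaustively score candidate switch locations on a 0.5 mm grid
candidates = [
    (x / 2, y / 2)
    for x, y in product(range(0, 15), range(0, 13))
    if feasible((x / 2, y / 2))
]
best = min(candidates, key=wirelength)
print(f"Best switch location: {best}, total wirelength: {wirelength(best):.1f} mm")
```

A production flow explores far richer variables (switch count, pipeline stages, link widths, routing around macros), but the structure is the same: generate candidates, check constraints, and score them against physical cost.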
Accelerating design convergence
In practice, automation generates and refines interconnect topologies based on system-level performance goals, eliminating the need for repeated, labor-intensive manual adjustments, as shown in Figure 1. These capabilities enable rapid exploration and convergence across multiple design configurations, shortening NoC iteration times by up to 90%. The benefits compound as designs scale, allowing teams to evaluate more options within a fixed schedule.

Figure 1 Automation replaces manual NoC generation, reducing power and latency while improving bandwidth and efficiency. Source: Arteris
Equally important, automation improves predictability. Physically aware algorithms recognize layout constraints early, minimizing congestion and improving timing closure. Teams can focus on higher-level architectural trade-offs rather than debugging pipeline delays or routing conflicts late in the flow.
AI workloads place extraordinary stress on interconnects. Training and inference involve moving vast amounts of data between compute clusters and high-bandwidth memory, where even microseconds of delay can affect throughput. Automated topology optimization keeps traffic flowing smoothly to maintain consistent operation under heavy loads.
Physical awareness drives efficiency
In 3-nm technologies and beyond, routing wire parasitics are a significant factor in energy use. Automated NoC generation incorporates placement and floorplan awareness, optimizing wirelength and minimizing congestion to improve overall power efficiency.
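As a rough illustration of why wirelength dominates interconnect energy, the sketch below estimates the dynamic energy per transferred bit from wire capacitance using the standard switching-energy relation E = αCV². The capacitance per millimeter, supply voltage, and activity factor are generic assumed values, not process data for any particular node.

```python
# Rough dynamic-energy estimate for a NoC link wire: E = alpha * C_wire * V^2
# per transferred bit, where C_wire scales with routed length.
# All numbers are generic assumptions, not foundry or product data.
C_PER_MM_F = 0.2e-12      # assumed wire capacitance: 0.2 pF per mm
VDD = 0.65                # assumed supply voltage for an advanced node
ALPHA = 0.5               # assumed switching activity factor

def link_energy_per_bit(length_mm: float) -> float:
    """Dynamic energy (joules) to toggle one wire of the given routed length."""
    return ALPHA * C_PER_MM_F * length_mm * VDD ** 2

for length in (1.0, 3.0, 6.0):
    e_pj = link_energy_per_bit(length) * 1e12
    print(f"{length:>4.1f} mm route: ~{e_pj:.3f} pJ/bit")

# Shortening a 6 mm route to 3 mm halves the wire's dynamic energy per bit,
# which is the lever that placement-aware NoC generation pulls.
```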
Physically guided synthesis accelerates final implementation, allowing designs to reach timing closure faster, as Figure 2 illustrates. This approach provides a crucial advantage as interconnects now account for a large share of total SoC power consumption.

Figure 2 Smart NoC automation optimizes wirelength, performance, and area, delivering faster topology generation and higher-capacity connectivity. Source: Arteris
The outcome is silicon optimized for both computation and data movement. Automation enables every signal to take the best route possible within physical and electrical limits, maximizing utilization and overall system performance.
Additionally, automation delivers measurable gains in AI architectures. For example, in data centers, automated interconnect optimization manages multi-terabit data flows among heterogeneous processors and high-bandwidth memory stacks.
At the edge, where latency and battery life are critical, automation enables SoCs to process data locally without relying on the cloud. Across both environments, interconnect fabric automation ensures that systems meet escalating computational demands while remaining within realistic power envelopes.
Automation in designing AI
Automation has become both the architect and the workload. Automated systems can be used to explore multiple design options, optimize for power and performance simultaneously, and reuse verified network templates across derivative products. These advances redefine productivity, allowing smaller engineering teams to deliver increasingly complex SoCs in less time.
By embedding intelligence into the design process, automation transforms the interconnect from a passive conduit into an active enabler of AI performance. The result is a new generation of optimized silicon, where the foundation of computing evolves in step with the intelligence it supports.
Automation has become indispensable for next-generation SoCs, where the pace of architectural change exceeds traditional design capacity. By combining data analysis, physical awareness, and adaptive heuristics, engineers can build systems that are faster, leaner, and more energy efficient. These qualities define the future of AI computing.
Rick Bye is director of product management and marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.
Special Section: AI Design
The post AI workloads demand smarter SoC interconnect design appeared first on EDN.
Plastic TVS devices meet MIL-grade requirements

Microchip’s JANPTX transient voltage suppressors (TVS) are among the first to achieve MIL-PRF-19500 qualification in a plastic package. With a working voltage range from 5 V to 175 V, the JANPTX family provides a lightweight, cost-effective alternative to conventional hermetic TVS devices while maintaining required military performance.

Rated for a peak pulse power of 1.5 kW (10/1000 µs waveform) and featuring response times under 100 ps (per internal testing), the devices protect sensitive electronics in aerospace and defense systems. These surface-mount, unidirectional TVS diodes mitigate voltage transients caused by lightning strikes, electrostatic discharge, and electrical surges.
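Because the rating is expressed as peak pulse power, a quick calculation converts it into the peak current the device can absorb at a given clamping voltage (I_pp = P_pk / V_clamp). The clamping voltages in the sketch below are placeholders chosen to span the family’s working-voltage range, not values taken from the JANPTX datasheet.

```python
# Convert a TVS peak-pulse-power rating into peak current capability:
# I_pp = P_pk / V_clamp for the standard 10/1000 us exponential pulse.
P_PK_W = 1500.0                      # 1.5 kW rating cited in the article

# Placeholder clamping voltages for illustration only, not datasheet values
for v_clamp in (9.2, 53.3, 243.0):
    i_pp = P_PK_W / v_clamp
    print(f"V_clamp = {v_clamp:>6.1f} V  ->  I_pp = {i_pp:.0f} A")
```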
JANPTX TVS devices safeguard airborne avionics, electrical systems, and other mission-critical applications where low voltage and high reliability are required. They protect against switching transients, RF-induced effects, EMP, and secondary lightning, meeting IEC 61000-4-2, -4-4, and -4-5 standards.
Available now in production quantities, the JANPTX product line spans five device variants with multiple JAN-qualified ordering options. View the datasheet for full specifications and application information.
The post Plastic TVS devices meet MIL-grade requirements appeared first on EDN.