EDN Network

Voice of the Engineer

Negative resistance amplification

Fri, 04/10/2026 - 15:00

We once looked at how conducted emissions testing could be affected by the negative input impedance of a switch-mode power supply. Please see: “Conducted Emissions testing.”

Digital data signals that a client’s electric power company was putting on the power lines were being amplified by the negative input impedance of the power supply under test. That made it look as though the power supply itself was generating conducted emissions, which, in fact, it was not.

I have since been asked, “How can a negative impedance result in amplification?” The sketch below illustrates how that can come about.

Figure 1 Negative resistance amplification.

Let our “impedance” in question be a resistance. In our sketch, voltages E2 and E4 are derived by voltage dividers from identical “Esig” sources for which standard voltage division equations apply. What is NOT standard here is that we are going to set R4 to negative numerical values.

My SPICE simulator will not let me assign a negative number to any resistance value (I think of that as picky, picky, picky!), but given that limitation, the voltage divider equations can instead be set up in GWBASIC. Line 150 of that code is where that happens.

With R1 and R3 arbitrarily set to 1K each and held there, we vary R2 and R4 together as shown to look at the effects on outputs E2 and E4, where we find the following.

E2 is always less than Esig. It varies with the choice of value for R2, but it never exceeds the source voltage.

E4, on the other hand, is always greater than Esig. It varies with the negative value chosen for R4, but it always exceeds the source voltage.
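The GWBASIC listing itself is not reproduced here, but the divider arithmetic is easy to sketch in Python. The resistor values below are illustrative choices, not the article’s exact sweep:

```python
# Standard voltage divider: output taken across the bottom resistor.
def divider(esig, r_top, r_bottom):
    return esig * r_bottom / (r_top + r_bottom)

esig = 1.0      # normalized source voltage
r1 = r3 = 1e3   # fixed at 1 kOhm, as in the sketch

# Positive R2: E2 is always attenuated relative to Esig.
for r2 in (500.0, 1e3, 2e3, 5e3):
    assert divider(esig, r1, r2) < esig

# Negative R4 (more negative than -R3): E4 always exceeds Esig.
for r4 in (-1.5e3, -2e3, -5e3, -10e3):
    assert divider(esig, r3, r4) > esig

print(divider(esig, r3, -2e3))  # 2.0, i.e., twice the source voltage
```

With R4 = -2 kΩ against R3 = 1 kΩ, the divider ratio is -2000/(1000 - 2000) = 2, so the “divider” doubles the signal rather than attenuating it.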

This effect on E4 is the amplification effect referred to in the earlier essay.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


The post Negative resistance amplification appeared first on EDN.

GaN ICs drive robotics and motion control

Thu, 04/09/2026 - 23:52

Four 100-V GaN power-stage ICs from EPC are optimized for motor drives in humanoid robots, drones, and battery-powered platforms. The EPC23108, EPC23109, EPC23110, and EPC23111 integrate a gate driver, high- and low-side eGaN FETs, and level-shifting circuitry in a half-bridge configuration. They support operation up to 100 V with load currents of 35 A (EPC23108, EPC23109) and 20 A (EPC23110, EPC23111).

The control interface includes an active-low fast-shutdown and standby input with a 65-kΩ pull-up. It meets industrial logic standards, letting designers connect directly to standard controllers. This simplifies designs and ensures consistent operation across platforms. Safety is enhanced through deterministic shutdown. 

The series supports continuous 100% duty-cycle operation, enabling full-torque operation and uninterrupted conduction in motion control, robotics, and precision regulation systems. The EPC23109 and EPC23111 offer a single-pin PWM input with enable logic and fixed dead time, simplifying multi-axis designs. The EPC23108 and EPC23110 feature dual PWM inputs for adaptive dead-time modulation.

Engineering samples are available for qualified designs. The EPC23108, EPC23109, EPC23110, and EPC23111 can be ordered through EPC’s distributor partners.

Efficient Power Conversion 


Tiny filters curb 5-GHz audio-line noise

Thu, 04/09/2026 - 23:52

Built with low-distortion ferrite material, TDK’s MAF0603GWY series of filters attenuates noise on audio lines in the 5-GHz band. The filters fit in a compact 0.6×0.3×0.3-mm package for use in small consumer devices like smartphones and wearables with Bluetooth and Wi-Fi audio lines.

Electromagnetic noise radiated from audio lines in electronic devices can interfere with the internal antenna and reduce receiver sensitivity. While chip beads are commonly used to suppress noise, they can degrade sound quality.

TDK reports its newly developed ferrite material minimally affects audio-line characteristics while reducing distortion. The filters provide high attenuation at 5 GHz (impedance up to 3220 Ω) to suppress noise. With lower resistance than conventional products, they also limit attenuation of the audio signals themselves, enabling a wide dynamic range.

Mass production of the MAF0603GWY series is set to begin in April 2026.

TDK


Photovoltaic driver streamlines EV power designs

Thu, 04/09/2026 - 23:51

The VODA1275, a photovoltaic MOSFET driver from Vishay, increases safety and reliability in high-voltage automotive applications. The device provides a typical open-circuit voltage of 20 V, short-circuit current of 20 µA, and turn-on time of 80 µs—said to be three times faster than competing devices.

The AEC-Q102-qualified device targets pre-charge circuits, wall chargers, and battery management systems for EVs and HEVs. Its high open-circuit output voltage allows a single driver to be used, removing the need for two devices in series to generate higher voltages. The VODA1275 also enables custom solid-state relays to replace electromechanical relays in next-generation vehicles. 

A working isolation voltage of 1260 Vpeak and isolation test voltage of 5300 VRMS make the driver well-suited for 800-V+ battery systems. The device comes in a compact SMD-4 package with an 8-mm creepage distance and a mold compound with a CTI of 600.

Samples and production quantities of the VODA1275 are available now, with lead times of eight weeks.

VODA1275 product page 

Vishay Intertechnology 


Shielded inductors reduce emissions in tight layouts

Thu, 04/09/2026 - 23:51

Bourns’ SRP2008DP series of shielded power inductors provides the saturation current needed for dense DC/DC converter designs and miniature electronic devices. These low-profile devices, with dimensions of just 2.0×1.6×0.8 mm, enable use in compact circuits with minimal routing changes.

The eight inductors in the SRP2008DP series cover inductances from 0.24 µH to 4.70 µH, heating current (IRMS) from 1.10 A to 3.50 A, and saturation current (ISAT) from 1.60 A to 5.50 A. DC resistance ranges from 36 mΩ to 468 mΩ, and operating temperature spans -40°C to +125°C.

In crowded layouts, radiated emissions and magnetic coupling can compromise signal integrity and complicate EMC compliance. The SRP2008DP series addresses these issues with a small, shielded package and a metal-alloy powder core. The shielded design contains magnetic flux, reducing emissions to nearby circuitry, while the high-resistivity core suppresses eddy currents and limits core losses at high switching frequencies. Contained flux also minimizes coupling to adjacent traces, lowering interference in densely populated layouts.

The SRP2008DP series is available through Bourns’ authorized distributors. Request samples here.

SRP2008DP product page

Bourns


RISC-V SoC supports voice-enabled IoT devices

Thu, 04/09/2026 - 23:51

Espressif Systems is sampling its ESP32-S31 dual-core RISC-V SoC with Wi-Fi 6, Bluetooth 5.4, Thread, Zigbee, and Ethernet. Rich HMI and security features make it well-suited for IoT applications such as consumer and industrial appliances, voice-controlled devices, and automation systems.

Running at 320 MHz, the ESP32-S31’s 32-bit RISC-V microcontroller achieves 6.86 CoreMark/MHz and integrates a memory management unit and 60 GPIOs for design flexibility. One of its two cores features a 128-bit-wide SIMD data path for fast parallel processing. Memory resources comprise 512 KB SRAM and support for 250-MHz, 8-bit DDR PSRAM, with concurrent flash and PSRAM access. External memory expansion (up to octal SPI) further supports memory-intensive multimedia and AI/ML workloads at the edge.

The ESP32-S31’s HMI capabilities include a DVP camera interface, LCD support, and up to 14 capacitive touch channels. Security features span secure key management, secure boot, flash and PSRAM encryption, cryptographic hardware acceleration, and a trusted execution environment. Supported by Espressif’s open-source IoT Development Framework, the device works with common LLMs to build voice-enabled client devices that run or interact with AI agents.

To request samples of the ESP32-S31 SoC, contact Espressif’s customer support team.

ESP32-S31 product page 

Espressif Systems 


Leveling up Industry 4.0

Thu, 04/09/2026 - 22:45

Industry 4.0 is all about transforming manufacturing processes with advances in smart capabilities, data connectivity, and automation. It encompasses devices from sensors that capture data to motors, motor control, and power devices that have a big impact on efficiency. Edge computing is also playing a larger role in combating challenges around latency, particularly in safety-critical applications, and cybersecurity is critical for protecting connected devices.

Illustration of smart industry elements.(Source: Adobe Stock)

The March/April issue covers some of the key components that are vital to Industry 4.0, from new sensing approaches such as event-based sensing that enable faster and more reliable decisions to the latest designs in power devices that deliver higher efficiency in industrial systems. We also look at designing edge AI for industrial systems and at designing industrial IoT systems for cybersecurity.

Machine vision plays a big role in industrial automation applications, ranging from object tracking to vibration monitoring. Prophesee believes the industry should be rethinking machine vision in industrial automation, addressing challenges around latency, data processing, and decision-making.

“As industrial systems move toward higher levels of automation and autonomy, vision is becoming a core component of the perception pipeline,” said Thibaut Willeman, head of business development and go-to-market at Prophesee.

This is driving the demand for new sensing approaches that address these challenges: reducing latency, limiting unnecessary data, and enabling faster and more reliable decisions, he added.

Willeman explains how event-based vision addresses these challenges: “By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared with traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.”

Applications that can benefit from event-based vision include industrial automation, IoT, automotive, and edge applications.

Another component area that has a large impact on applications in the Industry 4.0 world is power electronics. As factories, energy systems, and data centers get smarter and more connected, they require more efficient power solutions that offer high power density, said Stefano Lovati, contributing writer.

Lovati discusses some of the latest approaches to designing, packaging, and controlling power devices to deliver higher efficiency, flexibility, and scalability. One of the most significant changes in power systems is the move to 800-VDC distribution in data centers.

There is also a key focus on wide-bandgap materials such as silicon carbide (SiC) and gallium nitride (GaN). SiC can operate efficiently and provide high reliability in high-voltage and high-power environments, thanks to its high breakdown voltage, low switching losses, and high thermal conductivity. GaN, suited for low- and medium-voltage applications, can switch at high frequencies, up to the megahertz range, with very low power loss, making power converters more efficient and smaller while requiring less cooling, Lovati said.

In addition, GaN is delivering on integration, which is helping to simplify power design.

Another big element of implementing smart manufacturing within Industry 4.0 is motor control ICs and motor drives. Similar to power devices, a big challenge is efficiency. “About 50% of global energy consumption is due to electric motors, and therefore, even a moderate improvement in efficiency can provide meaningful economic benefits, helping reduce the carbon footprint,” Lovati reports.

These modern industrial motor solutions are smart and connected with advanced capabilities to identify irregularities such as excessive heat or voltage surges and respond automatically. Lovati said the introduction of AI technologies brings this function to the next level, allowing predictive maintenance and reducing factory downtime.

He covers everything from motor driver architecture and connectivity in smart motor control to AI and ML integration and software tools.

Edge computing is becoming critical for real-time data processing in industrial automation. Industrial manufacturing systems require real-time decision-making, adaptive control, and autonomous operation, but many cloud-dependent architectures can’t deliver the millisecond response required for safety-critical functions such as robotic-collision avoidance, in-line quality inspection, and emergency shutdown, said Sam Al-Attiyah, head of machine learning at Infineon Technologies AG.

Al-Attiyah said edge AI addresses high-performance and low-latency requirements by embedding intelligence directly into industrial devices and enabling local processing to support machine-vision workloads for real-time defect detection, adaptive process control, and responsive human-machine interfaces that react instantly to dynamic conditions.

He outlines an approach to designing edge AI systems for industrial applications, covering everything from requirements analysis to deployment and maintenance.

Security is also a growing concern and an industry requirement as more devices are connected in industrial environments. Francesco Vaiani, senior product manager at Seco, looks at how designing for industrial IoT systems is changing to meet the European Cyber Resilience Act and the cybersecurity extension of the Radio Equipment Directive. This marks a structural shift in how connected products must be designed, documented, and maintained, he said.

For industrial OEMs, this means more than documentation updates and demands architectural decisions that remain technically defensible throughout the operational lifetime of the device, which often exceeds 10 years, Vaiani said.

Also in this issue, we select the top 10 DC/DC converters introduced over the past year. DC/DC converter manufacturers continue to focus on two big areas: delivering higher efficiency and offering greater flexibility.

Don’t miss the APEC 2026 product roundup. This annual conference showcases the latest in power electronics devices and solutions across industries. Some of these power devices highlight major technology advances in areas such as topologies and packaging, along with meeting growing demand for higher efficiency and higher power density. They also address system complexity by helping to simplify power design.


METCASE expands accessory options for enclosures

Thu, 04/09/2026 - 20:48

METCASE’s new enclosure accessories brochure features its expanded range of options including metal tilt/swivel bail arms, a wide range of molded enclosure feet, PCB mounting parts, 19″ front panels, rack shelves and rack hardware.

METCASE's brochure of new accessories for electronic enclosures.(Source: METCASE USA)

These universal accessories fit METCASE models and other manufacturers’ enclosures, as well as bespoke OEM equipment housings. Applications include networking, communications, laboratory instrumentation, industrial control, test/measurement, peripherals, interfaces and medical devices.

Bail arms with 30° indexing double as desk stands. The aluminum handle profile (ordered separately) fits between two diecast side arms. It is supplied cut to the required length for the customer’s enclosure. The bail arms are available in a range of color combinations including off-white, anthracite, light gray, black and traffic white.

METCASE’s recently expanded range of molded ABS (UL 94 HB) enclosure feet kits can be specified with or without tilt legs. They are suitable for metal and plastic enclosures. There are two models: the robust CASE FEET and the designer TECHNOFEET. The feet are easy to fit (just three holes required) with the fixing screws supplied. TPE non-slip inserts are included to prevent the enclosure from skidding on the desk. Choose from five standard colors: off-white, traffic grey A, light gray, black and anthracite.

For mounting circuit boards, METCASE offers a range of snap-in guides (for slide-in PCB fitment) in different lengths and for board thicknesses from 0.031″ to 0.078″. For screw fitting PCBs to enclosure panels, there is a kit that includes M3 PCB pillars (0.394″ high) and mounting hardware.

METCASE also offers a range of accessories for 19″ racks. This includes matt anodized aluminum 10.5″/19″ front panels (ventilated or unventilated) in all standard heights from 1U to 6U; the 10.5″ front panels are 3U and 4U. There are also mild steel CR4 2U cantilever rack shelves for mounting equipment without rack brackets. Choose from two depths, 11.02″ or 15.75″, in light gray or anthracite. 19″ equipment mounting kits include four bolts, four cup washers (black or gray) and four caged nuts.

For further information, view the METCASE website and download the accessories brochure: https://www.metcaseusa.com/en/Accessories/Accessories-for-Enclosures.htm



Advancing AI performance with HBM4, SPHBM4 DRAM solutions

Thu, 04/09/2026 - 18:15

Over the past two decades, the raw compute capability of processors used in high‑performance computing (HPC) and artificial intelligence (AI) systems has increased at an extraordinary pace. Figure 1 illustrates this trend: XPU floating‑point performance has scaled by more than 90,000×, while DRAM bandwidth and interconnect bandwidth have improved by only about 30× over the same period.

Figure 1 The above chart highlights increases in XPU performance and interconnect bandwidth over 20 years.

This growing disparity between compute capability and data movement—often described as the memory wall and the I/O wall—has become one of the most significant constraints on achievable system performance.

For system designers, this imbalance translates directly into underutilized compute resources, rising power consumption, and increasing architectural complexity. As a result, memory bandwidth and packaging technologies have become just as critical to AI performance scaling as transistor density or core count.

HBM as a foundation for modern AI architectures

To address these bandwidth challenges, HPC and AI systems have increasingly adopted disaggregated architectures built around chiplets. While LPDDR and DDR memories continue to play important roles, high bandwidth memory (HBM) has emerged as the highest‑bandwidth DRAM solution available and a key enabler for modern accelerators.

HBM devices consist of a buffer (or base) die at the bottom and multiple 3D‑stacked DRAM layers above it. The buffer die uses very fine‑pitch micro‑bumps, allowing the memory stack to be co‑packaged with an ASIC using advanced packaging technologies such as silicon interposers or silicon bridges. Supported by rigorous standardization through the JEDEC HBM task group, HBM has become one of the most successful and widely adopted examples of chiplet‑based integration in production systems.

Figure 2 shows a representative side view of an HBM DRAM stack connected to an ASIC through a silicon interposer.

Figure 2 Here is how an HBM DRAM stack is connected to an ASIC through a silicon interposer. Source: Eliyan

A widely deployed example of HBM in practice is Nvidia’s B100 Blackwell accelerator, shown in Figure 3. The package contains two large, reticle‑sized XPU dies connected to one another through high‑bandwidth links, with HBM devices placed along the top and bottom edges of each die. Each XPU die integrates four HBM stacks—two on each long edge—resulting in a total of eight HBM devices per package.

Figure 3 Nvidia’s B100 Blackwell accelerator uses two XPUs connected to eight HBMs in a single package. Source: Nvidia

Using typical HBM3 specifications available at the time the JEDEC standard was adopted, each HBM3 device could employ an 8‑high stack of 16-Gb DRAM layers, providing 16 GB of capacity per stack. With a data rate of 6.4 Gb/s and 1,024 I/Os, each HBM3 device delivers approximately 0.8 TB/s of bandwidth. Across eight devices, this configuration provides 128 GB of total memory capacity and roughly 6.6 TB/s of aggregate bandwidth.
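Those figures follow directly from the stack geometry. A quick sanity check of the article’s numbers:

```python
def hbm_device(layers, layer_gb, rate_gbps, ios):
    """Per-device capacity (GB) and bandwidth (TB/s) from stack geometry."""
    capacity_gb = layers * layer_gb / 8            # Gb -> GB
    bandwidth_tbps = rate_gbps * ios / 8 / 1000    # Gb/s across all I/Os -> TB/s
    return capacity_gb, bandwidth_tbps

# HBM3: 8-high stack of 16-Gb layers, 6.4 Gb/s on 1,024 I/Os
cap, bw = hbm_device(8, 16, 6.4, 1024)
print(cap)              # 16.0 GB per stack
print(bw)               # ~0.82 TB/s per device
print(8 * cap, 8 * bw)  # 128 GB and ~6.55 TB/s for the eight-device package
```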

HBM4: Scaling bandwidth and capacity

To continue scaling memory performance alongside compute, JEDEC recently published JESD270‑4, the HBM4 standard. HBM4 introduces a number of architectural improvements over HBM3 that directly address the growing bandwidth and capacity requirements of AI workloads.

One of the most significant changes in HBM4 is a doubling of the channel count, increasing the number of I/Os from 1,024 to 2,048. In parallel, supported data rates have increased into the 6–8 Gb/s range and beyond. Memory density has also scaled, with 24-Gb and 32-Gb DRAM layers specified, along with support for 12-high and 16-high stacks. Reliability, availability, and serviceability (RAS) features—including directed refresh management (DRFM)—have also been enhanced.

Taken together, these advances enable substantial improvements in bandwidth, power efficiency, and capacity relative to HBM3. As an illustrative example, an HBM4e device using a 16‑high stack of 32 Gb layers provides 64 GB of capacity per device, as shown in Figure 4.

Figure 4 Eight HBM4 devices are shown in an example package, achieving increased total capacity and bandwidth. Source: Eliyan

With 2,048 I/Os operating at 8 Gb/s, such a device can deliver up to 2 TB/s of bandwidth. In a package containing eight HBM4 devices, total memory capacity scales to 512 GB—four times that of the earlier HBM3 example—while aggregate bandwidth exceeds 16 TB/s, a 2.5× increase.
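The same arithmetic as in the HBM3 example, applied to the article’s HBM4e figures:

```python
# HBM4e example device: 16-high stack of 32-Gb layers, 2,048 I/Os at 8 Gb/s
capacity_gb = 16 * 32 / 8                  # 64 GB per device
bandwidth_tbps = 8 * 2048 / 8 / 1000       # ~2.05 TB/s per device
print(8 * capacity_gb)     # 512.0 GB per eight-device package
print(8 * bandwidth_tbps)  # ~16.4 TB/s aggregate
```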

Custom HBM and the role of the base die

As HBM4 adoption accelerates, some system designers are exploring the development of custom HBM solutions optimized for specific applications. A key enabler of this trend is the evolution of the HBM base die.

In earlier HBM generations, the base die was typically manufactured using a DRAM‑optimized process, well suited for capacitor structures but less optimal for high‑speed logic. With HBM4, most suppliers are transitioning to standard advanced logic processes for the base die. This shift aligns more closely with the processes already familiar to SoC designers and opens the door to customization opportunities.

Whether using standard or custom HBM4 devices, these solutions continue to rely on advanced packaging and silicon substrates—such as interposers or bridges—to accommodate the large number of fine‑pitch connections between the memory and the ASIC.

SPHBM4: Bringing HBM‑class bandwidth to organic packaging

Despite its performance advantages, traditional HBM integration requires advanced packaging, which can increase cost and complexity. Many system designers, particularly those focused on volume production and reliability, prefer standard organic substrates. To address this gap, JEDEC has announced that it is nearing completion of a new standard for Standard Package High Bandwidth Memory (SPHBM4).

SPHBM4 devices use the same DRAM core dies as HBM4 and provide equivalent aggregate bandwidth, but they introduce a new interface base die designed for attachment to standard organic substrates. Figure 5 illustrates a side view of an SPHBM4 DRAM mounted directly on an organic package substrate, alongside an ASIC. The ASIC itself may also reside on the organic substrate, or it may remain on advanced packaging such as a silicon bridge for multi‑XPU integration.

Figure 5 A side view of an SPHBM4 DRAM and ASIC connection is shown with the SPHBM4 DRAM attached directly to the organic package substrate. Source: Eliyan

To achieve HBM4‑class throughput with fewer pins, SPHBM4 employs higher interface frequencies and serialization. While HBM4 defines 2,048 data signals, SPHBM4 is expected to use 512 data signals with 4:1 serialization, enabling the relaxed bump pitch required for organic substrates.
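The pin-count trade works out as follows. Note that the per-pin rate below is inferred from the 4:1 ratio against the 8-Gb/s HBM4 figure used earlier; the article does not state a final SPHBM4 per-pin rate:

```python
hbm4_signals = 2048
serdes_ratio = 4
sphbm4_signals = hbm4_signals // serdes_ratio    # 512 data signals
# To preserve aggregate bandwidth, each remaining pin runs faster by the
# same factor (8 Gb/s baseline taken from the HBM4 example; inferred, not
# a published SPHBM4 specification):
sphbm4_rate_gbps = 8 * serdes_ratio              # 32 Gb/s per pin
assert sphbm4_signals * sphbm4_rate_gbps == hbm4_signals * 8
print(sphbm4_signals)  # 512
```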

Because SPHBM4 uses the same DRAM stacks as HBM4, per‑stack capacity remains unchanged. However, organic substrate routing supports longer channel lengths between the SoC and the memory, which can enable new system‑level trade‑offs. In particular, longer routing distances and angled trace routing can allow more memory stacks to be placed around a given die.

Figure 6 illustrates this effect. When HBM devices are mounted on silicon substrates, they must be placed immediately adjacent to the XPU, limiting the number of stacks to two per 25-mm die edge. With SPHBM4 on an organic substrate, three memory devices can be connected along the same edge, increasing both memory capacity and bandwidth by approximately 50%.
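In package terms, that edge-placement change compounds quickly. A rough tally using the device counts from Figures 3 and 6, not a layout rule:

```python
edges = 2 * 2                # two XPU dies, two long memory edges each
silicon_devices = 2 * edges  # 8 stacks on a silicon interposer (Figure 3)
organic_devices = 3 * edges  # 12 stacks on an organic substrate (Figure 6)
print(organic_devices / silicon_devices - 1)  # 0.5 -> ~50% more capacity and bandwidth
```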

Figure 6 This is how 12 SPHBM4 devices in an example package boost capacity and total bandwidth. Source: Eliyan

Even when a silicon substrate is still used beneath the XPU—for example, to support high‑bandwidth XPU‑to‑XPU links—the overall interposer size can be significantly reduced when memory devices are moved to the organic package. This reduction can translate into meaningful benefits in system cost, manufacturability, and test complexity.

Looking ahead

AI workloads continue to push the limits of memory bandwidth, capacity, and packaging technology. JEDEC’s HBM4 standard represents a major step forward in addressing these demands, while the emerging SPHBM4 standard expands the design space by enabling HBM‑class performance on standard organic substrates.

For system architects, these technologies offer new flexibility in balancing performance, cost, and integration complexity. As memory and packaging increasingly shape overall system capability, early consideration of options such as HBM4, custom HBM, and SPHBM4 will be essential to fully unlocking the next generation of AI and HPC performance.

Kevin Donnelly is VP of strategic marketing at Eliyan.



Rethinking machine vision in industrial automation

Thu, 04/09/2026 - 16:00
Applications of event-based vision in industrial automation.

Machine vision has always played a critical role in ensuring safe, efficient, and reliable operation in many industrial settings. However, as vision-enabled machines become more numerous and the type and volume of data they can collect expand, challenges are forcing system makers to look at new approaches to efficiently acquire, process, and utilize visual data.

If we look at the current challenges, they span the spectrum in terms of improving operational efficiency, accuracy, and reliability.

Data overload and processing inefficiencies that limit throughput are major issues as industries move toward more advanced, faster automation, tasking vision systems with capturing and analyzing vast amounts of data. Traditional vision systems often struggle with the sheer volume of images they capture, much of which can be redundant. The requirement now is not just about capturing high-resolution images but doing so in a way that first and foremost accelerates throughput (in part by minimizing irrelevant data) while maximizing the precision and relevance of the information captured.

Real-time processing is becoming increasingly important, especially in environments where machines need to make instantaneous decisions, such as in quality control or defect detection on production lines. This requires more efficient processing methods and data reduction techniques.

High-speed and high-precision demands increase as production lines get faster. High-speed processing, low latency, and the ability to capture minute changes in a scene in real time are critical. Traditional frame-based systems struggle with motion blur and data overload when capturing fast-moving objects. For example, in applications such as high-speed counting, even the slightest delay in image acquisition and processing can lead to errors.

Sustainability is a growing priority, as many industrial systems operate in environments where power efficiency is key. Vision systems need to operate for extended periods without consuming significant amounts of energy. Traditional image-processing systems, especially those that capture entire frames at a fixed rate, can be power-intensive and require sophisticated cooling or energy management.

Complex lighting and environmental conditions are common in many settings, including extreme brightness, low light, or dynamic lighting scenarios. Vision systems need to cope with high-dynamic-range requirements to capture high-quality images without losing detail in either the darkest or brightest areas. Conventional frame-based systems have struggled in such conditions, leading to the need for more adaptable and sensitive vision technologies.

Predictive maintenance and condition monitoring are growing needs. Vision systems must not only react to issues but also help to predict potential problems before they occur. Predictive maintenance requires vision systems that can monitor machine vibrations, detect wear and tear, and identify early signs of equipment failure.

These challenges point to a more fundamental limitation: Traditional frame-based vision was designed for image capture and human viewing, not for machines that must detect, interpret, and react to changes in real time. As industrial systems move toward higher levels of automation and autonomy, vision is becoming a core component of the perception pipeline.

This shift is driving demand for sensing approaches that reduce latency, limit unnecessary data, and enable faster, more reliable decisions across applications such as monitoring, inspection, counting, and control.

Event-based vision addresses these challenges

Event-based vision, inspired by the human eye and brain, is increasingly used in industrial machine vision to address these challenges. By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared with traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.

Event-based vision is particularly suited for industrial automation, IoT, automotive, and edge applications that demand high performance, low power consumption, and operation in challenging lighting conditions. The technology offers significant advantages in speed, power efficiency, dynamic range, and low latency, driving use cases such as high-speed counting, preventive maintenance, and inspection.

From frame-based imaging to event-based perception

In conventional video systems, entire images (i.e., the light intensity at each pixel) are recorded at fixed intervals, known as the frame rate. Standard movies are recorded at 24 fps, with some videos using higher frame rates like 60 fps (16.7-ms intervals). While effective for representing the “real world” on a screen, this method oversamples unchanged parts of an image, especially at high frame rates, while undersampling the most dynamic areas. As a result, critical motion information can be missed between frames.

In contrast, the human eye samples changes up to 1,000× per second without focusing on static backgrounds at such high frequencies. Event-based sensing offers a biologically inspired solution to this under- and oversampling. Unlike traditional cameras, event sensors don’t use a uniform acquisition rate (frame rate) for all pixels. Instead, each pixel defines its sampling points by reacting to changes in the amount of light it detects. Information about contrast changes is encoded in “events”—data packets containing the pixel’s coordinates and the precise time of the event.
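Prophesee's pixels implement this change detection asynchronously in analog circuitry; purely as an illustration, the per-pixel contrast-change rule can be emulated on frame pairs in a few lines of Python (the threshold `theta` and the event-tuple layout here are assumptions for the sketch, not the sensor's actual format):

```python
import math

def detect_events(prev_frame, frame, timestamp, theta=0.2):
    """Emit an event for each pixel whose log-intensity changed by more
    than the contrast threshold theta since the reference frame.
    Each event is (x, y, polarity, timestamp)."""
    events = []
    for y, (prev_row, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(prev_row, row)):
            delta = math.log(c) - math.log(p)  # relative (contrast) change
            if abs(delta) >= theta:
                events.append((x, y, 1 if delta > 0 else -1, timestamp))
    return events

# A static background produces no events; only the two changed pixels fire.
prev = [[100, 100], [100, 100]]
curr = [[100, 150], [60, 100]]
print(detect_events(prev, curr, timestamp=0.001))
# → [(1, 0, 1, 0.001), (0, 1, -1, 0.001)]
```

Note how the unchanged pixels generate no data at all: the sparsity of the output stream falls directly out of the thresholding rule.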

Figure 1: Frame-based vs. event-based sensing—discrete frame sampling vs. continuous motion capture (Source: Prophesee)

Prophesee’s patented event-based sensors, for instance, allow each pixel to activate intelligently based on detected contrast changes. This enables continuous acquisition of essential motion information at the pixel level. The pixels operate asynchronously (unlike traditional CMOS cameras) and at much higher speeds, as they don’t need to wait for a complete frame before reading data.

The advantages of event sensors include high-speed operation (equivalent to 10,000 fps), extremely efficient power consumption (down to the microwatt range), low latency, reduced data processing requirements (10× to 10,000× less than frame-based systems), and high dynamic range (up to 140 dB).

Because only changes are transmitted, event-based data streams are inherently sparse and temporally precise, allowing downstream processing systems—including AI-based processing—to focus on what matters: motion, variation, and anomalies rather than static background information. These attributes make event-based vision systems suited for a wide range of applications and products.

This technology is being commercialized more widely, such as in Prophesee’s Metavision, which has evolved over the past decade to deliver high performance through integrated hardware and software solutions.

Real-time industrial automation with event-based vision

Event-based vision excels in a variety of industrial automation applications. Typical use cases (see Figure 2) range from object tracking and high-speed counting to predictive maintenance and quality control.

Figure 2: Applications of event-based vision in industrial automation (Source: Prophesee)

Safety: object tracking

Event-based vision systems excel at tracking moving objects, leveraging their low data rate and sparse information capabilities. This approach allows for precise object tracking with minimal computational resources, eliminating traditional “blind spots” between frame acquisitions. Additionally, event sensors offer native segmentation, focusing solely on movement and disregarding static backgrounds for improved tracking accuracy and efficiency. Event-based vision enhances safety by monitoring worker and machine interactions in real time, even in complex lighting, without capturing images.

Productivity: high-speed counting

Real-time vision systems powered by event-based sensing enable objects to be counted at unprecedented speeds with high accuracy and minimal motion blur. Sensors independently trigger each pixel as objects pass through the field of view, achieving a throughput of over 1,000 objects per second and an accuracy of more than 99.5%, ensuring rapid and precise counting in high-speed environments.

Predictive maintenance: vibration monitoring

Event-based vision enables continuous, remote vibration monitoring with pixel-level precision. By tracking the temporal evolution of each pixel in the scene, the sensors record each event’s coordinates, polarity of change, and exact timestamp. This data provides valuable insights into vibration patterns across frequencies from 1 Hz to the kilohertz range, aiding in predictive maintenance.
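As an illustrative (and much simplified) sketch of the idea, the dominant frequency at a single pixel can be estimated from its ON-event timestamps, assuming the vibrating edge triggers one ON event per oscillation cycle; production pipelines perform proper per-pixel spectral analysis, so treat this as a toy model:

```python
def estimate_vibration_hz(on_event_times):
    """Estimate the dominant vibration frequency at one pixel from the
    timestamps (seconds) of its ON events, assuming the vibrating edge
    triggers one ON event per oscillation cycle."""
    intervals = [b - a for a, b in zip(on_event_times, on_event_times[1:])]
    return len(intervals) / sum(intervals)  # 1 / mean inter-event interval

# ON events arriving every 10 ms imply a ~100-Hz vibration.
times = [0.000, 0.010, 0.020, 0.030, 0.040]
print(round(estimate_vibration_hz(times)))  # → 100
```

Because each event carries a precise timestamp, no frame rate limits the measurable frequency; the same data supports analysis from 1 Hz up into the kilohertz range noted above.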

Figure 3: Event-based vibration monitoring in industrial systems; frame-based imaging shown for reference (Source: Prophesee)

Quality: particle/object size monitoring

In high-speed production environments, event-based sensing allows for real-time control, counting, and measurement of particle or object sizes on conveyors or channels. The sensors capture instantaneous quality statistics, ensuring accurate process control at speeds of up to 500,000 pixels per second with a counting precision of 99%, optimizing quality assurance in production lines.

Figure 4: High-speed event-based particle counting and size monitoring; frame-based image shown for reference (Source: Prophesee)

Quality control

Event-based vision systems help lower reject rates with real-time feedback and advanced processing down to a 5-µs time resolution and blur-free asynchronous event output. One specific use case is in the automatic detection and classification of the finest imperfections in manufacturing materials—for example, in automotive parts to perform paint defect inspection, scratch detection, and planarity testing (see Figure 5).

Figure 5: Event-based surface contamination and defect detection in industrial production (Source: Prophesee)

As event-based vision continues to evolve and address diverse market needs, it is establishing itself as a new industry standard. Over the past several years, the technology has expanded to serve a wide array of applications.

Thousands of product developers are now adopting event-based vision for sophisticated camera and perception systems, supported by open-source technology and a growing inventors’ community. These advancements are transforming how machines perceive, process, and react to visual information in real time, bringing greater precision, efficiency, and intelligence to industrial automation operations.

Thibaut Willeman is head of business development and go-to-market at Prophesee, where he works on the market development of event-based vision systems for industrial automation, robotics, and defense applications. He previously held strategy and innovation roles at companies such as Boston Consulting Group, working on growth strategy, product strategy, and innovation initiatives for industrial and technology companies. He holds an engineering degree and a master’s degree in innovation and technology management.

The post Rethinking machine vision in industrial automation appeared first on EDN.

Humidifiers and such: How much “smart” is too much?

Thu, 04/09/2026 - 15:00

This engineer’s new humidifier is—he kids you not—Wi-Fi enabled, therefore “smart”. What upsides does such a product deliver? And at what tradeoffs?

Within one of last month’s writeups, I mentioned that my wife and I had recently acquired two DREO 4-liter-capacity ionizing humidifiers. That purchase led to my interest in hygrometers (humidity-measurement devices) such as the TP-Link Tapo T315, which ended up supplanting the bad data I’d previously relied on from my furnaces’ touchscreen thermostats.

Ionizing advancements

The baseline DREO HM311:

relies on front panel buttons for user control purposes. It works well, and I enjoy the dynamic bubbling-water “light show” projected through the center mist tube, particularly visible at night:

The ionizing design approach is also interesting; just make sure to remember to keep ‘em clean:

Its slightly more expensive “smart” sibling, the HM311S, adds Wi-Fi support, thereby making it controllable (and more broadly manageable) via a mated smartphone or other mobile device:

or even, courtesy of its integrated Amazon Alexa and Google Assistant support, your voice:

And the tri-color mist tube (which I’d been calling a “pillar” until I revisited the user manual just now) is a handy visual reference to the current measured humidity level (I’ve yet to see blue):

Light Color | Humidity Level
Yellow | ≤30%
Green | 31-60%
Blue | ≥61%

Binary impermanence

Believe it or not, the HM311S is even the beneficiary of periodic firmware updates, such as the one that I was prompted to install as part of initial out-of-box setup:

Another update, I noticed, was available as I re-accessed the device via my smartphone two-plus months later, just prior to writing these words:

And yes, the humidifier’s status and settings are even accessible over the Internet; note the cellular-only connection in the following screenshot (per the reported 436 hours of use to date, this was an Amazon Warehouse-sourced, apparently previously-used unit, even though it arrived in seemingly brand-new condition):

Weighing pros and cons

Nifty. But also potentially (more than) a bit scary. First off, what’s the realistic benefit (if any) of remote status monitoring from my mobile device? It’s not like I have a robot sitting at home in my absence that can alternatively grab a water pitcher, fill it and transfer its contents to the humidifier if it empties, after all. Not yet, at least:

More generally, is it convenient to turn the humidifier on and off (and raise and lower its output intensity) from the couch, using either the aforementioned smartphone or my voice? Sure. But on the other hand, I could also always use the exercise. And what do I give up in exchange for all this supposed connectivity “goodness”?

For one thing, I’m sharing my WAN IP address, device usage, and ambient analytics data with the manufacturer. For a humble DREO humidifier, maybe this degree of reveal isn’t such a big deal. But what about my Google Nest Wifi mesh network, similarly managed via the cloud? Or my Blink security camera setup, which leverages cloud services not only for monitoring and control purposes but also to store recordings (at least currently; stay tuned for next week’s teardown)?

And what happens if those cloud services, not only from DREO (or its Amazon Alexa partner), Google, or Blink but also from any other similar supplier, get hacked? Sure, it’s annoying to have someone remotely switching your humidifier on and off, outside your control. That time someone used my then-firewall-exposed IPP port to spit pages (and pages and pages) of gibberish out of my laser printer was a bit more annoying. But that’s not what I’m talking about when I say “scary”.

The hackers now know who I am from my account profile and can easily determine my location via an online search using my name. Since they know my WAN IP address, they can now attempt to hack me. They also know my Wi-Fi network credentials, which makes it even easier to get inside my LAN if, since they now know my location, they’re motivated to pull up and park on the street outside. They know my account username and password, which theoretically should be unique to this particular cloud service but—get real—is undoubtedly reused elsewhere. And for a paid cloud service, they also now know my credit card and/or bank account info. Fun times!

Is elementary (especially) convenience worth the potential consequences? If you’re a consumer, it’s a question you should be asking yourself pre-purchase…although you’re likely to be unaware of the possible downsides. Therefore, if you’re a manufacturer, it’s a question you should be asking on behalf of your potential customers during the initial development process…although you’ve also got marketing breathing down your neck for new features, and your competitors may have already unveiled similar capabilities, so you’re also under late-to-market pressure, so…🤷‍♂️

When, if ever, is a product too “smart”? Or taking the thought to the other end of the extremist spectrum, should products be “smart” at all, at least for the mass market? As always, I welcome your thoughts in the comments!

Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.

Related Content

The post Humidifiers and such: How much “smart” is too much? appeared first on EDN.

Top 10 DC/DC converters and modules

Wed, 04/08/2026 - 16:00

DC/DC converters for demanding applications, ranging from industrial, railway systems, and satellites to communications and information technology equipment (ITE), are required to meet stringent requirements. They call for enhanced performance and high reliability, including operating in extreme conditions, while often requiring compact designs.

Over the past year, DC/DC converter manufacturers have focused on providing higher efficiency, offering greater flexibility with more options, saving board space with smaller packages, and delivering more cost-effective solutions. These devices are available in a variety of form factors, including brick types, DIPs, and modules.

Here’s a sampling of DC/DC converters introduced over the past year that deliver improvements in performance and packaging while providing the right-sized features for the application.

Meeting demanding requirements

Many of the latest families of DC/DC converters are designed to operate in demanding and harsh environments, including industrial, railway, ITE, and communications. They also often need to fit into tight spaces.

XP Power recently developed a family of DC/DC converters for space-constrained applications in demanding environments such as industrial, ITE, and communications systems. The BCT40T series of 40-W DC/DC converters offer high power density in a 1 × 1-inch (25.4 × 25.4-mm) package.

The BCT40T series features high efficiency, up to 89% depending on the model, and remote on/off functionality to enable energy savings and safe shutdowns. The series offers a wide 4:1 input voltage range, enabling operation across multiple input voltages. Models are available with nominal 24-VDC inputs (ranging from 9.0 V to 36.0 VDC) and 48-VDC inputs (ranging from 18.0 V to 75.0 VDC).

The devices operate over a wide operating temperature range of −40°C to 105°C and a broader full-load operating temperature range than many alternatives, XP Power said.

The BCT40T offers single regulated outputs ranging from 3.3 V to 24 VDC, as well as dual regulated outputs at ±12 VDC and ±15 VDC. The single-output models offer the flexibility of ±10% output voltage adjustment via an external trim resistor, enabling specific voltage requirements.

Targeting applications such as test and measurement, robotics, process control, analytical instruments, and communications equipment, these DC/DC converters feature an ultra-compact metal package that saves printed-circuit-board (PCB) area and allows more room for customer application circuitry, according to XP Power. In addition, these devices are smaller than many 40-W alternatives, which typically come in larger, 2 × 1-inch (50.8 × 25.4-mm) packages, reducing required board space by 50%.

The series meets worldwide safety approvals, including IEC/UL/EN62368-1 standards, as well as applicable CE and UKCA directives. It also complies with EN55032 Class A/B for conducted and radiated emissions and EN61000-4-x for immunity. The BCT40T series is available now.

XP Power’s BCT40T series (Source: XP Power)

Murata Manufacturing launched a high-performance, 1-W DC/DC converter with reinforced isolation and ultra-low capacitance, targeting communications and analog front-end measurement circuits.

The NXJ1T series addresses the need for robust isolation, delivering high electrical isolation, noise immunity, and thermal reliability for industrial, energy, and medical applications with 4.2-kVDC isolation (Hi Pot Test) and compliance with UL62368 safety standards.

The NXJ1T series, housed in a compact, 10.55 × 13.70 × 4.04-mm footprint, is designed for safety and durability in demanding environments. It features an unregulated, 1-W 5-V input to 5-V/200-mA output design, which is suited for embedded systems.

Each device delivers reinforced insulation to 200 Vrms and basic insulation to 250 Vrms. This adds a layer of protection in high-voltage environments. The undervoltage lockout (UVLO) functionality enhances operational stability, which prevents erratic behavior under fluctuating power conditions, Murata said.

These devices can also be used in medical equipment, where low leakage current is critical for patient-connected applications. They feature ultra-low isolation capacitance, which helps minimize unwanted leakage, supporting compliance with stringent safety standards such as IEC 60601-1 when used within a certified system, the company said.

The DC/DC converters also leverage proprietary molding technology, providing high ingress protection against dust and particulates in harsh industrial environments and extreme temperatures. The device has successfully undergone 1,000 temperature cycles between −40°C and 125°C, demonstrating its ability to withstand the highest levels of thermal stress, Murata said.

The series also uses Murata’s proprietary block-coil transformer technology, providing high isolation and low leakage current, and facilitates lower switching frequencies (500 kHz to 2 MHz) and higher efficiencies of approximately 80%.

The result is exceptional common-mode transient immunity and significantly lower isolation capacitance, according to Murata, making it suited for high-performance power isolation in electrically noisy environments.

Recom GmbH developed a 20-W DC/DC converter in a compact, 1.6 × 1 × 0.4-inch (40.6 × 25.4 × 10.2-mm) package, calling it a new level of high efficiency in DC/DC performance. The RPA20-FR series, targeting rail applications, delivers 20 W over its full 36-VDC to 160-VDC input range (200-VDC peak for 1 second) from −40°C to 70°C, and up to 105°C with derating.

The series offers fully regulated, low-noise, and protected single outputs (5 V, 5.1 V, 12 V, 15 V, and 24 VDC), trimmable by +20%/−10% minimum, with ±5-V, ±12-V and ±15-VDC options available. The devices feature remote on/off control with positive or negative logic, UVLO is included, and no minimum load is required.

The parts are designed specifically for rolling stock applications with nominal input voltages of 48 V, 72 V, or 110 VDC. They are EN 45545-2– and EN 50155–compliant and meet UL/IEC/EN 62368-1 for audio/video and IT applications. Full 3-kVAC/1-minute reinforced isolation is provided, and the parts comply with EMC “Class A” levels as well as rail EMC standard EN 50121-3-2. A separate protection module, RSP150-168, is available to protect against surges according to RIA12 and NF F01-51 standards.

The RPA20-FR series meets environmental standards required for rail applications, particularly EN 45545-2 for fire protection, EN 60068-2-1 for dry and damp heat, and EN 61373 for shock and vibration. Mean time between failure is rated over 1.5 Mhrs at 25°C according to MIL-HDBK-217F GB.

Cincon Electronics Co. Ltd. recently launched the EC3AW8 and EC4AW8 series, delivering 3 W and 6 W of regulated power, respectively, tailored for demanding industrial environments. Applications include instruments, industrial automation and control systems, telecom and data communication equipment, test and measurement, IPC and embedded systems, and IT systems.

The EC3AW8 and EC4AW8 DC/DC converters feature an ultra-wide 8:1 input voltage range. They are available with single-output voltages of 3.3, 5, 12, or 15 VDC and dual outputs of ±5, ±12, or ±15 VDC, and they offer an optional positive remote on/off control for ease of system integration.

With an ultra-wide input range from 9 to 75 VDC, the EC3AW8 and EC4AW8 series are suited for industrial and IT power systems such as 12 V, 24 V, and 48 V. They deliver high efficiency up to 87% and ensure reliable performance under harsh conditions. The operating temperature range is −40°C to 105°C (with de-rating), and the maximum case temperature is 115°C.

Other features include very low no-load input current (7 mA max. for 3 W; 8 mA max. for 6 W), reducing power consumption in standby mode, and a range of protection including input UVLO, output overvoltage protection, overcurrent protection, and continuous short-circuit protection.

These converters also meet key safety and electromagnetic-interference (EMI) standards, including EN 55032 Class A without an external filter, simplifying design and integration for space-constrained applications, Cincon said.

They are also compliant with MIL-STD-810F for shock and vibration and support operating altitudes up to 5,000 meters. They meet IEC/UL/EN 62368-1 safety standards and provide 3,000-VDC input-to-output isolation.

These DC/DC converters are housed in a standard industrial DIP-24 package measuring 1.25 × 0.8 × 0.4 inches (31.8 × 20.3 × 10.2 mm).

Space and satellites

Micross Components Inc. recently introduced a series of Class H+-screened DC/DC converters for harsh space-based applications. The AFLS28XX Series of DC/DC converters delivers a radiation-tolerant power conversion solution for low-Earth-orbit (LEO) satellite constellations, new space missions, launch vehicles, and other space-based systems.

The AFLS series of 28-V, 120-W DC/DC converters builds on the AFL series, with updated technology and design enhancements. These converters meet MIL-PRF-38534 Class H screening requirements and include additional tests such as PIND and radiography to support reliability in LEO and new space environments. The AFLS series offers radiation specifications of 50-krad (Si) TID and 60-MeV·cm2/mg SEE.

These devices are tailored for space missions requiring radiation tolerance at a lower cost than traditional space-grade-qualified power supplies, Micross said.

The hermetically packaged DC/DC converters are available in single- and dual-output voltage configurations ranging from 5 V to 28 V. They feature proprietary magnetic pulse feedback for optimized dynamic line and load regulation and parallel operation for outputs above 120 W, with synchronization capability to a system clock in the 525-kHz range.

Other features include internal current sharing for balanced load distribution and high power density with no de-rating across the full operating temperature range. In addition, they meet reduced size, weight, and power (SWaP) requirements by eliminating shielding requirements and delivering lower power consumption.

These parts are currently under test, and engineering samples are available within four to six weeks ARO.

Micross’s AFLS series (Source: Micross Components Inc.)

Also targeting space applications is a series of off-the-shelf, 15-W DC/DC converters from Microchip Technology Inc. This space-grade, non-hybrid DC/DC isolated power converter with a companion EMI filter operates from a 28-V satellite bus in harsh environments.

The SA15-28 radiation-hardened DC/DC power converter and its companion SF100-28 EMI filter are designed to meet MIL-STD-461 specifications. The SA15-28 and SF100-28 are fully compatible with Microchip’s existing SA50 series of power converters and SF200 filter.

The SA15-28 operates across a wide temperature range from −55°C to 125°C and offers radiation tolerance up to 100 krad TID. It is available with 5-V triple outputs that can be used with point-of-load converters and low-dropout linear regulators to power FPGAs and microprocessors. The output voltage combinations can be customized.

The SA15-28 weighs 60 grams and is approximately 1.68 in.3 to meet SWaP requirements. Microchip provides comprehensive analysis and test reports including worst-case analysis, electrical stress analysis, and reliability analysis. The SA15-28 DC/DC power converter and SF100-28 external EMI filter are now available.

Microchip’s SA15-28 DC/DC converter (Source: Microchip Technology Inc.)

Brick converters

Advanced Energy Industries Inc. recently added two quarter-brick modules to its ultra-efficient, non-isolated bus converter family for 48-V power conversion. These DC/DC converters target advanced information and communication technology equipment including AI servers, compute and networking, and industrial applications such as robotics and test and measurement.

The Advanced Energy Artesyn NDQ1300 1,300-W and NDQ1600 1,600-W quarter-brick modules operate with peak efficiencies up to 98%, making them suited for high-performance applications. Each module can convert a 48-V input into a fully regulated 12-V output for non-isolated, low-voltage, high-current power stages as well as PCIe slots and memory devices.

The NDQ devices feature a flat efficiency curve that ensures that the modules deliver optimized power conversion across a wide load range. They also feature an integrated PMBus interface to support flexible digital control and monitoring as well as current-share and remote-sensing options to enable the connection of multiple power supplies in parallel, supporting higher load current or redundancy.

The NDQ modules use an advanced baseplate for better thermal management and heat-sink integration. They also benefit from an inherently safe, transformer-based topology that is resilient to transient loads and makes designing applications for inrush current control on startup easier, the company said.

Advanced Energy’s NDQ1300 quarter-brick module (Source: Advanced Energy Industries Inc.)

Another new converter in a brick format is Bel Fuse’s compact, 100-W DC/DC converter for rugged applications such as industrial automation, railway systems, telecom infrastructure, and electric vehicles/e-mobility. The PRA100 Series is housed in a standard 1/8th brick format, addressing the increased need for higher power density. The devices provide enhanced thermal performance, wide input flexibility, and an environmentally robust design.

The PRA100 operates across a 9-VDC to 74-VDC input range and delivers up to 54-V output with 3,000-VDC isolation. The operating temperature is −40°C to 105°C. All models are fully compliant with EN 62368-1 and carry CE, UKCA, and UL/cUL certifications. It is also compliant with EN 50155, making it well-suited for railway applications. The series offers optional baseplate cooling and negative logic features to extend its versatility in harsh conditions and EV platforms, Bel Fuse said.

Bel Fuse’s PRA100 Series (Source: Bel Fuse)

DC/DC converter modules

TDK Corp. developed a series of its microPOL (μPOL) power modules with full telemetry (voltage, current, and temperature). The FS160* series μPOL DC/DC converters deliver high power density in the smallest package sizes.

All FS160* μPOL modules measure 3.3 × 3.3 × 1.35 mm, making it easier to place them near complex ICs such as ASICs, FPGAs, and SoCs. Full telemetry is accessible via an I2C interface. The modules operate across a broad junction temperature range from −40°C to 125°C.

There are several versions of each of the 3-A parts (the FS1603 series), 4-A parts (the FS1604 series), and 6-A parts (the FS1606 series). The FS line also includes models at 12 A (the FS1412) and 25 A (the FS1525). The selection of DC/DC converter modules ranging from 3 A to 200 A (if eight FS1525s are connected in parallel) covers a wide range of applications, including big data, machine learning, AI, 5G cells, IoT, and enterprise computing.

TDK calls the module family’s configuration innovative: a high-performance controller, drivers, MOSFETs, and logic core are integrated using semiconductor-embedded-in-substrate packaging, which eliminates wire bonds and enhances thermal performance. The modules’ inductor and passive components are also integrated into the chip-embedded package to minimize parasitic inductance, improving efficiency. Boot and Vcc capacitors are incorporated into the module as well.

The FS160* series DC/DC converters deliver 1 W/mm3 in modules that are roughly half the size of other products in the same class, according to the company. In addition, TDK said the modules are so thermally effective that they require no airflow at output levels of 15 W to 30 W in ambient temperatures up to 100°C.

TDK has created multiple design tools, including tools specific to FPGAs from each of the major FPGA suppliers. Additional design tools for the FS160* series include SPICE simulator designs on QSPICE.

Evaluation boards are available, one each for modules at 3 A, 4 A, and 6 A. Fast starter designs for schematic and PCB layout are available at Ultra Librarian.

TDK’s FS160 μPOL DC/DC converters (Source: TDK Corp.)

Aimed at the industry’s shift to high-performance, 48-V systems, Vicor Corp. launched its 48-V to 12-V DCM DC/DC converter modules last year. The DCM3717 and DCM3735 DC/DC power modules, offering up to 2 kW of output power, support the shift to 48-V power delivery networks (PDNs) that provide greater power system efficiency, power density, and lower weight than 12-V-based PDNs in a variety of applications, including communications, computing, automotive, and industrial.

The DCM products are non-isolated, regulated DC/DC converters, operating from a 40-V to 60-V input to generate a regulated output adjustable from 10 V to 12.5 V. The DCM3717 family is available in two power ranges, 750 W and 1 kW, and the DCM3735 is a 2-kW device. These DCM products can be paralleled with up to four modules to scale system power levels.

Claiming industry-leading power density at 5 kW/in.3, these high-density power modules enable power system designers to deploy 48-V PDNs for legacy 12-V loads, delivering size, weight, and efficiency benefits. These devices deliver 96% efficiency in a low-profile, surface-mount package, yielding a 6× reduction in size.

The smaller module is the DCM3717, with a wide input range of 40–60 V (48-V nominal) and an output of 10–12.5 V (12-V nominal). It comes with two power options, 750 W and 1 kW, and 96.5% efficiency. The module is housed in a compact, 36.7 × 17.3 × 5.2-mm footprint.

In a side-by-side comparison with a top competing product, the DCM3717 is less than half the size, with 20% higher output power and 7× higher power density, according to the company.

The larger device, the DCM3735, offers the same wide input range of 40–60 V (48 V nominal) and output of 10–12.5 V (12 V nominal). The power option is 2 kW with 96.4% efficiency. The module is housed in a compact, 36.7 × 35.4 × 5.2-mm footprint.

Vicor’s DCM3717 and DCM3735 DC/DC power modules (Source: Vicor Corp.)

The post Top 10 DC/DC converters and modules appeared first on EDN.

Antilog PWM and 2-way current mirror make buffered triangle and square waves

Wed, 04/08/2026 - 15:00

It can be fun (and productive!) to transplant a previous Design Idea into a new context, and even more so when modifying and mixing multiple ideas. Here we’ll combine and commingle the following:

  1. 5 decade antilogarithmic PWM current source
  2. A two-way mirror—current mirror that is
  3. Dual RRIO op amp makes buffered and adjustable triangles and square waves

This yields the buffered triangle- and square-wave output oscillator shown in Figure 1. It’s linear-in-log tunable from 10 Hz to 1 MHz and controlled with 8-bit PWM.

Figure 1 Incoming 8-bit antilog PWM interface (U1, U2, A1, Q1) generates 80 nA to 8 mA current to control 10 Hz to 1 MHz oscillator (Q2, Q3, Q4, A2, A3). The asterisked parts are precision (metal film) resistors and (C0G) capacitors.

Wow the engineering world with your unique design: Design Ideas Submission Guide

We’ll now proceed to vivisect it.

A single MCU 500-kHz (2-μs-per-count) PWM output bit controls the antilog current source. It’s isolated in blue in Figure 2 and works as explained in reference 1 above.

Figure 2 The U1 U2 switching circuit periodically charges precision timing cap Ct to 1.24 V, then exponentially discharges it at (Rt + R1)Ct = 43.4 μs time-constant, storing the result on sample and hold Csh.

The final sample-and-hold antilog Csh voltage = 1.24 V × exp(−Tpwm/43.4 μs), which spans 1.184 V to 11.8 μV as Tpwm goes from 2 to 500 μs (1 to 250 LSB), for a Q1 five-decade collector current range of Vcsh/R4 = 8 mA to 80 nA. R1 provides for time-constant fine-tuning.
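As a sanity check on those numbers, here’s a minimal Python sketch of the exponential-discharge relationship, using the constants quoted above (an illustration only, not the author’s code):

```python
import math

TAU_US = 43.4   # (Rt + R1)*Ct discharge time constant, in microseconds
V_REF = 1.24    # precharge voltage on Ct, in volts

def vcsh(tpwm_us):
    """Sampled antilog voltage for a PWM on-time of tpwm_us microseconds."""
    return V_REF * math.exp(-tpwm_us / TAU_US)

v_at_min = vcsh(2)    # 1 LSB of the 500-kHz, 8-bit PWM (2 us per count)
v_at_max = vcsh(500)  # 250 LSB
print(f"{v_at_min:.3f} V down to {v_at_max * 1e6:.1f} uV")
print(f"range: {v_at_min / v_at_max:.0f}:1")  # roughly five decades
```

The first value reproduces the 1.184 V figure; the second lands near the quoted 11.8 μV (within rounding of the time constant).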

Steering and periodic inversion/reflection of the 80 nA to 8 mA Q1 collector current into integrator A2 is the job of the Q2, Q3, and Q4 two-way current mirror. It’s covered in reference 2 and shown in blue in Figure 3.

Figure 3 A two-way current mirror (Q2, Q3) ramps the A2 C1 integrator up/down at dV/dt rates ranging from 8E1 to 8E6 volts per second (V/s). Q4 reduces the loading of A3 at high current/frequency while acting as D1 of reference 2.

Comparator A3 switches the current mirror polarity when A2’s output reaches the 0.5 V and 4.5 V limits, similar to the theory of operation of reference 3; the limits are determined here by the resistor networks shown in Figure 4.

Figure 4 R5 and R6 set the comparator’s 0.5 V/4.5 V switching points and thus the triangle wave’s 4 Vpp amplitude.
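Putting Figures 3 and 4 together, the output frequency follows from the integrator slope and the 4-V comparator window. A quick sketch (C1 = 1 nF is my inference from the quoted 8E1-to-8E6 V/s slope range, not a value stated in the text):

```python
def tri_freq(i_amps, c1_farads=1e-9, v_span=4.0):
    """Triangle frequency: A2 ramps C1 at I/C1 volts per second and must
    traverse the 0.5 V..4.5 V window twice per cycle."""
    slope = i_amps / c1_farads     # dV/dt in V/s
    return slope / (2 * v_span)    # cycles per second

print(tri_freq(80e-9))  # minimum programmed current: 10 Hz
print(tri_freq(8e-3))   # maximum programmed current: 1 MHz
```

This reproduces the 10 Hz-to-1 MHz tuning range stated earlier.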

The output frequency versus PWM setting (which controls the current sink) is shown in Figure 5.

Figure 5 Frequency versus PWM setting: linear (black) vs log (red).

And that’s the name of that (antilogarithmic) tun(ing).

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.

Related Content

  1. 5 decade antilogarithmic PWM current source
  2. A two-way mirror—current mirror that is
  3. Dual RRIO op amp makes buffered and adjustable triangles and square waves

The post Antilog PWM and 2-way current mirror make buffered triangle and square waves appeared first on EDN.

IMUs demystified: The hidden sense of machines

Wed, 04/08/2026 - 12:49

Motion is invisible until something makes it measurable. That is where inertial measurement units (IMUs) step in—the silent sensors that give machines their hidden sense of balance, orientation, and trajectory. From smartphones that know when you have rotated the screen, to drones that hold steady against the wind, IMUs translate raw acceleration and angular velocity into actionable awareness.

In this installment of Fun with Fundamentals, we will peel back the layers of these compact marvels, showing how they evolved from bulky gyroscopes into today’s precision-packed silicon companions.

The silent navigators: IMUs

An IMU is a compact, high-precision device that captures how an object moves and orients itself in space. Whether steering rockets into orbit, stabilizing drones overhead, or enabling smartphones to guide us through crowded streets, IMUs are the unseen systems that make modern navigation possible.

At the heart of an IMU are sensors that detect linear acceleration with accelerometers and rotational velocity with gyroscopes. Many designs also incorporate a magnetometer to provide heading information. A typical configuration combines a 3-axis accelerometer and a 3-axis gyroscope, forming a 6-axis IMU. When a 3-axis magnetometer is added, the system becomes a 9-axis IMU. Together, these sensors deliver measurements of specific force, angular rate, and surrounding magnetic fields—producing a complete dataset for motion and orientation tracking.

The accelerometers, gyroscopes, and—when included—magnetometers inside an IMU are collectively referred to as inertial sensors. These components form the foundation of inertial navigation, working together to capture motion and orientation data without relying on external signals. By fusing their outputs, engineers can derive precise information about how a device moves through space, even in environments where GPS or other external references are unavailable.

So, accelerometers measure linear acceleration, capturing how quickly an object speeds up or slows down. Gyroscopes sense angular velocity, revealing the rate and direction of rotation. Magnetometers, when included, detect magnetic fields and provide heading information relative to Earth’s magnetic north.

It’s worth noting that engineers still deploy both 6-axis and 9-axis IMUs, depending on the demands of the application. A 6-axis unit, built from accelerometers and gyroscopes, is often sufficient for tasks like stabilizing drones, balancing robots, or monitoring automotive motion, where relative movement and rotation are the primary concerns.

In contrast, a 9-axis IMU adds a magnetometer, giving it the ability to resolve absolute heading. This makes it the preferred choice in smartphones, wearables, and advanced navigation systems, where orientation relative to Earth’s magnetic field is critical. In practice, the simpler 6-axis design remains a cost-effective workhorse, while the 9-axis variant dominates in consumer electronics and navigation-heavy applications.

Figure 1 A vintage mechanical inertial navigation system (INS) component achieves autonomous navigation by integrating an inertial measurement unit with a computational unit. Source: Author’s archives

Simply put, a typical IMU places one accelerometer and one gyroscope along each of the three principal axes, ensuring motion and rotation are captured in all directions. In some designs, a magnetometer is also added per axis to provide heading information, but this is not always the case—many IMUs operate effectively without it.

Beyond these core sensors, certain IMUs incorporate auxiliary elements such as temperature monitors, since accelerometers and gyroscopes are prone to thermal fluctuations that can compromise accuracy. By recording temperature data, the system compensates for thermal drift, stabilizing sensor outputs and improving overall reliability.
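That compensation is often a simple calibration-table lookup or, at its simplest, a first-order correction. A hedged sketch with hypothetical, made-up calibration numbers (real devices use factory-characterized coefficients, frequently higher-order):

```python
def compensate(raw, temp_c, k_drift, t_ref=25.0):
    """First-order thermal-drift correction: remove the bias the sensor
    accumulates per degree away from its calibration temperature."""
    return raw - k_drift * (temp_c - t_ref)

# Hypothetical gyro reading of 0.30 deg/s at 45 degC, with a calibrated
# drift coefficient of 0.01 deg/s per degC:
print(f"{compensate(0.30, 45.0, 0.01):.2f} deg/s")  # prints 0.10 deg/s
```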

Evolution and types of IMUs

From the gimbaled IMUs of the aerospace pioneers to today’s miniaturized MEMS-based devices, IMUs have undergone a remarkable transformation. Early gimbaled systems relied on mechanically stabilized platforms, bulky yet precise, before giving way to strapdown IMUs that fixed sensors directly to the vehicle body, reducing size and complexity.

With the rise of microelectromechanical systems (MEMS), silicon MEMS IMUs became the standard for consumer electronics, robotics, and drones, prized for their low cost, compact size, and efficiency. For tactical and industrial applications, Quartz MEMS IMUs emerged, offering greater stability and resilience under temperature and vibration compared to silicon designs.

At the high-end, ring laser gyroscope (RLG) IMUs and fiber-optic gyroscope (FOG) IMUs represent the pinnacle of precision, both exploiting the Sagnac Effect to measure rotation. RLGs use laser beams circulating in a closed cavity, while FOGs rely on long coils of optical fiber—an approach that reduces maintenance needs and improves durability while delivering comparable accuracy.

Today, engineers select from this spectrum—silicon MEMS for affordability and portability, quartz MEMS for tactical reliability, and RLG/FOG systems for uncompromising accuracy—depending on mission requirements.

Figure 2 The Motus ultra‑high‑accuracy MEMS IMU enables precision in autonomous system applications. Source: Advanced Navigation

As a side note, it’s worth mentioning that while IMUs deliver raw measurements of acceleration and angular velocity, an attitude and heading reference system (AHRS) builds on this foundation by applying sensor fusion algorithms to provide stabilized orientation outputs: pitch, roll, yaw, and heading. In practice, AHRS units are IMUs with embedded processing, making them more directly usable in aircraft, marine, and robotic platforms where orientation data is required in real time.
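A complementary filter is the simplest form of the sensor fusion an AHRS performs. This sketch (a textbook technique, not any vendor’s algorithm) fuses one gyro axis with the accelerometer’s gravity vector to estimate pitch:

```python
import math

def fuse_pitch(pitch_deg, gyro_dps, ax_g, az_g, dt_s, alpha=0.98):
    """One complementary-filter step: trust the gyro over short intervals
    (smooth but drifts) and the accelerometer over long ones (noisy but
    drift-free while the device isn't accelerating)."""
    gyro_est = pitch_deg + gyro_dps * dt_s             # integrate angular rate
    accel_est = math.degrees(math.atan2(ax_g, az_g))   # tilt from gravity
    return alpha * gyro_est + (1 - alpha) * accel_est

# Device held level and still: both estimates agree at 0 degrees.
print(fuse_pitch(0.0, 0.0, 0.0, 1.0, 0.01))  # 0.0
```

In a 9-axis AHRS, the magnetometer enters the fusion the same way to pin down yaw/heading.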

Advanced IMU categories

Beyond the broad spectrum of MEMS and optical gyroscope technologies, IMUs can also be classified by their functional purpose. A north-seeking IMU is designed to determine true north without relying on external references such as the global navigation satellite system (GNSS) or magnetic compasses.

By exploiting the Earth’s rotation and combining precise gyroscope measurements, these systems achieve sub-degree heading accuracy, making them invaluable in marine navigation, underground operations, and defense applications where absolute orientation is critical.

In contrast, a navigation IMU focuses on tracking motion and orientation over time. It provides raw acceleration and angular velocity data that, when processed within an inertial navigation system (INS), yields position, velocity, and displacement. Navigation IMUs are widely deployed in aerospace, robotics, and consumer electronics, where continuous motion tracking and drift management are more important than absolute north-finding.

Together, these advanced categories highlight how IMUs are not only differentiated by sensor technology—silicon MEMS, quartz MEMS, RLG, or FOG—but also by the specific role they play in navigation systems, from heading determination to full trajectory tracking.

Practical pointers for engineering minds

IMUs are no longer the nightmares they once seemed. Thanks to today’s accessible sensor modules, open-source libraries, and low-cost development boards, even a novice maker can experiment with inertial measurement units without needing aerospace-grade expertise. What was once the domain of defense labs and high-end avionics has now become approachable for hobbyists, students, and engineers alike, making hands-on exploration of motion sensing and navigation both practical and affordable.

First off, note that modern inertial modules often advertise “IMU, AHRS, and INS options” because the same hardware platform can deliver different levels of functionality depending on firmware and processing. At the most basic level, the unit acts as an IMU, outputting raw accelerometer and gyroscope data. With onboard sensor-fusion algorithms, it becomes an AHRS, providing stabilized orientation in pitch, roll, yaw, and heading.

When paired with a computational unit and often GNSS input, the same device scales up to a full INS, achieving autonomous navigation with position, velocity, and orientation. This tiered approach lets engineers choose the level of integration that matches their application, from hobbyist UAVs to aerospace systems.

Modern IMUs give engineers and makers practical choices across performance levels. High-end devices like Analog Devices’ ADIS16575/ADIS16576/ADIS16577 deliver factory calibration, low bias drift, and digital outputs for precision robotics, autonomous systems, and aerospace projects.

At the same time, compact modules such as Murata’s SCH16T-K01 integrate gyro and accelerometer sensing for embedded applications, wearables, and IoT nodes. Together, these platforms show how inertial technology now scales from aerospace-grade accuracy down to plug-and-play modules, offering practical options for projects at every level.

Figure 3 The SCH16T‑K01 module combines a high‑performance 3‑axis angular rate sensor and 3‑axis accelerometer, delivering precise motion tracking for embedded, wearable, and IoT applications. Source: Murata

Besides, makers and hobbyists do not need to wrestle with bare chips anymore—prewired IMU breakout boards are widely available and come with headers and libraries, making motion sensing experiments plug-and-play. For newer designs, boards built around ST’s LSM6DSO/LSM6DSOX deliver reliable performance in a maker-friendly format, ensuring parts that are safe for ongoing projects.

Figure 4 Today’s prewired cards like the LSM6DSOX module—and other readily available IMU boards—let makers explore motion sensing with ease and enable reliable integration into advanced embedded projects. Source: Author

IMUs in practice and everyday life

Well, we are not balanced yet, but we have touched some fundamental and practical points in a rather random way. Still, the journey through IMUs shows how these sensors are not just abstract components for engineers; they are part of our everyday lives. From the stabilizing gimbals that keep cameras steady, to the motion tracking inside wearables, gaming controllers, and even automotive systems, IMUs quietly enable the seamless experiences we take for granted.

Figure 5 Today’s IMUs act as the unseen hand across entertainment, healthcare, and navigation—guiding cameras, gimbals, ships, trains, satellites, and aerospace systems, while also enabling makers to explore motion sensing with ease and integrate it reliably into advanced projects. Source: Author

The call now is to explore further—experiment with modules, build small projects, and see firsthand how this complex yet approachable topic can transform ideas into motion-aware innovations.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post IMUs demystified: The hidden sense of machines appeared first on EDN.

Metasurface enables supersensitive, superfast thermal-based photodetector

Tue, 04/07/2026 - 15:53

I’ve always been interested in sensors and their related electronics. These devices are the interface between the real, physical world and the telemechanical systems that make use of their outputs. It’s also fascinating how many basic sensor approaches have been devised and enhanced for basic parameters such as temperature, pressure, distance, light intensity, and more.

Now we are entering a new phase where advances in materials—especially metamaterials, often aided by lasers—are creating breakthroughs in sensors that could not be envisioned or implemented just a few years ago.

In short, a metamaterial is an engineered structure composed of subwavelength-scale elements that precisely control electromagnetic waves, such as light or microwaves, at an interface. The metasurface is its two-dimensional counterpart: an ultra-thin resonant layer with special physical properties.

It’s typically composed of subwavelength structures (meta-elements) arranged in a 2D plane, enabling control over the propagation and scattering of electromagnetic waves at subwavelength scale by adjusting the phase, amplitude, or polarization of the incident waves.

A good example of such an innovation is seen in the thermally based photon-detector project at Duke University, where researchers have demonstrated the fastest pyroelectric photodetector to date. It works by absorbing heat generated by incoming light and can capture light from wavelengths across the electromagnetic spectrum. The ultrathin device requires no external power, operates at room temperature, and can be readily integrated into on-chip applications.

Conventional semiconductor photodetectors work by initiating electron flow when struck by visible light. In contrast, the pyroelectric detector approach (also called a thermal detector) generates electric signals when it’s heated up after absorbing light.

Pyroelectric detectors have been in use for decades due to their wideband characteristic, unlike semiconductor sensors that tend to be narrowband devices (which is not necessarily a bad thing, of course). However, these pyroelectric devices are not as responsive as solid-state devices, since they are relatively bulky and have larger thermal mass.

Although using a thermal scheme is normally slow compared to using photons to stimulate electrical current, it does not have to be that way. In the Duke approach, the metasurface-enabled pyroelectric photodetectors are fabricated by layering a well-established nanogap cavity metasurface structure on top of a pyroelectric thin film (Figure 1).

Figure 1 Schematic representation of metasurface-enabled photodetectors illustrating key dimensions (a) with SEM image of the metasurface absorber (b). The red area represents the metasurface array. Finite element simulations of a single plasmonic nanostructure showing a cross-section of the pyroelectric layer 30 ps after resonant excitation of the metasurface (c).

The metallic metasurface consists of an array of nanoscale silver square prisms (90 nm × 90 nm × 35 nm) separated from a gold film by a thin (10 nm) dielectric layer of Al2O3 (aluminum oxide or alumina).

When light strikes the surface of a nanocube, it excites the silver’s electrons, trapping the light’s energy through a phenomenon known as plasmonics (the interaction between electromagnetic radiation such as light and conduction electrons at metallic-dielectric interfaces), but only at a specific frequency controlled by the nanocubes’ sizes and spacings.

In the latest iteration, the light-absorbing metasurface is circular rather than rectangular to maximize its exposure while minimizing the distance the signal must travel. This phenomenon is so efficient at trapping light and absorbing its energy that it only requires an extremely thin layer of pyroelectric material beneath it to create an electric signal.

Measuring the performance is another challenge, so the researchers devised an innovative arrangement with two distributed-feedback lasers that “brightened” when their frequencies came close to the device’s operating speed.

The nearly perfect, spectrally selective absorption of the metasurface, which initiates the photodetector response, is shown by white light reflectivity spectra (Figure 2).

Figure 2 White light reflectance spectrum of a detector is shown with a 1.3 × 10−3 mm2 active area of 40 μm diameter (a). Photocurrent responsivity spectra of the detector shown in (a) measured upon pulsed 100 nW light excitation as compared to that of a detector in which a gold film rather than a metasurface layer acts as an absorber (b). Photocurrent measured for the device presented in a) and b) upon pulsed 783 nm excitation at the indicated power with the beam size maintained to consistently have a diameter 5 μm smaller than that of the device (c).

The gold mirror alone efficiently reflects near-infrared light, while the metasurface exhibits a relative decrease (>95%) in reflectivity centered at 790 nm. The resonance wavelength is determined by the size of the Ag nanostructures and the thickness of the Al2O3 dielectric layer, which opens the possibility of photodetectors that are spectrally selective across the visible and infrared portions of the spectrum.

The team found that their new thermal photodetector operates at a record-breaking 3-dB bandwidth of 2.8 GHz, which corresponds to a rise time of just 125 picoseconds. Also important, these ultrafast speeds were achieved while maintaining competitive responsivities and a noise-equivalent power (NEP) as low as 96 pW/√Hz.
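Those two numbers are consistent with the standard single-pole rule of thumb relating rise time to bandwidth, t_r ≈ 0.35/f_3dB:

```python
F_3DB_HZ = 2.8e9                 # reported 3-dB bandwidth
rise_time_s = 0.35 / F_3DB_HZ    # classic first-order-system estimate
print(f"{rise_time_s * 1e12:.0f} ps")  # 125 ps, matching the reported value
```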

This is just one of the many innovative applications in the RF and optical worlds which leverage metamaterials and metasurfaces. Among many other uses, these materials enable new ways to manage and channel electromagnetic energy at these wavelengths, often to create sensors of extraordinary accuracy and precision.

The full details of this work by the Duke University team are in their paper “Metasurface-Enhanced Thermal Photodetector Operating at Gigahertz Frequencies” published in Advanced Functional Materials. While that posted paper is behind a paywall, the Duke team has thoughtfully posted an open-source version at their departmental website here.

Have you seen or used any sensors based on metamaterials or metasurfaces? What sensing challenges would you tackle if you had the needed meta resources?

Related Content

The post Metasurface enables supersensitive, superfast thermal-based photodetector appeared first on EDN.

A convenient desktop-accessible calculator of E-series component values

Tue, 04/07/2026 - 15:00

As explained in the E series Wikipedia page: “The E series is a system of preferred numbers (also called preferred values) derived for use in electronic components. It consists of the E3, E6, E12, E24, E48, E96, and E192 series, where the number after the ‘E’ designates the quantity of logarithmic value ‘steps’ per decade. Although it is theoretically possible to produce components of any value, in practice, the need for inventory simplification has led the industry to settle on the E series for resistors, capacitors, inductors, and zener diodes.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

It’s convenient at times to have a desktop calculator that accepts a computed value x and returns the standard, commercially available value closest to it for a specified E series. Here, “closest” means that candidate value for which the absolute value of the computed error (candidate/x – 1) is the smallest.
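That selection rule is easy to sketch in Python. (The lower-series base values below are the standard published ones; higher series such as E96 use three significant figures and slightly different rounding, so treat this as an illustration rather than a full implementation of the author’s application.)

```python
import math

# Standard base values for the lower E series.
E_SERIES = {
    3:  [1.0, 2.2, 4.7],
    6:  [1.0, 1.5, 2.2, 3.3, 4.7, 6.8],
    12: [1.0, 1.2, 1.5, 1.8, 2.2, 2.7, 3.3, 3.9, 4.7, 5.6, 6.8, 8.2],
    24: [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
         3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1],
}

def nearest(x, series=24):
    """Return (value, error) for the E-series value minimizing |value/x - 1|."""
    decade = 10 ** math.floor(math.log10(x))
    # Scan the adjacent decades too, so a value like 9.6 can round up to 10.
    cands = [b * decade * m for b in E_SERIES[series] for m in (0.1, 1, 10)]
    best = min(cands, key=lambda c: abs(c / x - 1))
    return best, best / x - 1

print(nearest(56, 3))  # 47, with about -16% error, as in Figure 1
```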

The following GitHub link:

hosts the files needed to create the desktop icon, which calls the application, both of which are shown in Figure 1. It also contains a README file, which details how to install the files on a Windows PC, and a User Manual.

Figure 1 The desktop icon that calls the application, which is also shown. The E3 series has been selected, and a computed value of 56 has been entered. The closest E3 series value of 47 is apparent, along with the calculated error of the selected candidate.

Selecting a different series will automatically calculate and present the nearest value and its error for that series. Pressing the <Enter> key in the Enter Value box will clear the entry so that a new one can be checked. The Enter Value numeric sequence may be followed by an exponent (e6, E-2, etc.). A single alpha character (for instance, M, k, n, or others) also may be appended. Neither is necessary, but the format of the Nearest E value will always follow that of the Enter value.

Although not needed often, this is convenient to have around with the touch of a Desktop icon. Move it elsewhere if the Desktop is not your preferred location.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post A convenient desktop-accessible calculator of E-series component values appeared first on EDN.

Netgear’s LM1200: A 4G LTE modem, modestly funded

Mon, 04/06/2026 - 15:00

It may not support the latest-and-greatest cellular data tech. But in a pinch, it’ll still cost-effectively do the Internet-access trick.

In one of last month’s posts, covering cellular hotspots for maintaining broadband connectivity when premises power goes down, as well as when you’re on the road, I wrote:

Last January I’d purchased on sale from Amazon two NETGEAR LM1200 cellular broadband modems, one for teardown-to-come and the other for precisely the scenario—premises power-loss connectivity backup—that I experienced in mid-December. They aren’t as-is usable [unless you only need to have one wired-connected device online, that is], requiring tether to a router. But I have plenty of those in inventory. And had we stuck around the home more than one night I probably would have pressed the modem-plus-router combo into service, fueled by a portable power unit. But another limitation, bandwidth, was the same one that already soured me on the Surface Pro X’s integrated modem (along with the ones in my Intel-based Surface Pros, for that matter). The LM1200 “only” supports 4G LTE, which is likely why I bought them (on closeout, I suspect) for only $19.99 each a year-plus back, versus the original $49.99 MSRP.

Today, I’ll be actualizing my year-plus back teardown aspiration, as usual beginning with some outer box shots…as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Flip up the top flap:

and the first things you’ll see are our patient, underneath two slips of paper (also found here in PDF form, along with a fuller user manual). Below them:

are two cables, one for power and the other for data connectivity, along with a power adapter:

Last things first; the AC-to-DC adapter, with a USB-A output (with only notable sides shown):

and the two cables:

Now for our patient:

TS-9 connectors (plus other interesting things, such as the nano SIM slot) ‘round back, the same as with the high-end NETGEAR MR6110 cellular hotspot I showcased a month back:

and as before intended for tethering the cellular modem to an optional external antenna:

Onward:

Note the passive ventilation abundance underneath; a curious choice, given that heat rises, not sinks (and don’t get me started on the confusion inherent to the term “heatsink”), but better than nothing, I guess:

A closeup of the label reveals, among other things, the all-important FCC ID (PY320300503):

60 FCC certification record entry results. That’s a new record, at least for me!

Rubberized feet tend to hide (albeit not always, mind you) screw heads, providing pathways inside:

The typical presence pans out once again in this instance:

And we’re in. The top and bottom chassis pieces both detach:

leaving behind the PCB, along with chassis remnants around the periphery:

which also separate straightaway, this time with no additional screws to mess with:

Let’s start with the top of the PCB:

Dominating the landscape is a Quectel EC25-AF PCIe LTE Cat 4 module, rotated 180° in this photo so you can discern the topside printing right-side-up:

Below it are the four status LEDs whose illumination ends up shining out the holes at the top of the device. And above it are two Youth Electronics GS12401C LAN transformers, one each for the cellular modem’s LAN and WAN ports.

Next, those two long-and-skinny shiny metal pieces, one on each side of the PCB:

They’re, you’ve probably already guessed, the 4G cellular antennae.

Now for the other (bottom) side of the PCB:

Faraday Cages. Regular readers already know what comes next:

Nothing terribly exciting here, that is unless you’re an RF engineer:

How about the larger one?

Another 4R7 (4.7 microhenry) inductor. Plus, a Qualcomm Atheros QCA8334 four-port Gbit Ethernet switch IC, only two ports’ worth of which are presumably in use (for the aforementioned LAN and WAN backside ports). And scattered about the remainder of this PCB side’s real estate are clusters of test points, passives, discretes, and other diminutive doodads.

And there we are! After this writeup is published and I answer any lingering reader questions, I’ll pop the Faraday Cage tops back on, reassemble the surrounding chassis and see if it still works. And speaking of questions, please do sound off with your thoughts in the comments!

Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.

Related Content

The post Netgear’s LM1200: A 4G LTE modem, modestly funded appeared first on EDN.

Mastering differential probes: Fundamentals and advanced insights

Mon, 04/06/2026 - 04:29

Differential oscilloscope probes are indispensable tools for engineers who need to measure signals accurately in complex environments. Whether you are troubleshooting everyday low-voltage circuits or tackling the challenges of high-voltage power electronics, the right probe ensures safety, precision, and reliable data capture. Yet, with so many options available—each designed for specific ranges and applications—understanding how to select and use differential probes effectively can make the difference between clear insights and misleading results.

This article explores the essentials of differential probes, highlighting their role in both common and high-voltage measurements, and offering practical guidance for engineers who want to master their use.

Understanding differential probes

At their core, differential probes are designed to measure the voltage difference between two points that are not referenced to ground. Unlike single-ended probes, which assume one side of the signal is tied to earth ground, differential probes float with the circuit under test, making them ideal for analyzing signals in isolated systems, switching power supplies, motor drives, and other environments where ground-referenced measurements can be misleading—or even unsafe.

By rejecting common-mode noise and providing accurate readings across a wide voltage range, differential probes give engineers the confidence to capture clean waveforms in both everyday low-voltage circuits and demanding high-voltage applications.

The poor man’s alternative: A-B math mode

Some engineers turn to the oscilloscope’s A–B math mode as a low-cost substitute for a true differential probe. By connecting two standard single-ended probes to separate channels and subtracting one from the other, the scope can display the voltage difference between two points. While this trick works for basic low-voltage measurements, it suffers from a critical drawback: poor common-mode rejection ratio (CMRR).

Furthermore, this method creates a dangerous grounding hazard; because standard probes remain tied to the scope’s Earth-grounded chassis, attempting this on floating high-voltage circuits can cause a catastrophic short circuit that a true, isolated differential probe would easily prevent.

Dedicated differential probes are carefully designed with matched inputs, shielding, and circuitry that reject common-mode noise and interference. In contrast, the A–B math method relies on two independent channels that rarely match perfectly in gain, phase, or frequency response.

As a result, common-mode signals leak into the measurement, producing distorted or noisy waveforms. This makes A–B math unsuitable for precision work and unsafe for high-voltage applications, where accurate rejection of common-mode voltage is essential (while floating-input oscilloscopes are an effective alternative, we will not be covering them in this post).

Figure 1 The A–B math mode on an oscilloscope uses two channels to approximate a differential measurement. Source: Author
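The CMRR penalty described above is easy to quantify. If the two channels’ gains differ by a fraction dG, common-mode voltage leaks through in proportion to dG, so the effective rejection is roughly 20·log10(1/|dG|)—a simplified model that ignores phase and frequency-response mismatch:

```python
import math

def effective_cmrr_db(gain_mismatch):
    """Effective common-mode rejection of A-B subtraction when the two
    channel gains differ by the given fraction (e.g., 0.01 = 1%)."""
    return 20 * math.log10(1 / abs(gain_mismatch))

# A 1% gain mismatch between two nominally identical 10x passive probes:
print(f"{effective_cmrr_db(0.01):.0f} dB")  # only 40 dB
```

Dedicated differential probes typically spec far better rejection than this at low frequency, which is exactly the matching advantage discussed here.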

Isolation transformers: A stopgap, not a solution

One of the most dangerous pitfalls in high-voltage oscilloscope measurements is the ground clip trap. Even if the circuit is floated, the probe’s ground clip remains internally tied to earth ground. Accidentally clipping to a high-voltage node can instantly short the circuit, destroy equipment, and pose a severe shock hazard.

A common workaround is to power the device under test (DUT) through an isolation transformer, breaking the direct connection to earth ground. This allows probes to be connected more flexibly and can make certain measurements possible when a proper probe is unavailable.

Floating a circuit also introduces new risks: exposed nodes may sit at dangerous potentials relative to ground, and the oscilloscope itself can be compromised if isolation fails. For these reasons, the 1:1 isolation transformer approach should be regarded only as a stopgap “poor man’s” option. When working with high-voltage systems, the safe and reliable solution is always a properly rated probe designed for the task.

Figure 2 A 1:1 isolation transformer lets probes connect without a ground reference, but the ground clip stays internally tied to earth and poses risk. Source: Author

It’s worth noting that isolating the DUT—rather than the oscilloscope—is a standard power electronics practice that significantly assists a differential probe by floating the entire circuit’s reference. This setup effectively eliminates ground loops that otherwise inject EMI into your measurements via the probe’s cable shielding.

More importantly, it reduces common-mode stress on the probe’s internal amplifiers; since the DUT is no longer hard-tied to Earth ground, the probe does not have to fight a massive voltage potential relative to the scope’s chassis. This results in a much cleaner signal with higher fidelity, particularly when probing high-side MOSFETs or bridge rectifiers where the reference point is constantly swinging.

The right take: Differential scope probes

So, differential probes are specialized tools for measuring the voltage difference between two points in a circuit. They feature two inputs that can be connected anywhere without requiring a ground reference. An internal differential amplifier produces an output voltage proportional to the difference between the chosen points, typically scaled by a user-defined attenuation factor.

Figure 3 An active differential probe extends the measurement capabilities of a standard oscilloscope. Source: Pico Technology

Recall that a major advantage of differential probes is their ability to reject common-mode signals—voltages present simultaneously at both inputs. This makes them highly effective for capturing low-level signals in noisy environments. They can also be used for single-ended measurements by grounding one of the leads.

As an aside, it’s worth mentioning that a differential probe is not the same as a differential preamplifier like the Tektronix ADA400A. Probes are designed for general oscilloscope measurements across a wide bandwidth, while preamplifiers are specialized for ultra-low-level, low-frequency signals. The ADA400A, for example, offers selectable gain and filtering, making it ideal for microvolt-level work in noisy environments.

Although the ADA400A is still supported and available through some distributors, it’s considered more of a legacy accessory than a mainstream option. In practice, that means it remains useful for precision applications but is not promoted for new designs the way modern differential probes are. In short, use a probe for broad, everyday measurements, and reach for a preamp when chasing precision at the very bottom of the signal scale.

Getting back on track, high-voltage differential probes are among the most widely used types in modern test and measurement setups. And, galvanically isolated HV differential probes go further by providing complete electrical separation between the high-voltage circuit under test and the oscilloscope, protecting both the operator and sensitive equipment.

This isolation—often implemented through optical coupling techniques—prevents ground loops, reduces noise interference, and ensures accurate measurements even in environments with large voltage swings. Their combination of safety, fidelity, and versatility makes them indispensable tools in high-voltage and high-power applications.

As a summary (kept simple for clarity), all differential probes rely on active circuitry, since measuring the voltage difference between two points requires rejecting common-mode signals. Everyday differential active probes are used for precision work in high-speed digital and low-level analog circuits.

For power electronics, high-voltage differential active probes are the standard, enabling safe measurement of floating signals and large common-mode voltages. And when maximum safety and fidelity are needed, galvanically isolated differential probes—often using optical isolation—provide complete separation between the circuit under test and the oscilloscope, preventing ground loops and protecting both operator and equipment.

Practical session: Use cases and key specifications

This session is on the practical side, focusing on when differential probes are actually needed and the key specifications that matter most when choosing one.

Needless to say, differential probes are required whenever signals are not referenced to ground or involve large common-mode voltages. A classic case is measuring the gate-to-source voltage on a high-side MOSFET in a switching converter. Because the source terminal is floating and rides on the switching node, a standard single-ended probe tied to ground would be unsafe and misleading.

In this situation, a high-voltage differential active probe captures the true waveform safely, and if voltages or noise are extreme, an optically isolated probe adds full separation between circuit and oscilloscope for maximum protection and accuracy.

Figure 4 A practical application example using a differential probe to capture floating gate-to-source voltage signals in a power electronics circuit. Source: Author

Below are the key specifications engineers should keep in mind:

  • Common mode rejection ratio (CMRR): Measures how well the probe ignores “noise” or voltages that appear equally on both leads. Note that CMRR is frequency-dependent and typically drops as the signal frequency increases. A higher CMRR ensures cleaner measurements in high-interference environments.
  • Voltage rating: Defined by both differential voltage (between leads) and common-mode voltage (leads to ground), often categorized by CAT safety ratings such as CAT II and CAT III. These ratings ensure the probe can safely handle both the signal’s magnitude and any potential transients in your application.
  • Attenuation ratio: Most differential probes provide fixed or switchable ratios. This setting defines how much the input signal is scaled down before reaching the oscilloscope, balancing high-voltage safety with signal fidelity.
  • Bandwidth: Determines how faithfully fast signals are captured. Because square waves are composed of high-frequency harmonics, a probe’s bandwidth should ideally be 3 to 5 times higher than the signal’s fundamental frequency to avoid “rounding off” sharp transitions.
  • Input impedance: High resistance minimizes DC loading on the circuit. However, be aware that effective impedance drops significantly at high frequencies due to the effects of internal capacitance.
  • Input capacitance: This is the primary factor that “slows down” fast transitions or causes circuit loading at high speeds. Lower capacitance is essential for maintaining signal integrity and preventing the probe from changing the behavior of the circuit under test.
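To make the CMRR figure concrete, here is a short sketch of how a finite CMRR turns a large common-mode swing into an apparent differential error; the voltage and dB values are illustrative, not from any probe’s datasheet:

```python
def common_mode_error(v_cm: float, cmrr_db: float) -> float:
    """Apparent differential-input error produced by a common-mode
    voltage v_cm seen through a probe with the given CMRR (in dB)."""
    return v_cm / (10.0 ** (cmrr_db / 20.0))

# A 400-V common-mode swing through 80 dB of CMRR shows up as a 40-mV
# artifact on the reading; at a higher frequency, where CMRR drops to,
# say, 50 dB, the same swing produces roughly 1.26 V of error.
err_lf = common_mode_error(400.0, 80.0)   # 0.04 V
err_hf = common_mode_error(400.0, 50.0)   # ~1.26 V
```

This is why the frequency dependence of CMRR matters as much as the headline number: the same probe can be fine at line frequency and misleading at the switching frequency.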

Clearing the mist on differential probes

As usual, this post leaves some mist behind, but hopefully it clears enough to reveal the essentials. Differential probes are not exotic extras—they are the right tool whenever signals float, swing at high voltages, or demand precision beyond what a single-ended probe can safely deliver.

From active types for clean digital and analog work, to high-voltage versions for power electronics, and galvanically isolated probes for maximum safety, the choice comes down to matching probe and specs to the measurement challenge. And those specs—CMRR, bandwidth, risetime, voltage rating, attenuation ratio, input impedance, capacitance—are not just numbers; they decide whether your waveform is faithfully captured or dangerously distorted.

So next time you reach for a probe, pause to check your choice and its specs—the right differential probe is not optional, it’s essential for accuracy, safety, and confidence in your measurements.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Mastering differential probes: Fundamentals and advanced insights appeared first on EDN.

How to design a digital-controlled PFC, Part 3

Fri, 04/03/2026 - 15:00

Editor’s note: This is a multi-part series on how to design a digital-controlled PFC: 

Total harmonic distortion (THD) and power factor are two major criteria used to evaluate power factor correction (PFC) performance. Meeting strict THD and power factor requirements is always a challenge for PFC designs. In this third installment of the series, I will introduce a set of digital methods to reduce THD and improve the power factor.

THD definition

THD is the total harmonic distortion present in a signal, defined as the ratio of the root-mean-square (RMS) amplitude of the total higher harmonic frequencies to the RMS amplitude of the fundamental frequency. Equation 1 expresses THD:

THD = √(V2² + V3² + V4² + …) / V1     (Equation 1)

where Vn is the RMS value of the nth harmonic, and V1 is the RMS value of the fundamental component.
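Equation 1 can be checked numerically with a short sketch; the harmonic amplitudes below are made-up illustration values:

```python
import math

def thd(v1_rms: float, harmonic_rms: list[float]) -> float:
    """Equation 1: RMS of all higher harmonics divided by the RMS of
    the fundamental."""
    return math.sqrt(sum(v * v for v in harmonic_rms)) / v1_rms

# A 10-A RMS fundamental with 0.3 A of 3rd and 0.4 A of 5th harmonic:
# sqrt(0.3^2 + 0.4^2) / 10 = 0.5 / 10 = 5% THD.
ratio = thd(10.0, [0.3, 0.4])
```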

THD requirements have become stricter, especially in server applications, but meeting low THD requirements is more difficult than ever. The following methods can help reduce THD.

Make sure that the sensed signals are clean

To reduce THD, the first thing is to make sure that all of the sensed signals are clean. Because the sensed AC input voltage modulates the current reference, any spikes on the sensed AC signal will cause current reference distortion and affect THD. 

One common practice is to put a decoupling capacitor close to the analog-to-digital converter (ADC) pin of the controller and set the resistor-capacitor filter cutoff frequency about 10 times higher than the frequency you are interested in. If the sensed AC voltage is still noisy, you can use a software phase-locked loop (SPLL) [1] to generate an internal sine-wave signal in phase with the AC voltage, and then use that generated sine-wave signal to modulate the current reference. Since the SPLL-generated sine wave is clean, even if there is some noise on the sensed AC voltage, the current loop reference will still be clean.

For VOUT sensing, you can use a digital infinite impulse response filter, as shown in Equation 2, to process the sensed VOUT to further reduce noise; because the PFC voltage loop is slow, the extra delay caused by this digital filter is acceptable.

where k<1.
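Since the equation itself is not reproduced here, the sketch below assumes the common single-pole form y[n] = k·y[n−1] + (1−k)·x[n] with k < 1; the actual coefficient arrangement in Equation 2 may differ:

```python
def iir_step(y_prev: float, x_new: float, k: float) -> float:
    """One update of a first-order IIR low-pass filter (assumed form):
    y[n] = k*y[n-1] + (1-k)*x[n], with k < 1. Larger k filters harder
    but adds more delay, which the slow PFC voltage loop can tolerate."""
    return k * y_prev + (1.0 - k) * x_new

# Smooth a noisy stream of VOUT samples around a 400-V bus.
y = 400.0
for x in (401.5, 398.7, 400.9, 399.2):
    y = iir_step(y, x, k=0.9)
```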

Oversampling

The PFC inductor current has switching ripples. The current-sensing circuit may not provide sufficient attenuation to this current ripple. If you sample this signal only once in each switching period, there is no perfect, fixed location where the signal represents the average current all of the time. To get a more accurate feedback signal, consider using an oversampling mechanism.

Figure 1 shows an example that evenly samples the current feedback signal eight times in every switching cycle, averages the results, and sends them to the control loop. This oversampling effectively averages the current ripple out such that the measured current signal gets closer to the average current value. Also, the controller becomes less sensitive to noise.

Figure 1 Oversampling eight times in every switching cycle averages out the current ripple so that the measured current signal gets closer to the average current value. (Source: Texas Instruments)
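A minimal sketch of the averaging step; the ripple values are illustrative:

```python
def average_feedback(samples: list[float]) -> float:
    """Average N evenly spaced current samples from one switching
    cycle and use the result as the current-loop feedback value."""
    return sum(samples) / len(samples)

# Eight samples across a triangular ripple around a 10-A average: any
# single sample can be off by up to the ripple amplitude, but the
# cycle's average recovers the 10-A value.
cycle = [10.0 + r for r in (-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 0.0)]
i_fb = average_feedback(cycle)
```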

Reduce the current spikes at AC zero crossing

Current spikes at AC zero crossing are an inherent issue for totem-pole bridgeless PFC. These spikes can be so big that it becomes impossible to pass THD specifications. Reference [2] analyzes the root cause of these spikes and provides a PWM soft-start algorithm to effectively reduce them, as shown in Figure 2.

Figure 2 PWM soft start after AC zero crossing to prevent current spikes common to totem-pole bridgeless PFCs. (Source: Texas Instruments)

In this algorithm, when VAC changes from a negative to a positive cycle after AC zero crossing, boost switch Q2 turns on first with a very small pulse width, then gradually increases to the duty cycle (D) generated by the control loop. A soft start on Q2 gradually discharges the switch-node drain-to-source voltage (VDS) to zero. Once Q2 soft start is complete, synchronous transistor Q1 starts to turn on. It begins with a tiny pulse width and gradually increases until the pulse width reaches 1–D. When Q2 soft start is complete and Q1 soft start begins, the low-frequency switch Q4 turns on.

The transition from the AC positive cycle to the negative cycle is similar. Turning off all of the switches at the end of each half AC cycle leaves a small dead zone at AC zero crossing. Figure 3 shows the test result.

Figure 3 Current waveforms without and with a PWM soft start: the traditional control method (a); PWM soft start (b). (Source: Texas Instruments)
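The ramp portion of the sequence above might be sketched as follows; the cycle counts and the linear shape are placeholders, not the algorithm from reference [2]:

```python
def soft_start_duty(d_target: float, cycle: int, ramp_cycles: int) -> float:
    """Duty applied to a switch during PWM soft start: starts with a
    small pulse width and ramps linearly up to the control loop's duty
    over ramp_cycles switching cycles."""
    if cycle >= ramp_cycles:
        return d_target
    return d_target * (cycle + 1) / ramp_cycles

# Q2 ramps up to the loop duty D first; once complete, Q1 would run the
# same kind of ramp toward 1 - D, and then Q4 turns on.
q2_ramp = [soft_start_duty(0.8, c, 4) for c in range(6)]
```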

Reduce voltage-loop effects

The PFC output voltage has double-line frequency ripples. Although the voltage loop compensator can reduce these ripples, it cannot totally remove them; there are still some ripples coupled to the current reference that then affect THD.

One way to reduce the effect of these ripples is to add a digital notch (band-stop) filter between the VOUT sensed signal and the voltage loop. This notch filter can effectively attenuate the double-line frequency ripple while still passing all other frequency signals, including the sudden VOUT change caused by the transient load. The load transient response will not be affected.
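As an illustration of such a band-stop stage, here is a standard digital biquad notch in the widely used RBJ “cookbook” form, centered on the 120-Hz double-line ripple of a 60-Hz mains; the filter actually used in a given design may differ:

```python
import cmath
import math

def notch_coeffs(f0: float, fs: float, q: float):
    """Biquad band-stop (notch) coefficients, RBJ cookbook form,
    centered on f0 at sampling rate fs, normalized so a[0] = 1."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [1.0 / a0, -2.0 * math.cos(w0) / a0, 1.0 / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def magnitude(b, a, f: float, fs: float) -> float:
    """|H| of the biquad at frequency f, evaluated on the unit circle."""
    z1 = cmath.exp(-2j * math.pi * f / fs)   # z^-1
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = a[0] + a[1] * z1 + a[2] * z1 * z1
    return abs(num / den)

# Notch the 120-Hz double-line ripple at a 10-kHz loop rate: deep
# attenuation at 120 Hz, near-unity gain at DC and at the higher
# frequencies carrying load-transient information.
b, a = notch_coeffs(120.0, 10_000.0, q=2.0)
```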

Another approach is to use VOUT sampled at the AC zero crossing, or VOUT_ZC(t), as the voltage feedback signal; see Figure 4. Since VOUT_ZC(t) equals the average value of VOUT, and since it is a “constant” in steady state, using it as the feedback signal can eliminate the double-line frequency ripple.

Figure 4 Sampling VOUT at the AC zero-crossing instant eliminates the double-line frequency ripple from the feedback signal. (Source: Texas Instruments)

To handle the load transient, use the voltage loop control law shown in Figure 5.

Figure 5 Using VOUT_ZC(t) as a feedback signal in the steady state. (Source: Texas Instruments)

If the instantaneous error is small, use the value at the AC zero-crossing instant, which is VOUT_ZC, and a small Kp, Ki for the voltage loop compensator Gv. When a load transient occurs, causing an instantaneous VOUT error greater than the threshold, use the instantaneous VOUT value and a large Kp, Ki for Gv to rapidly bring VOUT back to its nominal value.
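The mode selection can be sketched as simple control-law logic; the gain numbers and threshold below are placeholders, not tuned values:

```python
def pick_voltage_loop_mode(v_error: float, threshold: float) -> dict:
    """Select the feedback source and PI gains for the voltage loop Gv:
    zero-crossing-sampled VOUT with small gains in steady state,
    instantaneous VOUT with large gains during a load transient."""
    if abs(v_error) > threshold:
        return {"feedback": "instantaneous", "kp": 1.0, "ki": 0.1}
    return {"feedback": "zero_crossing", "kp": 0.05, "ki": 0.005}

steady = pick_voltage_loop_mode(v_error=2.0, threshold=10.0)      # ripple-sized
transient = pick_voltage_loop_mode(v_error=-25.0, threshold=10.0)  # load step
```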

Duty-ratio feedforward control

As the name suggests, duty-ratio feedforward control precalculates a duty ratio, then adds this duty ratio to the feedback controller’s output. For a boost topology operating in continuous conduction mode, Equation 3 gives the duty-ratio feedforward (dFF) as:

dFF = (VOUT − |vAC|) / VOUT     (Equation 3)

Figure 6 depicts the resulting control scheme. After using Equation 3 to calculate dFF, add dFF to the traditional average current-mode control output (dI). Then use the final duty ratio (d = dFF + dI) to generate a PWM waveform to control the PFC.

Figure 6 Average current-mode control with dFF. (Source: Texas Instruments)

Since dFF generates the majority of the duty cycle, the control loop only adjusts the calculated duty slightly. This technique can help improve THD for applications with a limited controller loop bandwidth.
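A sketch of the feedforward calculation, using the standard boost CCM relation d = (VOUT − vIN)/VOUT (assumed here to be what Equation 3 expresses):

```python
def duty_feedforward(v_in: float, v_out: float) -> float:
    """Boost CCM duty ratio precalculated from the sensed voltages:
    dFF = (VOUT - vIN) / VOUT, where vIN is the rectified
    instantaneous input voltage."""
    return (v_out - v_in) / v_out

# At 120 V instantaneous input on a 400-V bus, dFF = 0.7; the current
# loop output dI only supplies a small correction on top of it.
d_ff = duty_feedforward(120.0, 400.0)
d = d_ff + 0.01   # d = dFF + dI, with dI from the feedback controller
```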

Harmonic injection

In cases where a specific order of harmonics is too high, and the methods I’ve described still cannot meet the THD specification, a harmonic injection method [3] may resolve the problem. The basic idea of this method is to generate a sinusoidal signal with the same order of the harmonic that you want to compensate, and inject this signal into the PFC current control loop to compensate for that harmonic.

There are two ways to generate a sinusoidal signal. The first method is to use an SPLL to track the AC voltage and then generate the corresponding high-order harmonics. The second method is to generate a sine-wave table and then read the table element at a different speed to obtain different orders of sine waves [3]. Figure 7 shows a test result on a PFC that initially has high third- and fifth-order harmonics.

Figure 7 Harmonic injection to reduce third- and fifth-order harmonics. (Source: Texas Instruments)
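The sine-table method from [3] can be sketched like this; the table size and scaling are arbitrary choices for the example:

```python
import math

TABLE_N = 256
SINE = [math.sin(2.0 * math.pi * i / TABLE_N) for i in range(TABLE_N)]

def harmonic(index: int, order: int) -> float:
    """Read the fundamental's sine table 'order' entries per step to
    obtain the nth-order harmonic, phase-locked to the fundamental."""
    return SINE[(index * order) % TABLE_N]

# Stepping 3x through the table yields a 3rd harmonic; scale and
# phase-shift it before injecting it into the current loop reference.
third = [harmonic(i, 3) for i in range(TABLE_N)]
```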

Power factor definition

The power factor is the ratio of real power in watts to apparent power, which is the product of the RMS current and RMS voltage in volt-amperes, as shown in Equation 4:

PF = P / (VRMS × IRMS)     (Equation 4)

Ideally, the power factor should be 1; the load will then appear as a resistor to the AC source. In the real world, however, electrical loads not only distort the AC current waveform but also make the AC current either lead or lag the AC voltage, resulting in a poor power factor. For this reason, Equation 5 calculates the power factor by multiplying the distortion power factor by the displacement power factor:

PF = cos φ / √(1 + THD²)     (Equation 5)

where φ is the phase angle between the current and voltage, and THD is the total harmonic distortion of the current.

Equation 5 also shows that to improve the power factor, the first thing to do is to reduce THD. However, low THD does not necessarily mean that the power factor is high. If the PFC AC input current and AC input voltage are not in phase, even if the current is a perfect sine wave (low THD), φ will result in a power factor less than 1.
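A quick numeric check of this decomposition, with illustrative values:

```python
import math

def power_factor(thd: float, phi_deg: float) -> float:
    """Distortion power factor times displacement power factor:
    PF = 1/sqrt(1 + THD^2) * cos(phi)."""
    return math.cos(math.radians(phi_deg)) / math.sqrt(1.0 + thd * thd)

# A perfect sine current (THD = 0) leading the voltage by 8 degrees
# still tops out at PF = cos(8 deg), roughly 0.990.
pf_lead = power_factor(thd=0.0, phi_deg=8.0)
pf_dist = power_factor(thd=0.05, phi_deg=0.0)  # 5% THD alone: ~0.9988
```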

The phase difference between the input current and input voltage is mainly caused by the electromagnetic interference (EMI) filter used in the PFC. Figure 8 shows a typical PFC circuit diagram that consists of three major parts: an EMI filter, a diode bridge rectifier, and a boost converter.

Figure 8 Circuit diagram of a typical PFC comprising an EMI filter, a diode bridge rectifier, and a boost converter. (Source: Texas Instruments)

In Figure 8, C1, C2, C3 and C4 are EMI X-capacitors. Simplifying Figure 8 results in Figure 9, where C is now a combination of C1, C2, C3, and C4.

Figure 9 Simplified EMI filter combining the capacitances shown in Figure 8. (Source: Texas Instruments)

The X-capacitor causes the AC input current to lead the AC voltage, as shown in Figure 10. The PFC inductor current is iL(t), the input voltage is vAC(t), and the X-capacitor reactive current is iC(t). The total PFC input current is iAC(t), which is also the current where the power factor is measured. Although the PFC current control loop forces iL(t) to follow vAC(t), the reactive current iC(t) leads vAC(t) by 90 degrees, which causes iAC(t) to lead vAC(t). The result is a poor power factor.

This effect is amplified at a light load and high line, as iC(t) takes more weight in the total current. As a result, it is difficult for the power factor to meet a rigorous specification.

Figure 10 X-capacitor causes the AC current to lead the AC voltage. (Source: Texas Instruments)

Fortunately, with a digital controller, you can resolve this problem with one of the following methods.

Delay the current reference

Since iC(t) makes the total current lead the input voltage, you can force the inductor current iL(t) to lag vAC(t) by some degree, as shown in Figure 11. The total current will then be in phase with the input voltage, improving the power factor.

Figure 11 Forcing iL(t) to lag vAC(t). (Source: Texas Instruments)

Since the current loop forces the inductor current to follow its reference, to let iL(t) lag vAC(t), the current reference needs to lag vAC(t). To delay the current reference, a circular buffer stores the measured VAC results. Then, instead of using the newest input voltage VAC data, use previously stored VAC data to calculate the current reference for the present moment. The current reference will lag vAC(t); the current loop will then make iL(t) lag vAC(t). This compensates the leading X-capacitor current iC(t) and improves the power factor.

The delay period needs dynamic adjustment based on the input voltage and output load. The lower the input voltage and the heavier the load, the shorter the delay needed. Otherwise iL(t) will be over-delayed, making the power factor worse than if there were no delay at all. To resolve this problem, use a look-up table to precisely and dynamically adjust the delay time based on the operating condition.
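A minimal circular-buffer delay might look like this; the buffer depth and delay value are illustrative, and a real design would read the delay from a look-up table indexed by line voltage and load:

```python
class VacDelayBuffer:
    """Circular buffer that delays the sensed VAC samples feeding the
    current-reference calculation, so the inductor current lags enough
    to cancel the X-capacitor's leading current."""
    def __init__(self, delay_samples: int, depth: int = 64):
        self.buf = [0.0] * depth
        self.idx = 0
        self.delay = delay_samples   # adjusted at runtime per condition

    def push(self, vac_sample: float) -> float:
        """Store the newest sample and return the delayed one."""
        self.buf[self.idx] = vac_sample
        delayed = self.buf[(self.idx - self.delay) % len(self.buf)]
        self.idx = (self.idx + 1) % len(self.buf)
        return delayed

buf = VacDelayBuffer(delay_samples=3)
delayed = [buf.push(v) for v in (1.0, 2.0, 3.0, 4.0, 5.0)]  # lags by 3
```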

Subtract iC(t) from the current reference

Since a poor power factor is caused mainly by the EMI X-capacitor current iC(t), if you calculate iC(t) for a given X-capacitor value and input voltage and then subtract iC(t) from the total ideal input current to form a new current reference for the PFC current loop, you will get a total input current that is in phase with the input voltage and can achieve a good power factor.

To explain in more detail, for a PFC with a unity power factor, iAC(t) is in phase with vAC(t). Equation 6 expresses the input voltage:

vAC(t) = VAC × sin(2πft)     (Equation 6)

where VAC is the AC input peak value, and f is the AC frequency. The ideal input current then needs to be totally in phase with the input voltage, expressed by Equation 7:

iAC(t) = IAC × sin(2πft)     (Equation 7)

where IAC is the input current peak value.

Equation 8 expresses the capacitor current:

iC(t) = C × dvAC(t)/dt = 2πfC × VAC × cos(2πft)     (Equation 8)

Equation 9 comes from Figure 9:

iAC(t) = iL(t) + iC(t)     (Equation 9)

Combining Equation 7, Equation 8, and Equation 9 results in Equation 10:

iL(t) = IAC × sin(2πft) − 2πfC × VAC × cos(2πft)     (Equation 10)

If you use Equation 10 as the current reference for the PFC current loop, you can fully compensate the EMI X-capacitor current iC(t), achieving a unity power factor. In Figure 12, the blue curve is the waveform of the preferred input current, iAC(t), which is in phase with vAC(t). The green curve is the capacitor current, iC(t), which leads vAC(t) by 90 degrees. The red curve is iAC(t) ‒ iC(t). In theory, if the PFC current loop uses this red curve as its reference, you can fully compensate the EMI X-capacitor current and increase the power factor.

Figure 12 New current reference. (Source: Texas Instruments)

Equation 10 requires a cosine waveform, cos(2πft). To get this cosine waveform, use an SPLL to generate an internal sine wave synchronized with the input voltage. For microcontrollers that cannot perform trigonometric calculations, reference [4] describes another way to calculate iC(t).
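Pulling the definitions above together, the compensated reference can be sketched as below; the peak-value conventions and the component values are assumptions for illustration:

```python
import math

def compensated_current_ref(t: float, i_pk: float, v_pk: float,
                            f: float, c_x: float) -> float:
    """Current-loop reference: the ideal in-phase current minus the
    X-capacitor's reactive current iC(t) = Cx * dvAC/dt, i.e.
    i_ref(t) = IAC*sin(2*pi*f*t) - 2*pi*f*Cx*VAC*cos(2*pi*f*t)."""
    w = 2.0 * math.pi * f
    return i_pk * math.sin(w * t) - w * c_x * v_pk * math.cos(w * t)

# 230-VAC line (325-V peak), 50 Hz, 2.2 uF of combined X capacitance:
# at the voltage zero crossing (t = 0) the reference dips slightly
# negative to cancel the capacitor's leading current.
i_ref_0 = compensated_current_ref(0.0, i_pk=10.0, v_pk=325.0,
                                  f=50.0, c_x=2.2e-6)
```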

Reduce THD and improve PF

If you need to reduce THD and improve the power factor, choose one or a combination of the methods discussed here. In the next installment of this series, I will talk about how to improve efficiency, limit inrush current, implement e-metering, and reduce the PFC bulk capacitance with a baby boost converter.

Related Content

References

  1. Bhardwaj, Manish. “Software Phase Locked Loop Design Using C2000™ Microcontrollers for Single Phase Grid Connected Inverter.” Texas Instruments application report, literature No. SPRABT3A, July 2017.
  2. Sun, Bosheng. “How to Reduce Current Spikes at AC Zero Crossing for Totem-Pole PFC.” Texas Instruments Analog Design Journal article, literature No. SLYT650, 4Q 2015.
  3. Sun, Bosheng. “A Harmonic Injection Method to Reduce Harmonics and THD for PFC.” Power Electronics News, Nov. 20, 2023.
  4. Sun, Bosheng. “Increase power factor by digitally compensating for PFC EMI-capacitor reactive current.” Texas Instruments Analog Design Journal article, literature No. SLYT673, 2Q 2016.

The post How to design a digital-controlled PFC, Part 3 appeared first on EDN.

Why ISO/PAS 8800 is the new blueprint for AI safety in all critical industries

Thu, 04/02/2026 - 15:45

The rapid integration of artificial intelligence (AI) and machine learning (ML) into safety-critical systems is one of the most significant engineering challenges of our time. Whether it’s a medical device diagnosing an anomaly, an autonomous robot on a factory floor, or a train’s obstacle detection system, the question is no longer if we will use AI, but how can we guarantee its safe operation?

Enter ISO/PAS 8800, a new specification focused on the safety of AI applications in road vehicles. At first glance, the title implies that it’s solely for the automotive industry. However, for engineers in medical devices, industrial automation, rail, aerospace, and defense, dismissing this document as “just for cars” would be a missed opportunity.

Figure 1 ISO/PAS 8800 provides consensus-based framework for managing the unique risks of AI. Source: Parasoft

While ISO/PAS 8800 is tailored for the automotive V-cycle and references standards like ISO 26262, its core principles are fundamentally architecture- and domain-agnostic. It provides the most comprehensive, consensus-based framework to date for managing the unique risks of AI, such as nondeterministic behavior, data-driven bias, and performance degradation when systems encounter scenarios not represented in training data.

For example, in safety-critical systems, AI models used for perception or decision-making may behave unpredictably when exposed to rare or previously unseen conditions, potentially leading to incorrect or unsafe system responses if not properly validated and constrained. By understanding ISO/PAS 8800, engineers in other sectors can reinterpret its guidance to complement and enhance their existing safety standards, such as IEC 62304 (medical), IEC 61508 (industrial), EN 50716 (rail), and DO-178C (aerospace).

Here’s how the key principles of ISO/PAS 8800 can be adopted as a universal blueprint for AI safety.

The foundational shift: From “failure” to “insufficiency”

Traditional functional safety standards are built on a deterministic model: a component fails, and that failure must be managed. But AI/ML systems don’t “fail” in the traditional sense.

They can operate exactly as designed yet still be unsafe. ISO/PAS 8800 therefore distinguishes between a systematic fault (a bug in the C/C++ code) and a functional insufficiency (an AI model misclassifying a pedestrian because its training data lacked sufficient night-time examples). This distinction is the single most important concept introduced in ISO/PAS 8800.

Figure 2 Here is how an AI model can misclassify a pedestrian because its training data lack sufficient night-time examples. Source: Parasoft

  • For the medical device engineer (IEC 62304): This reframes how to validate diagnostic AI. The software units may be perfectly coded, but the model’s safety must be argued based on the sufficiency of its training data across diverse patient populations, not just its lack of software bugs.
  • For the industrial robot integrator (IEC 61508): A collaborative robot’s safety function isn’t just about the hardware stopping in time. Its AI-based perception system might fail to detect a human in low light due to data insufficiency. ISO/PAS 8800 provides the language to specify and verify the “safety of the intended functionality” for AI, a concept that goes beyond traditional hardware/software failure rates.

AI is a system problem, not a model problem

The specification is adamant that an AI model is not a standalone “item.” It’s a component within a larger system. Clause 6 breaks down an AI system into three parts: pre-processing, the AI model, and post-processing. Safety, it argues, must be designed into the entire pipeline.

  • For the aerospace engineer (DO-178C/DO-254): This aligns perfectly with the systems engineering approach of ARP4754A. AI-based object detection for a taxiing aircraft isn’t just the job of a neural network. It’s the image signal processor (pre-processing) and the voting logic that cross-checks the AI’s output with a LiDAR (post-processing). The “assurance argument” required by Clause 8 of ISO/PAS 8800 forces a look at the entire data and control path, not just the model’s inference accuracy.
  • For the defense contractor (Def Stan 00-055): In a complex battlespace management system, the AI might propose courses of action. ISO/PAS 8800’s logic suggests that safety isn’t just about the AI’s recommendation, but about the “post-processing” layer, the human-machine interface and the rules of engagement that act as a final plausibility check before any action is taken.

The assurance argument: Moving beyond metrics

Clause 8 is the heart of the standard. It states that you cannot prove AI is “safe” simply by saying it is 99.9% accurate. Instead, you must build a structured assurance argument that combines quantitative data with qualitative reasoning.

An assurance argument must state a claim, provide evidence, and explain the reasoning that links them. For AI, the evidence requirement is multi-faceted:

  • Data coverage: Is the dataset representative of the real world? (Clause 11)
  • Robustness testing: How does the model perform under noisy or adversarial conditions? (Clause 12)
  • Architectural mitigations: Are there redundant sensors, model monitors, or out-of-distribution detectors? (Clause 10)
  • For the rail engineer (EN 50716 / CENELEC): Instead of just specifying an SIL rating for an AI-based track intrusion system, you would build an argument. The claim is “the system will detect an obstacle on the tracks.” The evidence includes: (1) traceability of the training data to a specification of the operational environment (for instance, all types of weather, debris, and times of day), (2) results from injection of anomalous sensor data to test robustness, and (3) the existence of a fallback to a traditional radar system if the AI’s confidence drops. This structured approach satisfies the rigorous traceability demands of rail safety.

Data as a safety-critical artifact

Clause 11 is revolutionary for its explicit treatment of data. In traditional software safety, the “code” is the master. In AI, the dataset is part of the specification. The standard mandates a full dataset lifecycle, from requirements definition to verification, validation, and maintenance.

  • For the medical device engineer: This maps directly onto the need for diverse, high-quality clinical data. Clause 11 requires active management of datasets for gaps and biases. If an AI for tumor detection was trained only on specific age demographics, the standard mandates this be treated as a safety gap that must be mitigated, either by expanding the dataset or restricting the device’s intended use (Clause 9).

Confidence in tools and underlying code

Finally, Clause 15 reminds us that all AI systems are built on a software foundation, often C and C++. The most sophisticated AI model is useless if the C++ function that executes its safe-state monitor has a memory leak. The standard requires confidence in the development of the toolchain itself, from training pipelines to compilers.

This is where traditional software testing practices become the bedrock of AI safety. The “guardrails” that catch AI errors, the fallback logic, the monitors, and the plausibility checks must all be verified to the highest integrity levels using methods like static analysis, unit testing, and integration testing.
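As a toy illustration of such a guardrail, the sketch below cross-checks an AI detection against an independent sensor; the names, thresholds, and actions are invented for the example, not taken from the standard:

```python
def plausibility_check(ai_detects: bool, ai_confidence: float,
                       radar_detects: bool) -> str:
    """Post-processing guardrail: cross-check the AI perception output
    against an independent sensor and fall back to a safe action when
    they disagree or when confidence is low."""
    if radar_detects and not ai_detects:
        return "brake"           # trust the independent sensor on conflict
    if ai_detects and ai_confidence < 0.7:
        return "degraded_mode"   # low confidence: slow down, alert operator
    if ai_detects:
        return "brake"
    return "proceed"

actions = [
    plausibility_check(True, 0.95, True),    # confident detection
    plausibility_check(True, 0.40, False),   # uncertain detection
    plausibility_check(False, 0.90, True),   # AI missed it, radar saw it
    plausibility_check(False, 0.90, False),  # clear path
]
```

Precisely because this deterministic logic is the last line of defense, it is the part that must be verified to the highest integrity level with static analysis and unit tests.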

Figure 3 Robust software testing is critical in ISO/PAS 8800 implementation. Source: Parasoft

Just as ISO 26262 relies on robust software engineering, so too does ISO/PAS 8800. The principles of shift-left testing, automated unit testing, and CI/CD integration remain nonnegotiable, regardless of the final application domain.

A universal language for AI risk

ISO/PAS 8800 is more than an automotive standard—it’s a Rosetta Stone for translating the abstract risks of AI into the concrete language of safety engineering. It’s a vocabulary for discussing insufficiencies, a structure for building assurance arguments, and a lifecycle for managing data as a critical component.

For engineers in medical, industrial, rail, and aerospace sectors, the path to certifying AI-enabled systems will not require reinventing the wheel. It will require adopting and adapting the principles of ISO/PAS 8800 to each domain, where they complement existing standards like IEC 62304, IEC 61508, and DO-178C. By doing so, engineers can navigate the complexities of AI with a proven framework, ensuring that as systems become smarter, they remain unshakably safe.

Ricardo Camacho is director of product strategy for embedded and safety critical compliance at Parasoft.


Related Content

The post Why ISO/PAS 8800 is the new blueprint for AI safety in all critical industries appeared first on EDN.
