How system-level validation compresses schedule risk in device design

Flagship consumer electronic device launches are among the most operationally complex events in modern engineering. They require years of coordination across hardware, silicon, RF, software, operations, supply chain, and manufacturing. Yet, despite mature processes and experienced teams, flagship programs remain vulnerable to schedule volatility.
The root cause is rarely inadequate engineering talent. More often, it’s structural. Manufacturing realities are integrated too late into architectural decision-making. System-level validation, when deployed early and continuously, functions not as a downstream quality checkpoint, but as an organizational mechanism for compressing schedule risk before capital and timeline commitments are locked.
Financial exposure at flagship scale
At flagship scale, schedule slip is not simply an engineering inconvenience. It’s a material financial event.
Apple’s fiscal year 2025 results reported approximately $416 billion in annual revenue, with iPhone revenue representing roughly half of total sales. Samsung’s Mobile Experience division reported approximately $26 billion in revenue in a single recent quarter. For programs operating at this scale, a one-month delay during a peak launch cycle can defer revenue comparable to the annual revenue of many mid-sized technology firms.
Even outside tier-one OEMs, launch timing directly impacts channel readiness, carrier alignment, ecosystem momentum, and competitive positioning. In high-volume hardware, schedule is strategy.
The challenge is that many launch delays are not caused by unforeseen global disruptions, but by late-stage design changes triggered during production ramp. Industry analyses consistently show that a significant portion of late engineering change orders originate from integration and manufacturability issues that were technically detectable earlier in the development cycle.
When these issues surface during ramp, optionality has already collapsed. Tooling is frozen, suppliers are capacity-allocated, and marketing calendars are committed. At that stage, validation confirms risk rather than preventing it.
Why component-level validation fails at scale
Traditional validation strategies are optimized for component correctness. Subsystems are tested against modular specifications, and readiness decisions are based on aggregated subsystem pass rates. This approach ensures that parts function independently; however, it does not guarantee that the system functions reliably under real-world, high-volume conditions.
Many failure modes emerge only during full-system interaction. Digital signal interference, RF coexistence conflicts, thermal coupling between tightly integrated subsystems, and parasitic effects often cannot be fully replicated in isolated bench testing.
For example, a high-speed display flex cable may pass standalone signal integrity validation. During system-level engineering verification testing (EVT) under real RF load, that same cable can radiate broadband noise that desensitizes the primary cellular receiver. The result is a coexistence failure that frequently forces late-stage shielding changes or mechanical redesign.
Similarly, assembly processes introduce stress, tolerance stack-up, and handling variability that are absent in early prototypes. Component-level validation ensures parts are defect-free. It does not predict how those parts behave when integrated and manufactured at scale. The consequence is predictable: issues emerge when yield sensitivity tightens during ramp.
A defect observed in 1 out of 100 early validation units translates into 10,000 defective devices at a one-million-unit scale. At millions of units, small deltas compound rapidly.
The design–manufacturing impedance mismatch
A recurring root cause of late-stage validation failures is misalignment between design optimization and manufacturing constraints. Design teams optimize for performance, power efficiency, compact form factor, and cost targets. Manufacturing teams optimize for yield stability, throughput, repeatability, and process capability. Both are correct within their domains.
Failure occurs when manufacturing sensitivity is not structurally integrated into architectural trade-off decisions. In cross-functional reviews, performance metrics are often presented without quantified yield sensitivity analysis. Design freeze decisions may proceed based on functional validation, while manufacturing risk remains probabilistic rather than modeled. Schedule pressure can incentivize accepting integration risk with the assumption that ramp will resolve residual issues.
System-level validation acts as the translation layer between these domains. When embedded early, it exposes divergence between design intent and production feasibility while design changes remain affordable. The cost-of-change curve, widely cited in engineering economics literature, demonstrates that defects discovered during mass production can cost orders of magnitude more to correct than those identified during early design phases. Whether the multiplier is 10x or 100x depends on context, but the direction is consistent: late discovery amplifies cost and schedule exposure.
System-level validation as risk compression
Reframing system-level validation as a schedule-risk compression mechanism changes how engineering organizations deploy it. Risk compression means reducing the variance between projected and actual ramp performance before high-volume commitments are made. It means narrowing the gap between modeled yield and early ramp yield while architectural flexibility still exists.
Consider a ten-million-unit program targeting 97% yield but achieving only 94% during early ramp. That 3% delta produces 300,000 additional defective units. At a $500 bill-of-materials cost, that equates to $150 million in direct exposure, before accounting for logistics, containment actions, rework, warranty impact, and brand degradation.
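The arithmetic behind this exposure estimate is easy to reproduce for any program’s parameters. Below is a minimal sketch of the calculation, using the illustrative volume, yields, and BOM cost from the example above rather than data from any actual program:

```python
# Hypothetical yield-delta exposure calculation (illustrative values only).
def ramp_exposure(units: int, target_yield: float, actual_yield: float,
                  bom_cost: float) -> tuple[int, float]:
    """Return (additional defective units, direct BOM exposure in dollars)."""
    extra_defects = round(units * (target_yield - actual_yield))
    return extra_defects, extra_defects * bom_cost

defects, exposure = ramp_exposure(
    units=10_000_000, target_yield=0.97, actual_yield=0.94, bom_cost=500.0)
print(f"{defects:,} extra defective units -> ${exposure:,.0f} direct exposure")
# Output: 300,000 extra defective units -> $150,000,000 direct exposure
```

The same function makes the sensitivity obvious: at this volume and BOM cost, every tenth of a percent of yield represents $5 million of direct exposure.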
When system-level validation is embedded earlier in the development cycle, integration uncertainty is resolved before tooling freeze and capacity allocation. Manufacturing sensitivity becomes an architectural input, not a downstream constraint. Validation shifts from reactive confirmation to proactive risk reduction.
Governance implications for senior managers
For senior engineering and manufacturing managers, the implication is structural. System-level validation must be positioned upstream of design freeze, not solely before ramp. In practice, this requires:
- Upstream integration: Embedding manufacturing engineering into early architecture discussions.
- Quantified sensitivity: Requiring quantified yield sensitivity data before design freeze.
- Strategic alignment: Aligning validation milestones with major financial commitments.
- Holistic ownership: Elevating system-level risk ownership to program leadership rather than distributing it across siloed subsystem teams.
Organizations that treat system-level validation as a downstream quality function implicitly accept schedule volatility as a cost of doing business. Organizations that embed it as a bridge between design architecture and manufacturing execution create structural advantage. They stabilize flagship launch timelines, reduce ramp inefficiency, and preserve optionality when trade-offs are still affordable.
Ayokunle Oni is a system engineering program manager at Apple, where he helps coordinate the iPhone hardware design and engineering process across cross-functional teams. He specializes in system integration and validation and has led complex engineering programs from concept through production, working closely with global manufacturing and vendor partners.
Magnet-free electric motors: Driving innovation beyond rare earths
Electric motors are everywhere—from the cars we drive to the appliances in our homes—but most rely on rare earth magnets that come with high costs and environmental challenges. A new wave of innovation is changing that story. Magnet-free electric motors are proving that smart engineering can deliver powerful performance without depending on scarce materials.
By removing rare earths from the equation, these designs promise cleaner supply chains, more sustainable production, and fresh opportunities for industries ranging from electric vehicles to renewable energy. It’s a shift that could redefine how we think about powering the future.
Why rare earths matter
Rare earth magnets, especially neodymium and dysprosium, have been the secret ingredient behind the compact, high-torque motors that power everything from electric vehicles to wind turbines. Their ability to deliver strong magnetic fields in small packages has made them indispensable in modern motor design.
But there is a catch: mining and processing rare earths is energy-intensive, environmentally challenging, and geographically concentrated in just a few regions of the world. This creates supply chain risks, price volatility, and sustainability concerns that ripple across industries.
By understanding why rare earths became so central to electric motors, we can better appreciate the significance of moving beyond them—and why magnet-free designs are more than just an engineering curiosity. They represent a strategic shift toward resilience, affordability, and cleaner technology.
How do you pull without a magnet?
So how do you build a motor without magnets? The answer lies in clever engineering that takes advantage of the natural properties of materials and the geometry of the motor itself. Instead of relying on powerful magnets to create motion, magnet-free designs use principles like reluctance torque—where the rotor naturally aligns with the path of least magnetic resistance—or induction, where currents in the rotor generate the force needed to spin.
These approaches may sound technical, but the idea is simple: by rethinking the fundamentals, engineers can coax motors into delivering the same performance we expect, without the rare earth magnets. The result is a motor that can be lighter, more affordable, and easier to manufacture at scale. And because these designs lean on widely available materials, they sidestep the supply chain bottlenecks that have long plagued magnet-based motors.
Why it matters
Magnet-free motors are not just an engineering breakthrough; they are a practical step toward cleaner, more resilient technology. By removing rare earths, manufacturers can cut costs, ease supply chain pressures, and reduce environmental impact.
The benefits ripple across industries: in electric vehicles, they promise more affordable and sustainable mobility; in renewable energy, they support wind turbines and other systems without relying on scarce materials; and in industrial machinery, they offer reliable performance with simpler, more scalable production.
In short, magnet-free motors matter because they combine innovation with real-world impact, helping power a future that is smarter, greener, and less dependent on limited resources.

Figure 1 Today’s magnet-free electric motors deliver high efficiencies for heavy-duty and commercial vehicle applications. Source: Advanced Electric Machines
Working principles of magnet-free motors
For learners, makers, and anyone with a curious engineering mind, the real excitement lies in how magnet-free motors actually work. Instead of relying on rare earth magnets to generate motion, these designs tap into fundamental physics—using reluctance torque, induction, or clever rotor geometry to produce rotation.
Think of it as guiding the motor to “want” to align itself with paths of least resistance, or harnessing currents induced in the rotor to drive movement. The beauty is that these principles are elegant, scalable, and rooted in concepts every engineer encounters early in their studies. By revisiting the basics with fresh eyes, magnet-free motors show how fundamental science can be reimagined to solve modern challenges.
At their core, magnet-free motors rely on clever ways to generate motion without permanent magnets, using principles that every curious engineer can appreciate.
Reluctance motors exploit the tendency of a rotor to align with the path of least magnetic resistance, producing torque through geometry rather than magnets. Induction motors create rotation by inducing currents in the rotor with alternating fields, a design that is simple yet powerful. Synchronous reluctance motors combine aspects of both, offering efficiency and control that rival traditional designs.
Each approach shows how fundamental physics—magnetic fields, current flow, and mechanical alignment—can be harnessed in different ways to achieve the same goal: reliable rotation. For learners, makers, and innovators, these principles are a reminder that rethinking the basics can unlock new possibilities for sustainable engineering.
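To make the reluctance principle concrete, here is the standard lumped-parameter result from introductory machine theory (a textbook relation, not specific to any product mentioned here): for a singly excited machine whose winding inductance varies with rotor angle, the electromagnetic torque is

```latex
% Reluctance torque: the rotor is pulled toward the position of
% maximum inductance, i.e., minimum magnetic reluctance.
T_e(\theta, i) = \frac{1}{2}\, i^{2}\, \frac{dL(\theta)}{d\theta}
```

Torque exists only where the inductance changes with rotor angle, which is precisely what salient rotor geometries and flux barriers in reluctance machines are shaped to maximize.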

Figure 2 A synchronous reluctance motor demonstrates magnet‑free operation with smooth torque characteristics. Source: ABB
It’s important to note that not all reluctance motors are the same. A synchronous reluctance motor (SynRM) runs in step with the supply frequency, using flux barriers in the rotor to align with the path of least magnetic resistance, delivering smooth torque and efficiency. A switched reluctance motor (SRM), by contrast, relies on sequentially energizing stator phases to pull a simple steel rotor around; it’s rugged and powerful but tends to be noisier with more torque ripple.
Sitting between these designs is the permanent magnet assisted SynRM (PMA‑SynRM), which adds small magnets to stabilize the field and boost efficiency while still using far fewer rare earths than traditional permanent magnet motors. Together, these variations show the spectrum of approaches engineers use to balance performance, simplicity, and sustainability.
Unlocking SynRM performance with VFDs
While SynRMs deliver smooth torque and efficiency, they typically need a variable frequency drive (VFD) to start and stay synchronized with the stator’s rotating field. The VFD supplies a controlled frequency and voltage, making these motors flexible but dependent on modern power electronics.
By contrast, older induction motors could start “across the line”—plugged directly into the grid—though at the cost of high inrush currents and less precise control. This reliance on VFDs underscores how magnet-free motor innovation is inseparable from advances in drive technology, reminding designers that motor and electronics progress go hand in hand.
As a worthy side note, the VFD is the electronic brain that makes modern motors flexible. By adjusting frequency and voltage, it lets a motor start gently, avoid the punishing inrush currents of direct grid connection, and run at variable speeds with precision. For SynRMs, the VFD is essential: it keeps the rotor locked in sync with the stator’s rotating field.

Figure 3 A compact VFD module suitable for driving 3-phase SynRM motors supports efficient control in both industrial and household applications. Source: Mean Well
From a design standpoint, the dependence on VFDs is both enabling and constraining. On the enabling side, drives unlock efficiency gains, smoother torque, and precise speed control that make SynRMs competitive with permanent-magnet machines.
On the constraining side, they add cost, require integration expertise, and shift part of the reliability burden from the motor to the electronics. For engineers, it means evaluating magnet-free motors is not just about rotor geometry; it’s about the total system, where sustainability benefits must be balanced against drive complexity and lifecycle economics.
Note that modern control strategies such as field-oriented control (FOC) and sensorless vector control extend the capabilities of these VFDs. FOC regulates stator currents to deliver precise torque and flux, while sensorless vector methods estimate rotor position without mechanical sensors, reducing cost and improving reliability. Together, they allow SynRMs—and other magnet-free designs—to match the responsiveness and efficiency of permanent-magnet machines.
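To see what “regulating stator currents” looks like in practice, consider the two reference-frame transforms at the heart of FOC. The sketch below is illustrative textbook math, not code from any vendor library mentioned here: the Clarke transform collapses three phase currents into a stationary two-axis frame, and the Park transform rotates that frame with the rotor so the controller sees two near-constant values to regulate.

```python
import math

def clarke(ia: float, ib: float, ic: float) -> tuple[float, float]:
    """Clarke transform: three phase currents -> stationary alpha/beta
    frame (amplitude-invariant form)."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (math.sqrt(3.0) / 2.0) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha: float, i_beta: float, theta: float) -> tuple[float, float]:
    """Park transform: rotate alpha/beta into the rotor (d/q) frame,
    where d-axis current sets flux and q-axis current sets torque."""
    i_d = i_alpha * math.cos(theta) + i_beta * math.sin(theta)
    i_q = -i_alpha * math.sin(theta) + i_beta * math.cos(theta)
    return i_d, i_q

# Balanced sinusoidal phase currents at rotor angle theta map to constant
# d/q values, which is what makes simple PI current control tractable.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2 * math.pi / 3)
ic = math.cos(theta + 2 * math.pi / 3)
print(park(*clarke(ia, ib, ic), theta))  # approximately (1.0, 0.0)
```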
Quick FOC take: Field-oriented control does not have to be daunting. For makers eager to experiment, compact FOC shields/modules provide a straightforward, low-power entry point. The Arduino SimpleFOC Shield is a practical example, lowering barriers and making hands-on exploration accessible.

Figure 4 SimpleFOC Shield empowers accessible FOC experimentation for Arduino users. Source: Author
Next, getting into design significance, the combination of magnet-free motor design, advanced VFDs, and intelligent control strategies has broad implications. Engineers gain access to motors that are lighter, more affordable, and easier to manufacture at scale, while sidestepping rare-earth supply chain constraints.
In the long run, magnet-free motors not only reduce dependence on scarce materials but also align with global sustainability goals, positioning them as a cornerstone of next-generation electrification across industries spanning from manufacturing to consumer appliances.
Closing thoughts
Magnet-free motors are steadily moving from concept to reality, driven by both maker ingenuity and industry ambition. With BMW and Mahle advancing externally excited synchronous motors to reduce rare-earth dependence, and Tesla having already demonstrated the scalability of induction motors, the message is clear: sustainable propulsion can deliver performance without compromise.
For makers and engineers alike, this is an invitation to experiment boldly and rethink motor design fundamentals, because the next leap in innovation may emerge as much from a personal workbench as from an automotive R&D lab.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Power electronics evolve to maximize efficiency

Following the introduction of Industry 4.0, power electronics are becoming more significant in both digital and industrial infrastructures. Factories, energy systems, and data centers are getting smarter and more connected. This requires efficient power solutions that offer high power density and can scale with them.
Semiconductors are expected to deliver performance beyond the limits of conventional silicon-based power devices. Wide-bandgap (WBG) materials such as silicon carbide (SiC) and gallium nitride (GaN), as well as novel approaches to designing, packaging, and controlling power devices, are helping achieve the main goals of Industry 4.0: efficiency, flexibility, scalability, and intelligence.
800-VDC power architecture
One of the most significant changes introduced in the power system is the move of data centers to 800-VDC distribution, as detailed in an Nvidia white paper. Traditional systems that use AC and low-voltage DC can’t keep up with the speed and growth needs of AI-based workloads. High-performance computing clusters, especially those that support generative AI and machine learning, demand more power and should use it as efficiently as possible.
By raising the distribution voltage to 800 VDC, operators can reduce the current for a given power level. This approach offers the benefits of reduced I²R losses and the ability to use thinner wires. Overall, efficiency can thus be increased, and more power can be integrated in the same area or volume. The design also becomes less complicated because there are fewer steps in the conversion process.
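The benefit of higher distribution voltage follows directly from P = V·I and P_loss = I²R. The sketch below compares a legacy 54-VDC bus with an 800-VDC bus delivering the same power; the 1-MW load and 1-mΩ path resistance are round illustrative assumptions, not figures from the Nvidia white paper:

```python
def bus_comparison(power_w: float, resistance_ohm: float, *voltages: float):
    """Print current and I^2*R conduction loss for each bus voltage."""
    for v in voltages:
        i = power_w / v                # current for the same delivered power
        loss = i * i * resistance_ohm  # conduction loss in the distribution path
        print(f"{v:6.0f} V: {i:9.1f} A, {loss / 1000:8.2f} kW lost")

# Hypothetical 1-MW load over a 1-milliohm distribution path.
bus_comparison(1_000_000, 0.001, 54, 800)
#   54 V:  18518.5 A,  342.94 kW lost
#  800 V:   1250.0 A,    1.56 kW lost
```

Cutting the current by roughly 15× cuts conduction loss by roughly 220× (the square of the voltage ratio), which is why the same copper can carry far more power at 800 VDC.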
This new architecture directly affects semiconductor requirements. Power devices need to perform well at higher voltages with minimum loss and support fast switching. Chipmakers and manufacturers are developing power solutions to support Nvidia’s 800-VDC power architecture reference design for next-generation AI factories to improve efficiency and reduce power losses.
To support gigawatt-scale AI factories based on an 800-VDC power architecture, Flex, for example, introduced a new reference design (Figure 1) that integrates power, liquid cooling, and compute capabilities into a modular assembly. This prefabricated solution streamlines the implementation of 800-VDC architectures and, according to the company, enables 30% faster deployment than conventional systems.
Figure 1: Flex’s reference design accelerates giga-scale AI factory deployment through a modular and preassembled structure. (Source: Flex)
SiC semiconductor advances
Due to its physical properties, such as high breakdown voltage, low switching losses, and high thermal conductivity, SiC can operate efficiently and provide high reliability in high-voltage and high-power environments.
At the high-voltage end, SiC devices are going into the multi-kilovolt range. More devices are gaining ratings above 1,200 V, making SiC more common in places where silicon-based power devices used to be the norm.
Navitas Semiconductor recently announced the availability of samples for its 2,300-V and 3,300-V high-voltage SiC products, specifically designed to increase efficiency in AI data centers, power grids, and renewable energy infrastructure. The devices, available in discrete, module, and known-good-die formats, are based on the company’s Trench-Assisted Planar architecture.
This semiconductor structure optimizes electric-field management, significantly reducing voltage stress and improving avalanche robustness compared with traditional trench- or planar-MOSFET designs. It also achieves lower RDS(on) at high temperatures and better current spreading.
As power devices improve, their packaging becomes increasingly crucial to the overall performance of the system. Newer packages are designed to reduce parasitic inductance, improve thermal management, and handle larger current densities.
These advancements in packaging technology enable higher performance and efficiency gains. Texas Instruments (TI), for example, recently unveiled two isolated power modules for applications from data centers to electric vehicles that require improvements in power density, efficiency, and safety. The UCC34141-Q1 and UCC33420 isolated power modules leverage TI’s IsoShield technology, which copackages a high-performance planar transformer and an isolated power stage, providing functional, basic, and reinforced isolation capabilities.
TI’s proprietary multichip packaging solution claims up to 3× higher power density than discrete solutions in isolated power designs and shrinks the solution size by as much as 70% by packing more power into smaller spaces. Applications range from factory automation PLC modules and EV and powertrain systems to grid infrastructure and rack and server power.
Wolfspeed Inc. has revealed that its 300-mm SiC platform, leveraging patent-pending innovations, is set to become a key material component for AI and high-performance computing (HPC) packaging by the late 2020s. Figure 2 shows a conceptual demonstration of an interposer substrate built on the company’s 300-mm SiC wafer. According to Wolfspeed, the SiC substrate helps to improve the thermal, mechanical, and electrical performance of next-generation packaging structures required by AI and HPC systems.
Figure 2: Conceptual demonstration of a 100 × 100-mm interposer substrate enabled by Wolfspeed’s 300-mm SiC wafer (Source: Wolfspeed Inc.)
GaN advances
While SiC excels at high voltages, GaN is suited for low- and medium-voltage applications, especially below 650 V. This semiconductor can switch at high frequencies, up to the megahertz range, with very low power loss, making power converters more efficient and smaller and requiring less cooling.
One important trend in GaN’s growth is integration. For example, Schottky diodes could be incorporated into GaN transistors to reduce losses from reverse conduction and make it easier to build power stages. Following this concept, Infineon Technologies AG has introduced the industry’s first industrial-grade GaN power transistors featuring an integrated Schottky diode.
Traditionally, GaN devices in hard-switching applications suffer from higher power losses due to their large body-diode voltage drop. This issue gets worse during the “deadtime” of a power controller. Engineers previously solved this by adding an external Schottky diode or complex controller tuning, both of which increase design time and costs. The new CoolGaN transistor G5 family solves this by integrating the diode directly into the transistor, reducing deadtime losses and boosting overall system efficiency.
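The deadtime penalty is straightforward to estimate: during each deadtime interval, the load current flows through the device’s reverse-conduction path at its forward drop, so the loss scales with switching frequency. The sketch below compares a typical GaN third-quadrant drop against a Schottky-level drop; all the numbers are illustrative assumptions, not CoolGaN datasheet values.

```python
def deadtime_loss(i_load_a: float, v_drop_v: float,
                  t_dead_s: float, f_sw_hz: float) -> float:
    """Average loss from conducting the load current through a forward
    drop during two deadtime intervals per switching period."""
    return i_load_a * v_drop_v * t_dead_s * f_sw_hz * 2

# Hypothetical half-bridge: 10 A load, 50 ns deadtime, 500 kHz switching.
for name, vd in [("GaN reverse conduction (~3 V)", 3.0),
                 ("integrated Schottky (~0.7 V)", 0.7)]:
    print(f"{name}: {deadtime_loss(10, vd, 50e-9, 500e3):.2f} W")
# ~1.50 W vs. ~0.35 W, showing why the diode's lower drop matters
```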
Another important trend is bidirectional switching, where new GaN devices can block current and voltage in both directions. This simplifies converter topologies and requires fewer components. This capability is especially crucial for applications such as energy storage systems, EV chargers, and power-factor-correction circuits.
Renesas Electronics Corp. has introduced the industry’s first bidirectional switch (TP65B110HRU) based on depletion-mode (d-mode) GaN technology (Figure 3). Most current high-power conversion systems rely on unidirectional silicon or SiC switches that block current in only one direction. This limitation forces engineers to design multi-stage circuits or use “back-to-back” switch configurations, which significantly increases component count and reduces overall efficiency.
By integrating bidirectional blocking into one GaN product, this technology enables “single-stage” power conversion. The high switching speed and low stored charge of GaN also enable higher power density and switching frequencies. According to the company, this architecture has demonstrated over 97.5% power efficiency, providing a solution well-suited for AI data centers, on-board EV chargers, and renewable energy applications.
Figure 3: Renesas’s TP65B110HRU high-voltage d-mode bidirectional GaN switches (Source: Renesas Electronics Corp.)
Solid-state transformers
Solid-state transformers (SSTs) are a huge change in how power is transferred and controlled. SSTs are not like ordinary transformers, as they use power electronic converters to modify, split, and control the voltage.
Using this technology, more advanced features become available, including two-way power flow, real-time voltage management, and the capacity to operate with renewable energy sources. Smart grids, microgrids, and Industry 4.0 deployments all need SSTs that can adapt quickly and easily, and WBG semiconductors are particularly significant to that growth.
For example, Infineon and DG Matrix, a company specializing in SSTs, have partnered to integrate SiC semiconductors into the Interport multiport SST platform. This collaboration aims to modernize the connection between the public grid and energy-intensive applications such as AI data centers, EV charging, and industrial microgrids.
Unlike traditional copper- and iron-based transformers, SSTs are semiconductor-based devices. They are smaller and lighter, accelerating deployment and providing higher power density. Adopting Infineon’s SiC technology, these SST systems achieve improved efficiency and reliability.
The technology enables direct power conversion from medium-voltage grid levels to the low-voltage requirements of modern digital infrastructure. DG Matrix plans to scale toward higher-voltage platforms to support the global rollout of high-performance power infrastructure.
The Blink Sync Module 2: Faster response and local storage, too

The technology treadmill never stops, and so it goes with Blink’s second-generation hub device versus its predecessor.
Last month, I compared the conceptually similar (and thankfully, concurrent-use RF-compatible) hub-and-spokes approaches used by Blink and TP-Link for their respective battery-operated device ecosystems. Blink’s particular hub implementation, the first-generation Sync Module still in active use at my residence to this very day, doesn’t support local recording storage; recordings go only to the cloud, a service that fortunately is free for me (albeit in a somewhat limited-duration fashion) as a legacy customer.

(it’s more recently been moved from my office to the laundry room, and as regular readers know from other recent writeups, that Belkin Wemo smart switch above it is also now DOA)
Gratis capacity for non-geriatrics
But when I saw an inexpensive “for parts only” second-generation Sync Module available for sale on eBay, I still jumped on the opportunity, driven by curiosity. Primary differences between the two generations include, for the more recent model:
- A functionally active embedded USB-A connector, for mating with a flash stick or other mass storage device for local recording storage
- More robust, therefore more responsive, integrated processing, and
- Claimed wider-range Wi-Fi coverage
Turns out the device itself works fine, at least to the degree I’ve tested it so far; I was able to factory-reset it, and the Blink app can now “see” it (although I haven’t yet set it up). The only thing missing was the originally included AC/DC adapter with a micro-USB output, but I’ve got plenty of spares of those already, along with the one currently fueling its same-dimensions precursor in case I ever decide to upgrade in situ. So, let’s dive inside and see what we can learn, both in an absolute sense and relative to the first-gen Sync Module that I took apart…yikes….nearly seven years ago. Shall we?
Here’s today’s patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

All-important FCC ID (2AF77-H2121520):

Micro-USB power input:



and now-functional USB-A data port:

I wish everything I tore down was this easy to open up:



At this point…
Let’s pause a moment for some interesting (at least to me) background info. In re-reading my archaic first-gen teardown verbiage, I noted that I’d written (among other things) the following:
Today’s teardown candidate is that very same Sync Module. The one currently in use with my Blink XT cameras matches their black color; this particular one was purchased standalone off Ebay specifically for teardown purposes and is white (and previously used). Color scheme deviations aside, the two models are functionally identical.
I was right with my “identical” claim, at least with respect to the functional angle. And I’d already noted the color deviation. But further (and more recent) research has enlightened me that there were other (non-functional) hardware differences between my in-use device and the one I took apart, too. Blink actually brought to production multiple main variations of the first-generation Blink Sync Module (including a low-volume initial “launch” iteration), along with region-specific tweaks of each variant reflective of differing RF spectrum regulations:
There have been 5 main revisions of sync modules:
Version 0 which was white and has a (non-functional) ethernet port and (non-functional) USB and BLE (non-functional) available. This was the ‘launch’ era.
Version 1a which is white and has a (non-functional) ethernet port and (non-functional) USB.
Version 1b which is white or black and has a (non-functional) USB.
Version 1c which can be white or black and has no ports.
These were all the general ‘XT’ era modules.
Version 2 (the current one) which has a functional USB port.
All the modules are currently compatible with each other, but Modules 0, 1a,b,c have support ‘no longer guaranteed’.
However, this isn’t the end of the story, as the boards inside all come in combinations of EU and US and Intl flavors (due to regulatory / radio differences) too!
I’m guessing that the version I tore down back in mid-2019 was a “Version 1a”. I suppose it also could have been a “Version 0”, although I didn’t come across any Bluetooth Low Energy circuitry inside it. The one still in use here is a “Version 1b”.
Intra-generational variation
When the Redditor who wrote the above shared his thoughts four years ago, there may have been only one (initial) version of the Sync Module 2 we’re looking at today. Fast-forward to the present, however, and there have now been (at least) two. The initial hardware was based on Atheros silicon for both the processor and Wi-Fi module; Blink subsequently switched to NXP-sourced ICs for both the processor and wireless subsystems, the latter this time supporting not only Wi-Fi but also both Bluetooth and BLE.
Onward. Remove two screws:

And the PCB pops right out:

You’ve already gotten a glimpse of the PCB frontside, so in fairness to its backside counterpart, let’s start there with the detailed analysis:
Admittedly, there’s not much of note, unless you’re into passives and embedded traces, that is. At lower left is the reset-and-pairing switch. And to its right is a Winbond W25Q256JV 256 Mbit serial NOR flash memory, presumably for system code storage. For comparison’s sake, here’s the comparatively sparse backside of the first-gen Sync Module PCB:
Now flipping the PCB back over…
I didn’t bother expending much effort at peeling the initially stubborn sticker off the processor; I already know from the NXP logo visibly atop the chip in its upper right corner in conjunction with the helpful Wiki reference page I’d found that it’s the second iteration of the second-gen design, employing NXP’s MCIMX6Z0DVM09AB application processor with the following specs:
- ARM Cortex-A7 running Linux
- 900 MHz
- SRAM: 128 KB
- SPI/UART/I2C
- 96 KB boot ROM, 128 KB internal RAM
- Arm TrustZone support
That other NXP chip I previously noted is the 88W8987-NYE2 wireless “solution”. Below the processor is an ISSI IS43TR16640BL 1 Gbit DDR2 SDRAM. And at the top center of the PCB is one more notable (albeit tiny) IC. Labeled as follows:
455A
CQRX
220
It’s Silicon Labs’ Si4455 sub-GHz wireless transceiver, which (as the name implies) implements the proprietary long-range 900-MHz channel that Blink refers to as the LFR (low-frequency radio) beacon.
In closing, here’s the first-generation Sync Module PCB topside for comparison’s sake:
And with that, I’ll turn it over to you for your thoughts in the comments!
—Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.
Memory solutions for firmware OTA updates

Firmware over-the-air (FOTA) updates are essential for improving system quality, adding new features after initial release, fixing bugs and vulnerabilities, improving system performance, and reducing recall and service costs. As new features are added, the size and complexity of the firmware stored in flash memory typically increases, inevitably leading to increased FOTA completion times.
Most of this time is spent on erasing and reprogramming. Beyond optimizing the user experience through faster updates, the irreversible nature of these operations must also be considered.
Another important consideration is that FOTA operations should ideally be performed in a stable environment similar to flash programming in a production environment. However, field update environments are relatively harsh and unstable. To avoid lengthy, risky, or potentially critical FOTA operations, the time required should be minimized.
But field updates are also vulnerable to various security threats, so thorough preparation is essential. These threats can range from third-party attacks to arbitrary modifications attempted by the product owner. This article outlines key considerations for implementing FOTA.
FOTA basics
FOTA is a technology that remotely updates a device’s firmware via wireless networks such as Wi-Fi, 5G, LTE, or Bluetooth without a physical connection. The flash memory used in this process serves as a core hardware resource, either temporarily storing the update package or ultimately writing the new executable code.
Let’s first examine the classification of FOTA based on flash memory configuration. This classification is determined by whether the flash memory is located internally or externally.
- Dual-bank internal NOR flash memory method
The dual-bank flash memory space within the MCU is allocated as active and passive slots. Each partitioned slot provides a space for executing existing software while simultaneously downloading new updates. This configuration features simple hardware configuration, high security, and fast bank switching through address remapping. However, it requires twice the flash memory density compared to the software size, resulting in increased hardware costs.
- External NOR flash memory method
This method uses external NOR flash memory connected to the application processor (AP)/microcontroller (MCU) via the QSPI (Quad SPI) or OSPI (Octal SPI) interface. Its large flash memory density makes it ideal for large-scale software updates. The update file or binary image is stored in flash memory and then copied to the internal flash memory. This method overcomes internal memory limitations and facilitates the storage of multiple versions of backup binary images, including emergency recovery binary images.
Let’s look at the classification of FOTA based on its implementation mechanism. These mechanisms can be used independently or combined and reconfigured.
- A/B update (seamless update)
The active slot (bank) where the current software/firmware is running and the passive slot (bank) for update downloads are physically separated, and software is installed or disabled across the two banks. This physical separation ensures that even if power is cut or a malfunction occurs during an update, the bank where the current software is running is preserved, preventing bricking.
- Execute-in-place (XIP) and concurrency
With this approach, FOTA relies entirely on external NOR flash memory: code executes in place directly from one external flash area while new updates are simultaneously downloaded to another. However, the large-capacity NOR flash memory used for FOTA is logically configured as a single bank, even when using multi-chip packaging technology. Therefore, the use of XIP for FOTA is limited.
- Delta update
This update method transfers only the changed differences, or patches, rather than the entire software image. By reducing the amount of data transmitted, it reduces the time required for FOTA and saves on flash memory writes (program/erase cycles). Currently, optimized compression algorithm solutions are being employed to enable delta updates even on MCUs with low hardware specifications.
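As an illustration of why delta updates pay off, the sketch below implements a naive fixed-block delta: it scans two firmware images, keeps only the blocks that changed, and reports the transfer savings. Real delta engines (bsdiff-style tools, or the vendor-optimized algorithms mentioned above) do far better; this is a conceptual sketch only.

```python
def block_delta(old: bytes, new: bytes, block: int = 4096):
    """Return a list of (offset, data) patches for blocks that differ.
    Assumes equal-sized images for simplicity."""
    patches = []
    for off in range(0, len(new), block):
        if new[off:off + block] != old[off:off + block]:
            patches.append((off, new[off:off + block]))
    return patches

# Hypothetical 1-MiB image with a small localized change.
old_img = bytes(1024 * 1024)
new_img = bytearray(old_img)
new_img[200_000:200_016] = b"\x01" * 16       # 16 changed bytes
patches = block_delta(old_img, bytes(new_img))
delta_bytes = sum(len(data) for _, data in patches)
print(f"full image: {len(new_img)} B, delta payload: {delta_bytes} B")
# full image: 1048576 B, delta payload: 4096 B (one dirty block)
```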
Reliability and security enhancements
FOTA design goes beyond simply writing data. It focuses on maximizing uptime (zero downtime) by leveraging safety, efficiency, and continuity, and securely controlling flash memory within a Trusted Execution Environment (TEE).
- Integrity verification
To ensure that data written to flash memory has not been corrupted or altered, the digital signature of the downloaded data is verified using a hardware security module (HSM) or TrustZone. After writing to flash memory, a checksum or CRC check is performed over the entire area to detect defects in the flash memory (a minimal verification sketch follows this list).
- Rollback
If a boot failure occurs with a new update or software, the system must have the ability to immediately revert to the previous version.
- Flash memory life management (wear leveling)
Maximize the hardware lifespan of flash memory by preventing flash writes from being concentrated on specific areas of flash memory.
- Secure boot integration
Root of Trust (RoT) verifies that the software written to flash memory is signed by a trusted manufacturer.
- Secure storage
In addition to securing communication between the host and flash memory, flash memory must provide secure storage. The latest secure flash memory features a built-in HSM, enabling real-time encryption and decryption without performance degradation and providing secure storage capabilities.
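Below is a minimal sketch of the post-write verification step described under “Integrity verification”: hash the staged image, compare against the expected digest carried in the (signature-verified) update manifest, then CRC-check what was actually read back from flash. The digest source and read-back data are placeholders, not part of any vendor API.

```python
import hashlib
import zlib

def verify_image(staged: bytes, expected_sha256: str,
                 readback: bytes) -> bool:
    """Verify a downloaded image before marking its slot bootable."""
    # 1) Integrity check: the digest must match the value carried in the
    #    signature-verified update manifest.
    if hashlib.sha256(staged).hexdigest() != expected_sha256:
        return False
    # 2) Flash health check: the CRC of the data read back from flash must
    #    match the CRC of what was intended to be written.
    return zlib.crc32(readback) == zlib.crc32(staged)

# Hypothetical usage: the read-back would come from the platform's flash HAL.
image = b"\x00" * 1024                         # stand-in firmware image
manifest_digest = hashlib.sha256(image).hexdigest()
print(verify_image(image, manifest_digest, readback=image))  # True
```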
NOR in FOTA architecture
Among the explanations mentioned above, the FOTA architecture utilizing external NOR flash memory is a strategy that overcomes the physical limitations of embedded memory and maximizes system flexibility. As of 2026, the role of external NOR flash memory is becoming increasingly important due to the increasing size of firmware and strengthened security requirements.
FOTA utilizing external NOR flash memory offers overwhelming advantages over embedded methods in terms of safety, density, and flexibility, and is becoming the standard for industrial devices requiring high reliability and smart devices using large-capacity firmware. We will delve into the five key advantages of FOTA using external NOR flash memory.
- Scalability and cost efficiency
- Large image accommodation: Firmware containing the latest operating systems (RTOS, Embedded Linux), graphics libraries, and AI models often exceeds tens of MBs in size. Adding relatively inexpensive external NOR flash memory is more advantageous for reducing overall bill of materials (BOM) costs than increasing the internal flash capacity of expensive MCUs.
- Multi-image storage: Simultaneously storing multiple versions of firmware backups and user data images dramatically increases memory resource management flexibility.
- Provides a stable backup and rollback environment
- Fail-safe mechanism: Even if a power failure or communication error occurs during an update, the existing executable code in the internal flash remains intact. The replacement process only begins after the new image has been fully downloaded and verified to prevent bricking.
- Factory recovery: Factory recovery firmware can be stored in external memory. If a critical bug is discovered in a new version, it can be immediately restored to a stable previous version or factory settings from external memory without a server connection.
- Minimized downtime
- Non-intrusive background downloads: The internal flash memory focuses on running the current application, while the external flash memory receives data in the background via an independent bus. This facilitates zero-downtime implementation, ensuring device service is not interrupted even while receiving update packets.
- Bus separation: Using separate interfaces such as QSPI and OSPI prevents bus conflicts between internal memory access (command fetch) and external memory access (update write), minimizing system performance degradation.
- Extended flash life and maintainability
- Internal flash memory protection: Flash memory has a limited number of write/erase cycles (P/E cycles). During development with frequent updates or when frequent firmware changes are required, a significant portion of write operations are handled by external memory, protecting the life of the MCU’s internal flash, which cannot be replaced.
- Modular capacity expansion: Even if firmware capacity increases due to added functionality in the product lineup, the burden of hardware redesign is reduced because only the external flash memory can be replaced with a larger capacity without replacing the MCU.
- Security and data isolation
- Physical isolation: The executable code (internal) and the update standby image (external) can be physically separated and managed.
- Security update patch: By storing the firmware in an encrypted state in external memory and decrypting it only at boot time as it is loaded into internal memory or RAM, an additional layer of defense against firmware theft attacks can be added.
FOTA implementation
The success of a FOTA solution hinges on the ability to provide secure and seamless updates. The implementation of the above architecture will be key to achieving this.
The automotive industry is already responding to the changes that make FOTA essential. As the transition to software-defined vehicles (SDVs) becomes more concrete, demand for software updates is skyrocketing. This is because it enables flexible changes or additions to vehicle functions even after mass production, enabling rapid response to errors and defects and continuous delivery of new services to customers.
As the frequency of software updates increases, their importance is also increasing. United Nations Economic Commission for Europe (UNECE) WP.29 enacted R156 in June 2020, which now covers not only passenger cars, commercial vehicles, and trailers with towing devices, but also agricultural machinery equipped with software update capabilities.
UNECE WP.29 R155 and R156 define the requirements OEMs must meet in the areas of cybersecurity and software updates. UNECE regulations R155 and R156 introduce framework conditions for cybersecurity and software update capabilities for all vehicles. They also require automakers to establish certified Cyber Security Management Systems (CSMS) and Software Update Management Systems (SUMS).
R155 requires the establishment of a cybersecurity risk identification and response system, consideration of security throughout the entire vehicle lifecycle, documentation and maintenance of a CSMS based on ISO/SAE 21434, and submission of documentation and evidence during the Vehicle Type Approval (VTA) audit.
R156 addresses the security assurance of OTA or wired updates, change impact analysis and verification systems, update history management, and auditability. It’s based on the ISO 24089 standard for software updates.
The introduction of FOTA is no longer an option. It’s essential for improving system quality, adding new features, fixing vulnerabilities, enhancing system performance, and reducing recall costs.
We have examined the important considerations before adopting these new solutions. In addition to providing safe and fast update methods for improved user experience, we have also briefly discussed the security regulations that must be considered.
Scott Heo is lead principal engineer at Infineon Technologies.
Designer’s guide: Motor control and drivers

Motor control integrated circuits (ICs) and motor drives are essential elements for implementing smart manufacturing within the framework of Industry 4.0. A common requirement in modern industrial applications is high-efficiency motor solutions. About 50% of global electricity consumption is due to electric motors, and therefore, even a moderate improvement in efficiency can provide meaningful economic benefits, helping reduce the carbon footprint.
International efficiency standards for industrial motors, such as IE3 (Premium) and IE4 (Super Premium), have been introduced to reduce energy use. As of July 2023, European regulations mandate that three-phase induction motors between 75 kW and 200 kW adhere to the IE4 efficiency standard.
In addition to being more efficient, modern industrial motor solutions must be smart and connected. “Smart devices” are equipped with sophisticated capabilities. They can identify irregularities such as excessive heat or voltage surges and respond automatically. The introduction of AI technologies, such as machine learning, brings this function to the next level, allowing predictive maintenance and reducing factory downtime.
Connection is another key requirement for motor solutions deployed in the Industry 4.0 sector. This feature allows the devices to exchange data in real time, supporting predictive maintenance, energy efficiency improvements, and remote control. Using the industrial internet of things, electric motors can send operational data to cloud systems. This helps reduce downtime and allows for continuous improvement of production processes. Moreover, technicians can access performance data remotely, decreasing the need for on-site inspections and allowing faster troubleshooting.
Motor driver architecture
Motor driver electronics form the power interface between digital control systems and electromechanical loads. This architecture is based on three components: control logic, gate drivers, and power stages.
Control logic typically resides within microcontrollers (MCUs), digital-signal processors, or dedicated motor control ICs, which are engineered to perform real-time control loops. Subsequently, gate drivers transform these logic-level signals into switching commands, which are then employed to regulate power transistors, encompassing MOSFETs and IGBTs. The power stage, frequently implemented via inverter or H-bridge configurations, supplies the desired current to the motor windings.
Furthermore, in Industry 4.0 contexts, motor drivers incorporate supplementary functionalities, encompassing fault monitoring, thermal sensing, communication interfaces, and energy management capabilities. Motor driver ICs also feature integrated protective measures, such as overcurrent, overvoltage, and thermal shutdown mechanisms. These protections improve system reliability and simplify the design process.
Microchip Technology Inc. recently introduced a lineup of twelve 600-V gate drivers. These high-voltage drivers are designed to deliver output currents between 600 mA and 4.5 A. They are also available in a range of configurations, including half-bridge, three-phase driver, and high-side/low-side options.
These gate drivers facilitate rapid switching, thereby promoting efficient performance, and are particularly appropriate for industrial motor control applications. In addition, the logic inputs are compatible with standard TTL and CMOS levels, extending down to 3.3 V, which streamlines integration with conventional MCUs. The safe operation of the output power MOSFETs is ensured by Schmitt triggers on the inputs and an internal deadtime preset.
The MCP8062136, for instance, is a three-phase half-bridge device with three high-side drivers using bootstrap operation up to 600 V; it can provide 200-mA source and 350-mA sink output current. The gate drivers also include several protection features, including shoot-through protection logic, undervoltage lockout for VCC, and overcurrent protection.
Figure 1: Microchip’s high-voltage (600 V) MOSFET and IGBT silicon gate drivers, designed for a range of applications, including stepper motors, compressors, pump motors, motor drives, industrial inverters, and renewable energy systems (Source: Microchip Technology Inc.)
To drive motor-controlled industrial applications such as sensorless three-phase fans and pumps up to 40 W, Melexis has introduced the MLX81339 motor control IC. The device is also suitable for driving brushless DC (BLDC) and bipolar stepper motor control for accurate positioning in applications such as automated valves, flaps, and small robotic motors.
The MLX81339 supports several types of communication interfaces with a host MCU, including the legacy PWM/FG, as well as the I2C, UART, and SPI interfaces. The motor control IC offers several protection and diagnostics features, including undervoltage, overvoltage, overcurrent, and overtemperature detection and protection, and integrates a programmable flash memory that can be used for application customization and IC configuration.
Connectivity in smart motor control
In Industry 4.0 applications, motor drivers often adopt communication protocols, such as EtherCAT, Profinet, and Ethernet/IP, to exchange real-time data with other drives, sensors, or systems supervising the industrial network. Typical data that can be exchanged includes torque, speed, temperature, and vibration. When processed at the edge or remotely on the cloud, this data allows predictive-maintenance models to provide valuable insights into motor operation, helping to detect potential faults before they occur.
Drive units mounted directly on motors or industrial machines are becoming very common. These devices, which include embedded controllers and communication interfaces, reduce the wiring complexity and allow machines to be reconfigured quickly for different production requirements.
For example, the RA8T2 MCU from Renesas Electronics Corp. is optimized for industrial motor control. Based on a 1-GHz Arm Cortex-M85 processor (with an optional 250-MHz Arm Cortex-M33 processor available in the dual-core version), the RA8T2 is designed for industrial motor control applications that require real-time performance and a high-speed communication interface.
These devices (Figure 2) integrate a 14-channel PWM timer for motor control, different types of memories (including a low-latency and high-speed TCM memory), and analog functions in a single chip. They also provide a dual-channel Gigabit Ethernet MAC with DMA and an optional EtherCAT slave controller that supports synchronous networks in industrial fields.
Figure 2: Renesas’s RA8T2 MCU supports high-speed connectivity in industrial motor control applications. (Source: Renesas Electronics Corp.)
Wide-bandgap semiconductors
Wide-bandgap materials, such as silicon carbide (SiC) and gallium nitride (GaN), provide higher breakdown voltages, faster switching speeds, and lower on-resistance per unit area than silicon IGBTs and MOSFETs. From a designer’s perspective, this means that lower switching losses, improved thermal management, and higher operating frequencies can be achieved. These characteristics also lead to higher efficiency across the load range and a reduced footprint due to a reduced size of the passive components.
SiC is usually preferred in high-voltage and high-current applications above 600 V, such as high-power industrial drives and inverters. GaN, meanwhile, operates well in the 100- to 650-V range, with switching frequencies up to about 1 MHz. It is well-suited for mid-power motor drives in appliances, HVAC, pumps, small robots, and light industrial equipment.
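When weighing silicon against SiC or GaN for a drive’s power stage, a useful first-order screen is to split device loss into conduction and switching terms. The sketch below performs that split; the device parameters are round illustrative numbers, not values from any datasheet referenced here.

```python
def device_loss(i_rms_a: float, rds_on_ohm: float,
                e_sw_j: float, f_sw_hz: float) -> tuple[float, float]:
    """First-order MOSFET loss split: conduction (I^2 * R) and
    switching (energy per cycle * frequency)."""
    return i_rms_a ** 2 * rds_on_ohm, e_sw_j * f_sw_hz

# Hypothetical comparison at 10 A rms: a silicon device switched at 20 kHz
# vs. a WBG device with 10x lower switching energy run at 100 kHz.
for name, rds, esw, fsw in [("Si @ 20 kHz",   0.050, 500e-6, 20e3),
                            ("WBG @ 100 kHz", 0.050, 50e-6, 100e3)]:
    p_cond, p_sw = device_loss(10, rds, esw, fsw)
    print(f"{name}: conduction {p_cond:.1f} W, switching {p_sw:.1f} W")
# Si: 5.0 W + 10.0 W; WBG: 5.0 W + 5.0 W while switching 5x faster,
# which is what allows the smaller passive components noted above.
```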
Through a partnership, Qorvo Inc. and Cambridge GaN Devices (CGD) have developed the 400-W PAC5556AEVK2 and the 800-W PAC5556AEVK3 evaluation kits, suitable for developing motor control solutions in applications such as industrial fans, pumps, compressors, and white goods. The kits combine Qorvo’s PAC5556A mixed-signal system-on-chip with CGD’s ICeGaN HEMTs. The PAC5556A is a programmable 32-bit MCU that integrates a 600-V DC/DC buck controller and 600-V gate drivers.
The PAC5556AEVK2 evaluation kit features CGD’s 240-mΩ ICeGaN power devices, achieving up to 400-W peak performance without requiring a heat sink. The PAC5556AEVK3 integrates CGD’s 55-mΩ ICeGaN switches and provides a peak output power of 800 W, requiring minimal airflow cooling. The usage of GaN transistors improves the overall efficiency due to reduced power loss, reduces heat dissipation, and allows for smaller and more reliable motor control solutions.
Efficient Power Conversion (EPC), a company focused on e-mode GaN solutions, introduced the EPC91202 evaluation board for motor drive applications. It integrates a three-phase BLDC motor drive inverter built on the EPC2361 100-V eGaN FET and can provide an output current up to 70 A peak (50 A RMS), with a switching frequency up to 150 kHz.
The EPC91202 is designed to handle sensorless and encoder-based motor control, boasting a low-voltage change rate, specifically a dV/dt rate of under 10 V/ns. This low voltage change rate reduces electromagnetic interference and acoustic noise. This board is well-suited for developing motor drive applications in various sectors. These include industrial automation, e-mobility, robotics, drones, and battery-powered devices.
AI and ML integration
Integrating AI/ML in motor control systems offers a valuable way to analyze motor behavior during normal operation, helping to flag anomalies or impending faults before they occur. An example of an integrated hardware and software solution is the STSPIN32G4-ACT reference design and the FP-IND-MCAI1 STM32Cube function pack from STMicroelectronics.
The STSPIN32G4 is an advanced system-in-package that combines an STM32G431 MCU (based on an Arm Cortex-M4 core with CORDIC mathematical accelerator) with a three-phase gate driver. This architecture is specifically designed for controlling BLDC/permanent-magnet synchronous motors and provides the computing power needed to handle field-oriented control (FOC) algorithms, as well as local data analysis tasks (edge AI).
The FP-IND-MCAI1 software provides an implementation example for condition monitoring and predictive maintenance. This package collects data from internal sensors (current and voltage) and from external sensors (vibration and temperature), using it to feed pre-trained ML models.
ST’s NanoEdge AI Studio tool generates optimized libraries that run directly on the chip, enabling the drive to “learn” the motor’s normal behavior and detect anomalies (such as mechanical imbalances or bearing failures) in real time.
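For a feel of how that flow looks in application code, below is a minimal sketch of the learn-then-detect loop. It assumes the C API pattern of libraries generated by NanoEdge AI Studio (the neai_anomalydetection_* calls); verify names, buffer sizes, and return types against the header the tool actually emits. The get_sensor_window() hook and all constants are hypothetical.

```c
#include <stdint.h>
#include "NanoEdgeAI.h" /* header generated by NanoEdge AI Studio (assumed name) */

#define AXIS_SAMPLES 256     /* illustrative window length   */
#define LEARN_ITERATIONS 100 /* illustrative learning budget */

/* Hypothetical application hook that fills one window of
   current/vibration samples from the drive's sensors. */
extern void get_sensor_window(float *buf);

static float window[AXIS_SAMPLES];

void motor_monitor(void)
{
    neai_anomalydetection_init();
    /* Learning phase: capture the motor's healthy signature. */
    for (int i = 0; i < LEARN_ITERATIONS; i++) {
        get_sensor_window(window);
        neai_anomalydetection_learn(window);
    }
    /* Monitoring phase: flag departures from the learned behavior. */
    for (;;) {
        uint8_t similarity = 0; /* 0..100 score returned by the library */
        get_sensor_window(window);
        neai_anomalydetection_detect(window, &similarity);
        if (similarity < 90) {
            /* Raise a predictive-maintenance alert here. */
        }
    }
}
```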
Software tools
Several vendors offer software toolchains that cover the full development workflow, from motor parameter identification through algorithm configuration, real-time debugging, and production code generation.
Infineon Technologies AG recently expanded its ModusToolbox Motor Suite to include a hardware-abstracted motor control core library covering advanced algorithms such as FOC and trapezoidal control, multiple startup methods including rotor alignment and six-pulse injection for initial position detection, and SVPWM modulation schemes. The integrated graphical user interface (GUI) provides a configurator and testbench that auto-detects connected evaluation boards; a digital oscilloscope monitoring up to eight firmware variables simultaneously; a motor profiler for automated extraction of resistance, inductance, and inertia parameters; and a PID tuner for closed-loop optimization.
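To make the PID tuner’s job concrete, here is a generic discrete-time PI speed loop with integrator anti-windup, the kind of regulator such tools adjust. It is a vendor-neutral sketch with illustrative gains and a toy first-order motor model, not ModusToolbox code.

```c
#include <stdio.h>

/* Generic discrete-time PI speed loop with integrator anti-windup.
   Gains, limits, and the 1-kHz loop rate are illustrative only. */
typedef struct { double kp, ki, integ, out_min, out_max; } pi_ctrl;

double pi_step(pi_ctrl *c, double setpoint, double measured, double dt)
{
    double err = setpoint - measured;
    c->integ += c->ki * err * dt;
    if (c->integ > c->out_max) c->integ = c->out_max; /* anti-windup */
    if (c->integ < c->out_min) c->integ = c->out_min;
    double out = c->kp * err + c->integ;
    if (out > c->out_max) out = c->out_max;           /* clamp duty  */
    if (out < c->out_min) out = c->out_min;
    return out;
}

int main(void)
{
    pi_ctrl speed = { .kp = 0.0002, .ki = 0.5, .integ = 0.0,
                      .out_min = 0.0, .out_max = 1.0 };
    double rpm = 0.0; /* crude first-order stand-in for the motor */
    for (int k = 0; k < 5; k++) {
        double duty = pi_step(&speed, 3000.0, rpm, 0.001);
        rpm += (duty * 4000.0 - rpm) * 0.05; /* toy motor response */
        printf("t=%d ms  duty=%.3f  rpm=%.0f\n", k, duty, rpm);
    }
    return 0;
}
```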
Power Integrations released MotorXpert v3.0 last year, a suite developed for its BridgeSwitch motor driver ICs (Figure 3). It adds shuntless and sensorless FOC support, a two-phase modulation scheme that cuts inverter switching losses by 33% in high-temperature environments, and a five-fold improvement to its waveform visualization tool. The codebase is written to MISRA C guidelines and is MCU-agnostic, covering applications from 30 W to 750 W.
Figure 3: Power Integrations’ MotorXpert v3.0 offers an easy-to-use control interface and GUI. (Source: Power Integrations)
Other development tools available from leading semiconductor manufacturers include ST’s STM32 Motor Control SDK (X-CUBE-MCSDK) and Texas Instruments Inc.’s MotorControl SDK.
The post Designer’s guide: Motor control and drivers appeared first on EDN.
Negative resistance amplification

We once looked at how conducted emissions testing could be affected by the negative input impedance of a switch-mode power supply. Please see: “Conducted Emissions testing.”
Digital data signals that a client’s electric power company was putting on the power lines were being amplified by the negative input impedance of the power supply under test. That made it look as if the power supply itself were generating conducted emissions, which, in fact, it was not.
I have since been asked by someone, “How can a negative impedance result in amplification?” The sketch below will illustrate how that can come about.

Figure 1 Negative resistance amplification.
Let our “impedance” in question be a resistance. In our sketch, voltages E2 and E4 are derived by voltage dividers from identical “Esig” sources for which standard voltage division equations apply. What is NOT standard here is that we are going to set R4 to negative numerical values.
My SPICE simulator will not let me assign a negative number to any resistance value (I think of that as picky, picky, picky!), but that being the case, the voltage divider equations can be set up in GWBASIC. Line 150 of that code is where that happens.
With R1 and R3 arbitrarily set to 1K each and held there, we vary R2 and R4 together as shown to look at the effects on outputs E2 and E4, where we find the following.
E2 varies with the choice of R2 but is always a lesser voltage than Esig.
On the other hand, E4 varies with the negative value of R4 but is always a greater voltage than Esig.
This effect on E4 is the amplification effect referred to in the earlier essay.
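For anyone who wants to replicate the numbers, the sketch below redoes the divider arithmetic in C (not the original GWBASIC listing; the swept values are illustrative, and note that R4 = −R3 would make the divider blow up):

```c
#include <stdio.h>

/* Recreation of the divider arithmetic, not the original GWBASIC code:
   E = Esig * R_bottom / (R_top + R_bottom), with R1 = R3 = 1k fixed.
   R2 sweeps positive values; R4 sweeps the corresponding negatives. */
int main(void)
{
    const double Esig = 1.0, R1 = 1000.0, R3 = 1000.0;
    for (double R2 = 2000.0; R2 <= 10000.0; R2 += 2000.0) {
        double R4 = -R2;                    /* the negative resistance */
        double E2 = Esig * R2 / (R1 + R2);  /* always below Esig */
        double E4 = Esig * R4 / (R3 + R4);  /* always above Esig */
        printf("R2 = %6.0f: E2 = %.3f   R4 = %7.0f: E4 = %.3f\n",
               R2, E2, R4, E4);
    }
    return 0;
}
```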
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Conducted Emissions testing
- Vacuum tube negative resistance
- Diode classifications
- Coaxial Z—breaking down the impedance of a coaxial transmission line
The post Negative resistance amplification appeared first on EDN.
GaN ICs drive robotics and motion control

Four 100-V GaN power-stage ICs from EPC are optimized for motor drives in humanoid robots, drones, and battery-powered platforms. The EPC23108, EPC23109, EPC23110, and EPC23111 integrate a gate driver, high- and low-side eGaN FETs, and level-shifting circuitry in a half-bridge configuration. They support operation up to 100 V with load currents of 35 A (EPC23108, EPC23109) and 20 A (EPC23110, EPC23111).

The control interface includes an active-low fast-shutdown and standby input with a 65-kΩ pull-up. It meets industrial logic standards, letting designers connect directly to standard controllers. This simplifies designs and ensures consistent operation across platforms. Safety is enhanced through deterministic shutdown.
The series supports continuous 100% duty-cycle operation, enabling full-torque and uninterrupted conduction in motion control, robotics, and precision regulation systems. The EPC23109 and EPC23111 offer a single-pin PWM input with enable logic and fixed dead time, simplifying multi-axis designs. The EPC23108 and EPC23110 feature dual PWM inputs for adaptive dead-time modulation.
Engineering samples are available for qualified designs. The EPC23108, EPC23109, EPC23110, and EPC23111 can be ordered through EPC’s distributor partners.
The post GaN ICs drive robotics and motion control appeared first on EDN.
Tiny filters curb 5-GHz audio-line noise

Built with low-distortion ferrite material, TDK’s MAF0603GWY series of filters attenuates noise on audio lines in the 5-GHz band. The filters fit in a compact 0.6×0.3×0.3-mm package for use in small consumer devices like smartphones and wearables with Bluetooth and Wi-Fi audio lines.

Electromagnetic noise radiated from audio lines in electronic devices can interfere with the internal antenna and reduce receiver sensitivity. While chip beads are commonly used to suppress noise, they can degrade sound quality.
TDK reports its newly developed ferrite material minimally affects audio-line characteristics while reducing distortion. The filters provide high attenuation at 5 GHz (impedance up to 3220 Ω) to suppress noise, while their lower DC resistance than conventional products limits attenuation of the audio signal itself, preserving a wide dynamic range.

Mass production of the MAF0603GWY series is set to begin in April 2026.
The post Tiny filters curb 5-GHz audio-line noise appeared first on EDN.
Photovoltaic driver streamlines EV power designs

A photovoltaic MOSFET driver from Vishay, the VODA1275 increases safety and reliability in high-voltage automotive applications. The device provides a typical open-circuit voltage of 20 V, short-circuit current of 20 µA, and turn-on time of 80 µs—said to be three times faster than competing devices.

The AEC-Q102-qualified device targets pre-charge circuits, wall chargers, and battery management systems for EVs and HEVs. Its high open-circuit output voltage allows a single driver to be used, removing the need for two devices in series to generate higher voltages. The VODA1275 also enables custom solid-state relays to replace electromechanical relays in next-generation vehicles.
A working isolation voltage of 1260 Vpeak and isolation test voltage of 5300 VRMS make the driver well-suited for 800-V+ battery systems. The device comes in a compact SMD-4 package with an 8-mm creepage distance and a mold compound with a CTI of 600.
Samples and production quantities of the VODA1275 are available now, with lead times of eight weeks.
The post Photovoltaic driver streamlines EV power designs appeared first on EDN.
Shielded inductors reduce emissions in tight layouts

Bourns’ SRP2008DP series of shielded power inductors provides the saturation current needed for dense DC/DC converter designs and miniature electronic devices. These low-profile devices, with dimensions of just 2.0×1.6×0.8 mm, enable use in compact circuits with minimal routing changes.

The eight inductors in the SRP2008DP series cover inductances from 0.24 µH to 4.70 µH, heating current (IRMS) from 1.10 A to 3.50 A, and saturation current (ISAT) from 1.60 A to 5.50 A. DC resistance ranges from 36 mΩ to 468 mΩ, and operating temperature spans -40°C to +125°C.
In crowded layouts, radiated emissions and magnetic coupling can compromise signal integrity and complicate EMC compliance. The SRP2008DP series addresses these issues with a small, shielded package and a metal-alloy powder core. The shielded design contains magnetic flux, reducing emissions to nearby circuitry, while the high-resistivity core suppresses eddy currents and limits core losses at high switching frequencies. Contained flux also minimizes coupling to adjacent traces, lowering interference in densely populated layouts.
The SRP2008DP series is available through Bourns’ authorized distributors, and samples are available on request.
The post Shielded inductors reduce emissions in tight layouts appeared first on EDN.
RISC-V SoC supports voice-enabled IoT devices

Espressif Systems is sampling its ESP32-S31 dual-core RISC-V SoC with Wi-Fi 6, Bluetooth 5.4, Thread, Zigbee, and Ethernet. Rich HMI and security features make it well-suited for IoT applications such as consumer and industrial appliances, voice-controlled devices, and automation systems.

Running at 320 MHz, the ESP32-S31’s 32-bit RISC-V microcontroller achieves 6.86 CoreMark/MHz and integrates a memory management unit and 60 GPIOs for design flexibility. One of its two cores features a 128-bit-wide SIMD data path for fast parallel processing. Memory resources comprise 512 KB SRAM and support for 250-MHz, 8-bit DDR PSRAM, with concurrent flash and PSRAM access. External memory expansion (up to octal SPI) further supports memory-intensive multimedia and AI/ML workloads at the edge.
The ESP32-S31’s HMI capabilities include a DVP camera interface, LCD support, and up to 14 capacitive touch channels. Security features span secure key management, secure boot, flash and PSRAM encryption, cryptographic hardware acceleration, and a trusted execution environment. Supported by Espressif’s open-source IoT Development Framework, the device works with common LLMs to build voice-enabled client devices that run or interact with AI agents.
To request samples of the ESP32-S31 SoC, contact Espressif’s customer support team.
The post RISC-V SoC supports voice-enabled IoT devices appeared first on EDN.
Leveling up Industry 4.0

Industry 4.0 is all about transforming manufacturing processes with advances in smart capabilities, data connectivity, and automation. It encompasses everything from sensors that capture data to motor control and power devices that have a big impact on efficiency. Edge computing is also playing a larger role in combating challenges around latency, particularly in safety-critical applications, and cybersecurity is critical for protecting connected devices.
(Source: Adobe Stock)
The March/April issue covers some of the key components that are vital to Industry 4.0, from new sensing approaches such as event-based sensing that enable faster and more reliable decisions to the latest power-device designs that deliver higher efficiency in industrial systems. We also look at designing edge AI for industrial applications and designing industrial IoT systems for cybersecurity.
Machine vision plays a big role in industrial automation applications, ranging from object tracking to vibration monitoring. Prophesee believes the industry should be rethinking machine vision in industrial automation, addressing challenges around latency, data processing, and decision-making.
“As industrial systems move toward higher levels of automation and autonomy, vision is becoming a core component of the perception pipeline,” said Thibaut Willeman, head of business development and go-to-market at Prophesee.
This is driving the demand for new sensing approaches that address these challenges: reducing latency, limiting unnecessary data, and enabling faster and more reliable decisions, he added.
Willeman explains how event-based vision addresses these challenges: “By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared with traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.”
Applications that can benefit from event-based vision include industrial automation, IoT, automotive, and edge applications.
Another component area that has a large impact on Industry 4.0 applications is power electronics. As factories, energy systems, and data centers get smarter and more connected, they require more efficient power solutions that offer high power density, said contributing writer Stefano Lovati.
Lovati discusses some of the latest approaches to designing, packaging, and controlling power devices to deliver higher efficiency, flexibility, and scalability. One of the most significant changes introduced in power systems is the move to 800-VDC distribution in data centers.
There is also a key focus on wide-bandgap materials such as silicon carbide (SiC) and gallium nitride (GaN). SiC can operate efficiently and provide high reliability in high-voltage, high-power environments, thanks to its high breakdown voltage, low switching losses, and high thermal conductivity. GaN, suited for low- and medium-voltage applications, can switch at high frequencies (up to the megahertz range) with very low power loss, making power converters more efficient and smaller while requiring less cooling, Lovati said.
In addition, GaN is delivering on integration, which is helping to simplify power design.
Another big element of implementing smart manufacturing within Industry 4.0 is motor control ICs and motor drives. Similar to power devices, a big challenge is efficiency. “About 50% of global energy consumption is due to electric motors, and therefore, even a moderate improvement in efficiency can provide meaningful economic benefits, helping reduce the carbon footprint,” Lovati reports.
These modern industrial motor solutions are smart and connected with advanced capabilities to identify irregularities such as excessive heat or voltage surges and respond automatically. Lovati said the introduction of AI technologies brings this function to the next level, allowing predictive maintenance and reducing factory downtime.
He covers everything from motor driver architecture and connectivity in smart motor control to AI and ML integration and software tools.
Edge computing is becoming critical for real-time data processing in industrial automation. Industrial manufacturing systems require real-time decision-making, adaptive control, and autonomous operation, but many cloud-dependent architectures can’t deliver the millisecond response required for safety-critical functions such as robotic-collision avoidance, in-line quality inspection, and emergency shutdown, said Sam Al-Attiyah, head of machine learning at Infineon Technologies AG.
Al-Attiyah said edge AI addresses high-performance and low-latency requirements by embedding intelligence directly into industrial devices and enabling local processing to support machine-vision workloads for real-time defect detection, adaptive process control, and responsive human-machine interfaces that react instantly to dynamic conditions.
He outlines an approach to designing edge AI systems for industrial applications, covering everything from requirements analysis to deployment and maintenance.
Security is also a growing concern and an industry requirement as more devices are connected in industrial environments. Francesco Vaiani, senior product manager at Seco, looks at how designing for industrial IoT systems is changing to meet the European Cyber Resilience Act and the cybersecurity extension of the Radio Equipment Directive. This marks a structural shift in how connected products must be designed, documented, and maintained, he said.
For industrial OEMs, this means more than documentation updates and demands architectural decisions that remain technically defensible throughout the operational lifetime of the device, which often exceeds 10 years, Vaiani said.
Also in this issue, we select the top 10 DC/DC converters introduced over the past year. DC/DC converter manufacturers continue to focus on two big areas: delivering higher efficiency and offering greater flexibility.
Don’t miss the APEC 2026 product roundup. This annual conference showcases the latest in power electronics devices and solutions across industries. Some of these power devices highlight major technology advances in areas such as topologies and packaging, along with meeting growing demand for higher efficiency and higher power density. They also address system complexity by helping to simplify power design.
The post Leveling up Industry 4.0 appeared first on EDN.
METCASE expands accessory options for enclosures

METCASE’s new enclosure accessories brochure features its expanded range of options including metal tilt/swivel bail arms, a wide range of molded enclosure feet, PCB mounting parts, 19″ front panels, rack shelves and rack hardware.
(Source: METCASE USA)
These universal accessories fit METCASE models and other manufacturers’ enclosures, as well as bespoke OEM equipment housings. Applications include networking, communications, laboratory instrumentation, industrial control, test/measurement, peripherals, interfaces and medical devices.
Bail arms with 30° indexing double as desk stands. The aluminum handle profile (ordered separately) fits between two diecast side arms. It is supplied cut to the required length for the customer’s enclosure. The bail arms are available in a range of color combinations including off-white, anthracite, light gray, black and traffic white.
METCASE’s recently expanded range of molded ABS (UL 94 HB) enclosure feet kits can be specified with or without tilt legs. They are suitable for metal and plastic enclosures. There are two models: the robust CASE FEET and the designer TECHNOFEET. The feet are easy to fit (just three holes required) with the fixing screws supplied. TPE non-slip inserts are included to prevent the enclosure from skidding on the desk. Choose from five standard colors: off-white, traffic grey A, light gray, black, and anthracite.
For mounting circuit boards, METCASE offers a range of snap-in guides (for slide-in PCB fitment) in different lengths and for board thicknesses from 0.031″ to 0.078″. For screw fitting PCBs to enclosure panels, there is a kit that includes M3 PCB pillars (0.394″ high) and mounting hardware.
METCASE also offers a range of accessories for 19″ racks. This includes matt-anodized aluminum 10.5″/19″ front panels (ventilated or unventilated) in all standard heights from 1U to 6U; the 10.5″ front panels are available in 3U and 4U. There are also mild steel CR4 2U cantilever rack shelves for mounting equipment without rack brackets, in a choice of two depths, 11.02″ or 15.75″, in light gray or anthracite. 19″ equipment mounting kits include four bolts, four cup washers (black or gray), and four caged nuts.
For further information, view the METCASE website and download the accessories brochure: https://www.metcaseusa.com/en/Accessories/Accessories-for-Enclosures.htm
The post METCASE expands accessory options for enclosures appeared first on EDN.
Advancing AI performance with HBM4, SPHBM4 DRAM solutions

Over the past two decades, the raw compute capability of processors used in high‑performance computing (HPC) and artificial intelligence (AI) systems has increased at an extraordinary pace. Figure 1 illustrates this trend: XPU floating‑point performance has scaled by more than 90,000×, while DRAM bandwidth and interconnect bandwidth have improved by only about 30× over the same period.

Figure 1 The above chart highlights increases in XPU performance and interconnect bandwidth over 20 years.
This growing disparity between compute capability and data movement—often described as the memory wall and the I/O wall—has become one of the most significant constraints on achievable system performance.
For system designers, this imbalance translates directly into underutilized compute resources, rising power consumption, and increasing architectural complexity. As a result, memory bandwidth and packaging technologies have become just as critical to AI performance scaling as transistor density or core count.
HBM as a foundation for modern AI architectures
To address these bandwidth challenges, HPC and AI systems have increasingly adopted disaggregated architectures built around chiplets. While LPDDR and DDR memories continue to play important roles, high bandwidth memory (HBM) has emerged as the highest‑bandwidth DRAM solution available and a key enabler for modern accelerators.
HBM devices consist of a buffer (or base) die at the bottom and multiple 3D‑stacked DRAM layers above it. The buffer die uses very fine‑pitch micro‑bumps, allowing the memory stack to be co‑packaged with an ASIC using advanced packaging technologies such as silicon interposers or silicon bridges. Supported by rigorous standardization through the JEDEC HBM task group, HBM has become one of the most successful and widely adopted examples of chiplet‑based integration in production systems.
Figure 2 shows a representative side view of an HBM DRAM stack connected to an ASIC through a silicon interposer.

Figure 2 Here is how an HBM DRAM stack is connected to an ASIC through a silicon interposer. Source: Eliyan
A widely deployed example of HBM in practice is Nvidia’s B100 Blackwell accelerator, shown in Figure 3. The package contains two large, reticle‑sized XPU dies connected to one another through high‑bandwidth links, with HBM devices placed along the top and bottom edges of each die. Each XPU die integrates four HBM stacks—two on each long edge—resulting in a total of eight HBM devices per package.

Figure 3 Nvidia’s B100 Blackwell accelerator uses two XPUs connected to eight HBMs in a single package. Source: Nvidia
Using typical HBM3 specifications available at the time the JEDEC standard was adopted, each HBM3 device could employ an 8‑high stack of 16-Gb DRAM layers, providing 16 GB of capacity per stack. With a data rate of 6.4 Gb/s and 1,024 I/Os, each HBM3 device delivers approximately 0.8 TB/s of bandwidth. Across eight devices, this configuration provides 128 GB of total memory capacity and roughly 6.6 TB/s of aggregate bandwidth.
HBM4: Scaling bandwidth and capacity
To continue scaling memory performance alongside compute, JEDEC recently published JESD270‑4, the HBM4 standard. HBM4 introduces a number of architectural improvements over HBM3 that directly address the growing bandwidth and capacity requirements of AI workloads.
One of the most significant changes in HBM4 is a doubling of the channel count, increasing the number of I/Os from 1,024 to 2,048. In parallel, supported data rates have increased into the 6–8 Gb/s range and beyond. Memory density has also scaled, with 24 Gb and 32 Gb DRAM layers specified, along with support for 12‑high and 16‑high stacks. Reliability, availability, and serviceability (RAS) features—including directed refresh management (DRFM)—have also been enhanced.
Taken together, these advances enable substantial improvements in bandwidth, power efficiency, and capacity relative to HBM3. As an illustrative example, an HBM4e device using a 16‑high stack of 32 Gb layers provides 64 GB of capacity per device, as shown in Figure 4.

Figure 4 Eight HBM4 devices are shown in an example package, achieving increased total capacity and bandwidth. Source: Eliyan
With 2,048 I/Os operating at 8 Gb/s, such a device can deliver up to 2 TB/s of bandwidth. In a package containing eight HBM4 devices, total memory capacity scales to 512 GB—four times that of the earlier HBM3 example—while aggregate bandwidth exceeds 16 TB/s, a 2.5× increase.
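These package-level figures fall straight out of the stack parameters. As a sanity check, the short sketch below reproduces the HBM3 and HBM4 numbers quoted above using decimal units (1 GB/s = 8 Gb/s), with all inputs taken from the text:

```c
#include <stdio.h>

/* Reproduces the package-level HBM math quoted in the text.
   Decimal units throughout: 1 GB/s = 8 Gb/s, 1 TB/s = 1,000 GB/s. */
typedef struct {
    const char *name;
    int ios;          /* data I/Os per stack     */
    double gbps;      /* per-pin data rate, Gb/s */
    int stack_height; /* DRAM layers per stack   */
    int layer_gbit;   /* density per layer, Gb   */
    int stacks;       /* stacks per package      */
} hbm_cfg;

static void report(hbm_cfg c)
{
    double bw_stack_gbs = c.ios * c.gbps / 8.0;                /* GB/s */
    double cap_stack_gb = c.stack_height * c.layer_gbit / 8.0; /* GB   */
    printf("%s: %.1f TB/s and %.0f GB per package\n", c.name,
           c.stacks * bw_stack_gbs / 1000.0, c.stacks * cap_stack_gb);
}

int main(void)
{
    report((hbm_cfg){ "HBM3 (8-high, 16-Gb layers)", 1024, 6.4, 8, 16, 8 });
    report((hbm_cfg){ "HBM4 (16-high, 32-Gb layers)", 2048, 8.0, 16, 32, 8 });
    return 0;
}
```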
Custom HBM and the role of the base die
As HBM4 adoption accelerates, some system designers are exploring the development of custom HBM solutions optimized for specific applications. A key enabler of this trend is the evolution of the HBM base die.
In earlier HBM generations, the base die was typically manufactured using a DRAM‑optimized process, well suited for capacitor structures but less optimal for high‑speed logic. With HBM4, most suppliers are transitioning to standard advanced logic processes for the base die. This shift aligns more closely with the processes already familiar to SoC designers and opens the door to customization opportunities.
Whether using standard or custom HBM4 devices, these solutions continue to rely on advanced packaging and silicon substrates—such as interposers or bridges—to accommodate the large number of fine‑pitch connections between the memory and the ASIC.
SPHBM4: Bringing HBM‑class bandwidth to organic packaging
Despite its performance advantages, traditional HBM integration requires advanced packaging, which can increase cost and complexity. Many system designers, particularly those focused on volume production and reliability, prefer standard organic substrates. To address this gap, JEDEC has announced that it is nearing completion of a new standard for Standard Package High Bandwidth Memory (SPHBM4).
SPHBM4 devices use the same DRAM core dies as HBM4 and provide equivalent aggregate bandwidth, but they introduce a new interface base die designed for attachment to standard organic substrates. Figure 5 illustrates a side view of an SPHBM4 DRAM mounted directly on an organic package substrate, alongside an ASIC. The ASIC itself may also reside on the organic substrate, or it may remain on advanced packaging such as a silicon bridge for multi‑XPU integration.

Figure 5 Side view of an SPHBM4 DRAM-to-ASIC connection, with the SPHBM4 DRAM attached directly to the organic package substrate. Source: Eliyan
To achieve HBM4‑class throughput with fewer pins, SPHBM4 employs higher interface frequencies and serialization. While HBM4 defines 2,048 data signals, SPHBM4 is expected to use 512 data signals with 4:1 serialization, enabling the relaxed bump pitch required for organic substrates.
Because SPHBM4 uses the same DRAM stacks as HBM4, per‑stack capacity remains unchanged. However, organic substrate routing supports longer channel lengths between the SoC and the memory, which can enable new system‑level trade‑offs. In particular, longer routing distances and angled trace routing can allow more memory stacks to be placed around a given die.
Figure 6 illustrates this effect. When HBM devices are mounted on silicon substrates, they must be placed immediately adjacent to the XPU, limiting the number of stacks to two per 25-mm die edge. With SPHBM4 on an organic substrate, three memory devices can be connected along the same edge, increasing both memory capacity and bandwidth by approximately 50%.

Figure 6 Twelve SPHBM4 devices in an example package boost capacity and total bandwidth. Source: Eliyan
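The bandwidth equivalence and the placement gain are both simple arithmetic. In the check below, the 32-Gb/s serialized pin rate is an inference from 4:1 serialization at equivalent aggregate bandwidth, not a figure from the draft standard:

```c
#include <stdio.h>

int main(void)
{
    /* 512 pins with 4:1 serialization must run at 4x the HBM4 pin rate
       (8 Gb/s -> 32 Gb/s, inferred) to match HBM4's 2,048-pin interface. */
    double hbm4_gbs   = 2048 * 8.0  / 8.0; /* 2,048 GB/s per stack */
    double sphbm4_gbs =  512 * 32.0 / 8.0; /* 2,048 GB/s per stack */
    /* Three stacks per die edge instead of two: 12 vs. 8 per package. */
    printf("per-stack: %.0f vs. %.0f GB/s; package gain: %.0f%%\n",
           hbm4_gbs, sphbm4_gbs, (12.0 / 8.0 - 1.0) * 100.0);
    return 0;
}
```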
Even when a silicon substrate is still used beneath the XPU—for example, to support high‑bandwidth XPU‑to‑XPU links—the overall interposer size can be significantly reduced when memory devices are moved to the organic package. This reduction can translate into meaningful benefits in system cost, manufacturability, and test complexity.
Looking ahead
AI workloads continue to push the limits of memory bandwidth, capacity, and packaging technology. JEDEC’s HBM4 standard represents a major step forward in addressing these demands, while the emerging SPHBM4 standard expands the design space by enabling HBM‑class performance on standard organic substrates.
For system architects, these technologies offer new flexibility in balancing performance, cost, and integration complexity. As memory and packaging increasingly shape overall system capability, early consideration of options such as HBM4, custom HBM, and SPHBM4 will be essential to fully unlocking the next generation of AI and HPC performance.
Kevin Donnelly is VP of strategic marketing at Eliyan.
Related Content
- What the special section on chiplets design has to offer
- Chiplet innovation isn’t waiting for perfect standards
- Scoping out the chiplet-based design flow
- Demystifying 3D ICs: A practical framework for heterogeneous integration
- Chiplets: 8 best practices for engineering multi-die designs
- Overcoming interconnect obstacles with co-packaged optics (CPO)
The post Advancing AI performance with HBM4, SPHBM4 DRAM solutions appeared first on EDN.
Rethinking machine vision in industrial automation

Machine vision has always played a critical role in ensuring safe, efficient, and reliable operation in many industrial settings. However, as vision-enabled machines become more numerous and the type and volume of data they can collect expand, challenges are forcing system makers to look at new approaches to efficiently acquire, process, and utilize visual data.
If we look at the current challenges, they span the spectrum in terms of improving operational efficiency, accuracy, and reliability.
Data overload and processing bottlenecks that limit throughput are major issues as industries move toward more advanced, faster automation, tasking vision systems with capturing and analyzing vast amounts of data. Traditional vision systems often struggle with the sheer volume of images they capture, much of which can be redundant. The requirement now is not just about capturing high-resolution images but doing so in a way that first and foremost accelerates throughput (in part by minimizing irrelevant data) while maximizing the precision and relevance of the information captured.
Real-time processing is becoming increasingly important, especially in environments where machines need to make instantaneous decisions, such as in quality control or defect detection on production lines. This requires more efficient processing methods and data reduction techniques.
High-speed and high-precision demands increase as production lines get faster. High-speed processing, low latency, and the ability to capture minute changes in a scene in real time are critical. Traditional frame-based systems struggle with motion blur and data overload when capturing fast-moving objects. For example, in applications such as high-speed counting, even the slightest delay in image acquisition and processing can lead to errors.
Sustainability is a growing priority, as many industrial systems operate in environments where power efficiency is key. Vision systems need to operate for extended periods without consuming significant amounts of energy. Traditional image-processing systems, especially those that capture entire frames at a fixed rate, can be power-intensive and require sophisticated cooling or energy management.
Complex lighting and environmental conditions are common in many settings, including extreme brightness, low light, or dynamic lighting scenarios. Vision systems need to cope with high-dynamic-range requirements to capture high-quality images without losing detail in either the darkest or brightest areas. Conventional frame-based systems have struggled in such conditions, leading to the need for more adaptable and sensitive vision technologies.
Predictive maintenance and condition monitoring are growing needs. Vision systems must not only react to issues but also help to predict potential problems before they occur. Predictive maintenance requires vision systems that can monitor machine vibrations, detect wear and tear, and identify early signs of equipment failure.
These challenges point to a more fundamental limitation: Traditional frame-based vision was designed for image capture and human viewing, not for machines that must detect, interpret, and react to changes in real time. As industrial systems move toward higher levels of automation and autonomy, vision is becoming a core component of the perception pipeline.
This shift is driving demand for sensing approaches that reduce latency, limit unnecessary data, and enable faster, more reliable decisions across applications such as monitoring, inspection, counting, and control.
Event-based vision addresses these challenges
Event-based vision, inspired by the human eye and brain, is increasingly used in industrial machine vision to address these challenges. By mimicking biological vision, this technology utilizes efficient sensing and collection techniques that capture changes within a specific scene. This reduces processing requirements compared with traditional frame-based methods while revealing details that conventional systems miss, opening new possibilities for precision and performance in industrial applications.
Event-based vision is particularly suited for industrial automation, IoT, automotive, and edge applications that demand high performance, low power consumption, and operation in challenging lighting conditions. The technology offers significant advantages in speed, power efficiency, dynamic range, and low latency, driving use cases such as high-speed counting, preventive maintenance, and inspection.
From frame-based imaging to event-based perception
In conventional video systems, entire images (i.e., the light intensity at each pixel) are recorded at fixed intervals, known as the frame rate. Standard movies are recorded at 24 fps, with some videos using higher frame rates like 60 fps (16.7-ms intervals). While effective for representing the “real world” on a screen, this method oversamples unchanged parts of an image, especially at high frame rates, while undersampling the most dynamic areas. As a result, critical motion information can be missed between frames.
In contrast, the human eye samples changes up to 1,000× per second without focusing on static backgrounds at such high frequencies. Event-based sensing offers a biologically inspired solution to this under- and oversampling. Unlike traditional cameras, event sensors don’t use a uniform acquisition rate (frame rate) for all pixels. Instead, each pixel defines its sampling points by reacting to changes in the amount of light it detects. Information about contrast changes is encoded in “events”—data packets containing the pixel’s coordinates and the precise time of the event.
Figure 1: Frame-based vs. event-based sensing—discrete frame sampling vs. continuous motion capture (Source: Prophesee)
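To make the event representation concrete, here is a vendor-neutral sketch of an event record and a simple change-only processing step (counting events in a region of interest). The types and field widths are illustrative, not Prophesee Metavision SDK definitions:

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* Generic event record as described above: pixel coordinates, polarity
   of the contrast change, and a microsecond timestamp. */
typedef struct {
    uint16_t x, y;   /* pixel coordinates             */
    int8_t polarity; /* +1 brighter, -1 darker        */
    uint64_t t_us;   /* event timestamp, microseconds */
} event_t;

/* Count events inside a region of interest over a time window: the kind
   of sparse, change-only processing the text describes. */
size_t count_roi_events(const event_t *ev, size_t n,
                        uint16_t x0, uint16_t y0, uint16_t x1, uint16_t y1,
                        uint64_t t_start, uint64_t t_end)
{
    size_t hits = 0;
    for (size_t i = 0; i < n; i++) {
        if (ev[i].t_us < t_start || ev[i].t_us > t_end) continue;
        if (ev[i].x >= x0 && ev[i].x <= x1 && ev[i].y >= y0 && ev[i].y <= y1)
            hits++;
    }
    return hits;
}

int main(void)
{
    event_t stream[] = {
        { 10, 12, +1, 100 }, { 11, 12, -1, 250 }, { 300, 40, +1, 400 },
    };
    printf("%zu events in ROI\n",
           count_roi_events(stream, 3, 0, 0, 100, 100, 0, 500));
    return 0;
}
```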
Prophesee’s patented event-based sensors, for instance, allow each pixel to activate intelligently based on detected contrast changes. This enables continuous acquisition of essential motion information at the pixel level. The pixels operate asynchronously (unlike traditional CMOS cameras) and at much higher speeds, as they don’t need to wait for a complete frame before reading data.
The advantages of event sensors include high-speed operation (equivalent to 10,000 fps), extremely efficient power consumption (down to the microwatt range), low latency, reduced data processing requirements (10× to 10,000× less than frame-based systems), and high dynamic range (up to 140 dB).
Because only changes are transmitted, event-based data streams are inherently sparse and temporally precise, allowing downstream processing systems—including AI-based processing—to focus on what matters: motion, variation, and anomalies rather than static background information. These attributes make event-based vision systems suited for a wide range of applications and products.
This technology is being commercialized more widely, such as in Prophesee’s Metavision, which has evolved over the past decade to deliver high performance through integrated hardware and software solutions.
Real-time industrial automation with event-based vision
Event-based vision excels in a variety of industrial automation applications. Typical use cases (see Figure 2) range from object tracking and high-speed counting to predictive maintenance and quality control.
Figure 2: Applications of event-based vision in industrial automation (Source: Prophesee)
Safety: Object tracking
Event-based vision systems excel at tracking moving objects, leveraging their low data rate and sparse information capabilities. This approach allows for precise object tracking with minimal computational resources, eliminating traditional “blind spots” between frame acquisitions. Additionally, event sensors offer native segmentation, focusing solely on movement and disregarding static backgrounds for improved tracking accuracy and efficiency. Event-based vision enhances safety by monitoring worker and machine interactions in real time, even in complex lighting, without capturing images.
Productivity: High-speed counting
Real-time vision systems powered by event-based sensing enable objects to be counted at unprecedented speeds with high accuracy and minimal motion blur. Sensors independently trigger each pixel as objects pass through the field of view, achieving a throughput of over 1,000 objects per second and an accuracy of more than 99.5%, ensuring rapid and precise counting in high-speed environments.
Predictive maintenance: Vibration monitoring
Event-based vision enables continuous, remote vibration monitoring with pixel-level precision. By tracking the temporal evolution of each pixel in the scene, the sensors record each event’s coordinates, polarity of change, and exact timestamp. This data provides valuable insights into vibration patterns across frequencies from 1 Hz to the kilohertz range, aiding in predictive maintenance.
Figure 3: Event-based vibration monitoring in industrial systems; frame-based imaging shown for reference (Source: Prophesee)
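As a toy illustration of how a frequency estimate can fall out of event timestamps alone, the sketch below assumes the monitored edge sweeps across a pixel twice per vibration cycle and that each sweep is recorded as one polarity flip; production pipelines filter and bin events far more carefully:

```c
#include <stddef.h>
#include <stdio.h>

/* Naive first-order estimate under the assumption above:
   frequency = flips / (2 * observation span). */
double estimate_vibration_hz(const unsigned long long *flips_us, size_t n)
{
    if (n < 2) return 0.0;
    double span_s = (double)(flips_us[n - 1] - flips_us[0]) / 1e6;
    return (double)(n - 1) / (2.0 * span_s);
}

int main(void)
{
    /* Synthetic flips 5 ms apart -> two flips per 10-ms cycle = 100 Hz. */
    unsigned long long t[9];
    for (size_t i = 0; i < 9; i++) t[i] = i * 5000ULL;
    printf("estimated: %.1f Hz\n", estimate_vibration_hz(t, 9));
    return 0;
}
```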
Quality: particle/object size monitoring
In high-speed production environments, event-based sensing allows for real-time control, counting, and measurement of particle or object sizes on conveyors or channels. The sensors capture instantaneous quality statistics, ensuring accurate process control at speeds of up to 500,000 pixels per second with a counting precision of 99%, optimizing quality assurance in production lines.
Figure 4: High-speed event-based particle counting and size monitoring; frame-based image shown for reference (Source: Prophesee)
Quality control
Event-based vision systems help lower reject rates with real-time feedback and advanced processing down to a 5-µs time resolution and blur-free asynchronous event output. One specific use case is in the automatic detection and classification of the finest imperfections in manufacturing materials—for example, in automotive parts to perform paint defect inspection, scratch detection, and planarity testing (see Figure 5).
Figure 5: Event-based surface contamination and defect detection in industrial production (Source: Prophesee)
As event-based vision continues to evolve and address diverse market needs, it is establishing itself as a new industry standard. Over the past several years, the technology has expanded to serve a wide array of applications.
Thousands of product developers are now adopting event-based vision for sophisticated camera and perception systems, supported by open-source technology and a growing inventors’ community. These advancements are transforming how machines perceive, process, and react to visual information in real time, bringing greater precision, efficiency, and intelligence to industrial automation operations.
Thibaut Willeman is head of business development and go-to-market at Prophesee, where he works on the market development of event-based vision systems for industrial automation, robotics, and defense applications. He previously held strategy and innovation roles at companies such as Boston Consulting Group, working on growth strategy, product strategy, and innovation initiatives for industrial and technology companies. He holds an engineering degree and a master’s degree in innovation and technology management.
The post Rethinking machine vision in industrial automation appeared first on EDN.
Humidifiers and such: How much “smart” is too much?

This engineer’s new humidifier is—he kids you not—Wi-Fi enabled, therefore “smart”. What upsides does such a product deliver? And at what tradeoffs?
Within one of last month’s writeups, I mentioned that my wife and I had recently acquired two DREO 4-liter ionizing humidifiers. That purchase led to my interest in hygrometers (humidity-measurement devices) such as the TP-Link Tapo T315, which supplanted the bad humidity data I’d previously relied on from my furnaces’ touchscreen thermostats.
Ionizing advancements
The baseline DREO HM311:
relies on front panel buttons for user control purposes. It works well, and I enjoy the dynamic bubbling-water “light show” projected through the center mist tube, particularly visible at night:

The ionizing design approach is also interesting; just make sure to remember to keep ‘em clean:

Its slightly more expensive “smart” sibling, the HM311S, adds Wi-Fi support, thereby making it controllable (and more broadly manageable) via a mated smartphone or other mobile device:

or even, courtesy of its integrated Amazon Alexa and Google Assistant support, your voice:
And the tri-color mist tube (which I’d been calling a “pillar” until I revisited the user manual just now) is a handy visual reference to the current measured humidity level (I’ve yet to see blue):
Light Color | Humidity Level
Yellow | ≤30%
Green | 31-60%
Blue | ≥61%
Believe it or not, the HM311S is even the beneficiary of periodic firmware updates, such as the one that I was prompted to install as part of initial out-of-box setup:

Another update, I noticed, was available as I re-accessed the device via my smartphone two-plus months later, just prior to writing these words:

And yes, the humidifier’s status and settings are even accessible over the Internet; note the cellular-only connection in the following screenshot (per the reported 436 hours of use to date, this was an Amazon Warehouse-sourced, apparently previously-used unit, even though it arrived in seemingly brand-new condition):

Nifty. But also potentially (more than) a bit scary. First off, what’s the realistic benefit (if any) of remote status monitoring from my mobile device? It’s not like I have a robot sitting at home in my absence that can alternatively grab a water pitcher, fill it and transfer its contents to the humidifier if it empties, after all. Not yet, at least:

More generally, is it convenient to turn the humidifier on and off (and raise and lower its output intensity) from the couch, using either the aforementioned smartphone or my voice? Sure. But on the other hand, I could also always use the exercise. And what do I give up in exchange for all this supposed connectivity “goodness”?
For one thing, I’m sharing WAN IP address, device usage and ambient analytics data with the manufacturer. For a humble DREO humidifier, maybe this degree of reveal isn’t such a big deal. But what about my Google Nest Wifi mesh network, similarly managed via the cloud? Or my Blink security camera setup, which leverages cloud services not only for monitoring and control purposes but also to store recordings (at least currently; stay tuned for next week’s teardown)?
And what happens if those cloud services, not only from DREO (or its Amazon Alexa partner), Google or Blink but any other similar supplier, get hacked? Sure, it’s annoying to have someone remotely switching on and off your humidifier out of your control. That time someone used my then-firewall-exposed IPP port to spit pages (and pages and pages) of gibberish out of my laser printer was a bit more annoying. But that’s not what I’m talking about when I say “scary”.
The hackers now know who I am from my account profile and can easily determine my location via an online search using my name. Since they know my WAN IP address, they can now attempt to hack me. They also know my Wi-Fi network credentials, which makes it even easier to get inside my LAN if, since they now know my location, they’re motivated to pull up and park on the street outside. They know my account username and password, which theoretically should be unique to this particular cloud service but—get real—is undoubtedly reused elsewhere. And for a paid cloud service, they also now know my credit card and/or bank account info. Fun times!
Is elementary (especially) convenience worth the potential consequences? If you’re a consumer, it’s a question you should be asking yourself pre-purchase…although you’re likely to be unaware of the possible downsides. Therefore, if you’re a manufacturer, it’s a question you should be asking on behalf of your potential customers during the initial development process…although you’ve also got marketing breathing down your neck for new features, and your competitors may have already unveiled similar capabilities, so you’re also under late-to-market pressure, so…
When, if ever, is a product too “smart”? Or taking the thought to the other end of the extremist spectrum, should products be “smart” at all, at least for the mass market? As always, I welcome your thoughts in the comments!
—Brian Dipert is the associate editor, as well as a contributing editor, at EDN Magazine.
Related Content
- The Tapo Hub: TP-Link joins the low-bandwidth, long-range RF club
- The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plan
- Blink: Security cameras with a power- and bandwidth-stingy uplink
- IoT device vulnerabilities are on the rise
The post Humidifiers and such: How much “smart” is too much? appeared first on EDN.
Top 10 DC/DC converters and modules

DC/DC converters for demanding applications, ranging from industrial equipment, railway systems, and satellites to communications and information technology equipment (ITE), must meet stringent requirements: enhanced performance and high reliability, including operation in extreme conditions, often in compact designs.
Over the past year, DC/DC converter manufacturers have focused on providing higher efficiency, offering greater flexibility with more options, saving board space with smaller packages, and delivering more cost-effective solutions. These devices are available in a variety of form factors, including brick types, DIPs, and modules.
Here’s a sampling of DC/DC converters introduced over the past year that deliver improvements in performance and packaging while providing the right-sized features for the application.
Meeting demanding requirements
Many of the latest families of DC/DC converters are designed to operate in demanding and harsh environments, including industrial, railway, ITE, and communications. They also often need to fit into tight spaces.
XP Power recently developed a family of DC/DC converters for space-constrained applications in demanding environments such as industrial, ITE, and communications systems. The BCT40T series of 40-W DC/DC converters offer high power density in a 1 × 1-inch (25.4 × 25.4-mm) package.
The BCT40T series features high efficiency, up to 89% depending on the model, and remote on/off functionality to enable energy savings and safe shutdowns. The series offers a wide 4:1 input voltage range, enabling operation across multiple input voltages. Models are available with nominal 24-VDC inputs (ranging from 9.0 V to 36.0 VDC) and 48-VDC inputs (ranging from 18.0 V to 75.0 VDC).
The devices operate over a wide operating temperature range of −40°C to 105°C and a broader full-load operating temperature range than many alternatives, XP Power said.
The BCT40T offers single regulated outputs ranging from 3.3 V to 24 VDC, as well as dual regulated outputs at ±12 VDC and ±15 VDC. The single-output models offer the flexibility of ±10% output voltage adjustment via an external trim resistor, enabling specific voltage requirements.
Targeting applications such as test and measurement, robotics, process control, analytical instruments, and communications equipment, these DC/DC converters feature an ultra-compact metal package that saves printed-circuit-board (PCB) area and allows more room for customer application circuitry, according to XP Power. In addition, these devices are smaller than many 40-W alternatives, which typically come in larger, 2 × 1-inch (50.8 × 25.4-mm) packages, reducing required board space by 50%.
The series meets worldwide safety approvals, including IEC/UL/EN62368-1 standards, as well as applicable CE and UKCA directives. It also complies with EN55032 Class A/B for conducted and radiated emissions and EN61000-4-x for immunity. The BCT40T series is available now.
XP Power’s BCT40T series (Source: XP Power)
Murata Manufacturing launched a high-performance, 1-W DC/DC converter with reinforced isolation and ultra-low capacitance, targeting communications and analog front-end measurement circuits.
The NXJ1T series addresses the need for robust isolation, delivering high electrical isolation, noise immunity, and thermal reliability for industrial, energy, and medical applications with 4.2-kVDC isolation (Hi Pot Test) and compliance with UL62368 safety standards.
The NXJ1T series, housed in a compact, 10.55 × 13.70 × 4.04-mm footprint, is designed for safety and durability in demanding environments. It features an unregulated, 1-W 5-V input to 5-V/200-mA output design, which is suited for embedded systems.
Each device delivers reinforced insulation to 200 Vrms and basic insulation to 250 Vrms. This adds a layer of protection in high-voltage environments. The undervoltage lockout (UVLO) functionality enhances operational stability, which prevents erratic behavior under fluctuating power conditions, Murata said.
These devices can also be used in medical equipment, where low leakage current is critical for patient-connected applications. They feature ultra-low isolation capacitance, which helps minimize unwanted leakage, supporting compliance with stringent safety standards such as IEC 60601-1 when used within a certified system, the company said.
The DC/DC converters also leverage proprietary molding technology, providing high ingress protection against dust and particulates in harsh industrial environments and extreme temperatures. The device has successfully undergone 1,000 temperature cycles between −40°C and 125°C, demonstrating its ability to withstand the highest levels of thermal stress, Murata said.
The series also uses Murata’s proprietary block-coil transformer technology, providing high isolation and low leakage current, and facilitates lower switching frequencies (500 kHz to 2 MHz) and higher efficiencies of approximately 80%.
The result is exceptional common-mode transient immunity and significantly lower isolation capacitance, according to Murata, making it suited for high-performance power isolation in electrically noisy environments.
Recom GmbH developed a 20-W DC/DC converter in a compact, 1.6 × 1 × 0.4-inch (40.6 × 25.4 × 10.2-mm) package, calling it a new level of high efficiency in DC/DC performance. The RPA20-FR series, targeting rail applications, delivers 20 W over its full 36-VDC to 160-VDC input range (200-VDC peak for 1 second) from −40°C to 70°C, and up to 105°C with derating.
The series offers fully regulated, low-noise, and protected single outputs (5 V, 5.1 V, 12 V, 15 V, and 24 VDC), trimmable by +20%/−10% minimum, with ±5-V, ±12-V and ±15-VDC options available. The devices feature remote on/off control with positive or negative logic, UVLO is included, and no minimum load is required.
The parts are designed specifically for rolling stock applications with nominal input voltages of 48 V, 72 V, or 110 VDC. They are EN 45545-2– and EN 50155–compliant and meet UL/IEC/EN 62368-1 for audio/video and IT applications. Full 3-kVAC/1-minute reinforced isolation is provided, and the parts comply with EMC “Class A” levels as well as rail EMC standard EN 50121-3-2. A separate protection module, RSP150-168, is available to protect against surges according to RIA12 and NF F01-51 standards.
The RPA20-FR series meets environmental standards required for rail applications, particularly EN 45545-2 for fire protection, EN 60068-2-1 for dry and damp heat, and EN 61373 for shock and vibration. Mean time between failure is rated over 1.5 Mhrs at 25°C according to MIL-HDBK-217F GB.
Cincon Electronics Co. Ltd. recently launched the EC3AW8 and EC4AW8 series, delivering 3 W and 6 W of regulated power, respectively, tailored for demanding industrial environments. Applications include instruments, industrial automation and control systems, telecom and data communication equipment, test and measurement, IPC and embedded systems, and IT systems.
The EC3AW8 and EC4AW8 DC/DC converters feature an ultra-wide 8:1 input voltage range. They are available with single-output voltages of 3.3, 5, 12, or 15 VDC and dual outputs of ±5, ±12, or ±15 VDC, and they offer an optional positive remote on/off control for ease of system integration.
With an ultra-wide input range from 9 to 75 VDC, the EC3AW8 and EC4AW8 series are suited for industrial and IT power systems such as 12 V, 24 V, and 48 V. They deliver high efficiency up to 87% and ensure reliable performance under harsh conditions. The operating temperature range is −40°C to 105°C (with de-rating), and the maximum case temperature is 115°C.
Other features include very low no-load input current (7 mA max. for 3 W; 8 mA max. for 6 W), reducing power consumption in standby mode, and a range of protection including input UVLO, output overvoltage protection, overcurrent protection, and continuous short-circuit protection.
These converters also meet key safety and electromagnetic-interference (EMI) standards, including EN 55032 Class A without an external filter, simplifying design and integration for space-constrained applications, Cincon said.
They are also compliant with MIL-STD-810F for shock and vibration and support operating altitudes up to 5,000 meters. They meet IEC/UL/EN 62368-1 safety standards and provide 3,000-VDC input-to-output isolation.
These DC/DC converters are housed in a standard industrial DIP-24 package measuring 1.25 × 0.8 × 0.4 inches (31.8 × 20.3 × 10.2 mm).
Space and satellites
Micross Components Inc. recently introduced a series of Class H+-screened DC/DC converters for harsh space-based applications. The AFLS28XX Series of DC/DC converters delivers a radiation-tolerant power conversion solution for low-Earth-orbit (LEO) satellite constellations, new space missions, launch vehicles, and other space-based systems.
The AFLS series of 28-V, 120-W DC/DC converters builds on the AFL series, with updated technology and design enhancements. These converters meet MIL-PRF-38534 Class H screening requirements and include additional tests such as PIND and radiography to support reliability in LEO and new space environments. The AFLS series offers radiation specifications of 50-krad (Si) TID and 60-MeV·cm2/mg SEE.
These devices are tailored for space missions requiring radiation tolerance at a lower cost than traditional space-grade-qualified power supplies, Micross said.
The hermetically packaged DC/DC converters are available in single- and dual-output voltage configurations ranging from 5 V to 28 V. They feature proprietary magnetic pulse feedback for optimized dynamic line and load regulation and parallel operation for outputs above 120 W, with synchronization capability to a system clock in the 525-kHz range.
Other features include internal current sharing for balanced load distribution and high power density with no de-rating across the full operating temperature range. In addition, they meet reduced size, weight, and power (SWaP) requirements by eliminating shielding requirements and delivering lower power consumption.
These parts are currently under test, and engineering samples are available within four to six weeks ARO (after receipt of order).
Micross’s AFLS series (Source: Micross Components Inc.)
Also targeting space applications is a series of off-the-shelf, 15-W DC/DC converters from Microchip Technology Inc. This space-grade, non-hybrid DC/DC isolated power converter with a companion EMI filter operates from a 28-V satellite bus in harsh environments.
The SA15-28 radiation-hardened DC/DC power converter and its companion SF100-28 EMI filter are designed to meet MIL-STD-461 specifications. The SA15-28 and SF100-28 are fully compatible with Microchip’s existing SA50 series of power converters and SF200 filter.
The SA15-28 operates across a wide temperature range from −55°C to 125°C and offers radiation tolerance up to 100 krad TID. It is available with 5-V triple outputs that can be used with point-of-load converters and low-dropout linear regulators to power FPGAs and microprocessors. The output voltage combinations can be customized.
The SA15-28 weighs 60 grams and occupies approximately 1.68 in.³ to meet SWaP requirements. Microchip provides comprehensive analysis and test reports, including worst-case analysis, electrical stress analysis, and reliability analysis. The SA15-28 DC/DC power converter and SF100-28 external EMI filter are now available.
Microchip’s SA15-28 DC/DC converter (Source: Microchip Technology Inc.)
Brick converters
Advanced Energy Industries Inc. recently added two quarter-brick modules to its ultra-efficient, non-isolated bus converter family for 48-V power conversion. These DC/DC converters target advanced information and communication technology equipment including AI servers, compute and networking, and industrial applications such as robotics and test and measurement.
The Advanced Energy Artesyn NDQ1300 1,300-W and NDQ1600 1,600-W quarter-brick modules operate with peak efficiencies up to 98%, making them suited for high-performance applications. Each module can convert a 48-V input into a fully regulated 12-V output for non-isolated, low-voltage, high-current power stages as well as PCIe slots and memory devices.
The NDQ devices feature a flat efficiency curve, delivering optimized power conversion across a wide load range. They also feature an integrated PMBus interface to support flexible digital control and monitoring, as well as current-share and remote-sensing options that enable multiple power supplies to be connected in parallel for higher load current or redundancy.
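For designers planning to use that PMBus interface, a minimal telemetry polling loop looks something like the sketch below. The command codes and the LINEAR11 data format come from the PMBus specification itself; the bus number, the 0x40 device address, and exactly which commands the NDQ modules implement are assumptions to verify against Advanced Energy’s documentation.

```python
from smbus2 import SMBus

PMBUS_ADDR = 0x40                 # assumed device address; check the datasheet
READ_VIN, READ_IOUT, READ_TEMP1 = 0x88, 0x8C, 0x8D  # standard PMBus commands

def decode_linear11(word: int) -> float:
    """Decode PMBus LINEAR11 data: value = mantissa * 2**exponent."""
    exp = (word >> 11) & 0x1F
    if exp > 15:
        exp -= 32                 # sign-extend the 5-bit exponent
    mant = word & 0x7FF
    if mant > 1023:
        mant -= 2048              # sign-extend the 11-bit mantissa
    return mant * 2.0 ** exp

with SMBus(1) as bus:             # I2C/SMBus bus number is platform-dependent
    for name, cmd in (("Vin", READ_VIN), ("Iout", READ_IOUT),
                      ("Temp", READ_TEMP1)):
        raw = bus.read_word_data(PMBUS_ADDR, cmd)
        print(f"{name}: {decode_linear11(raw):.2f}")
```

Note that READ_VOUT uses the separate LINEAR16 format, whose exponent comes from the VOUT_MODE register rather than the data word.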
The NDQ modules use an advanced baseplate for better thermal management and heat-sink integration. They also benefit from an inherently safe, transformer-based topology that is resilient to transient loads and makes designing applications for inrush current control on startup easier, the company said.
Advanced Energy’s NDQ1300 quarter-brick module (Source: Advanced Energy Industries Inc.)
Another new converter in a brick format is Bel Fuse’s compact, 100-W DC/DC converter for rugged applications such as industrial automation, railway systems, telecom infrastructure, and electric vehicles/e-mobility. The PRA100 Series is housed in a standard 1/8th brick format, addressing the increased need for higher power density. The devices provide enhanced thermal performance, wide input flexibility, and an environmentally robust design.
The PRA100 operates across a 9-VDC to 74-VDC input range and delivers outputs up to 54 V with 3,000-VDC isolation. The operating temperature range is −40°C to 105°C. All models are fully compliant with EN 62368-1 and carry CE, UKCA, and UL/cUL certifications. The series is also compliant with EN 50155, making it well-suited for railway applications. It offers optional baseplate cooling and negative-logic features to extend its versatility in harsh conditions and EV platforms, Bel Fuse said.
Bel Fuse’s PRA100 Series (Source: Bel Fuse)
DC/DC converter modules
TDK Corp. developed a series of its microPOL (μPOL) power modules with full telemetry (voltage, current, and temperature). The FS160* series μPOL DC/DC converters deliver high power density in the smallest package sizes.
All FS160* μPOL modules measure 3.3 × 3.3 × 1.35 mm, making it easier to place them near complex ICs such as ASICs, FPGAs, and SoCs. Full telemetry is accessible via an I2C interface. The modules operate across a broad junction temperature range from −40°C to 125°C.
There are several versions of each of the 3-A parts (the FS1603 series), 4-A parts (the FS1604 series), and 6-A parts (the FS1606 series). The FS line also includes models at 12 A (the FS1412) and 25 A (the FS1525). The selection of DC/DC converter modules, which range from 3 A to 200 A (if eight FS1525s are connected in parallel), covers a wide range of applications, including big data, machine learning, AI, 5G cells, IoT, and enterprise computing.
TDK calls the module family’s configuration innovative: it integrates a high-performance controller, drivers, MOSFETs, and logic core using semiconductor-embedded-in-substrate packaging, which eliminates wire bonds and enhances thermal performance. The inductor and passives are likewise integrated into the chip-embedded package to minimize parasitic inductance, improving the module’s efficiency. Boot and Vcc capacitors are also incorporated into the module.
The FS160* series DC/DC converters deliver 1 W/mm³ in modules that are roughly half the size of other products in the same class, according to the company. In addition, TDK said the modules are efficient enough to require no airflow at loads of 15 W to 30 W in ambient temperatures up to 100°C.
TDK has created multiple design tools, including tools specific to FPGAs from each of the major FPGA suppliers. Additional design tools for the FS160* series include SPICE simulator designs on QSPICE.
Evaluation boards are available, one each for modules at 3 A, 4 A, and 6 A. Fast starter designs for schematic and PCB layout are available at Ultra Librarian.
TDK’s FS160 μPOL DC/DC converters (Source: TDK Corp.)
Aimed at the industry’s shift to high-performance 48-V systems, Vicor Corp. launched its 48-V to 12-V DCM DC/DC converter modules last year. The DCM3717 and DCM3735 power modules, offering up to 2 kW of output power, support the shift to 48-V power delivery networks (PDNs), which provide greater power-system efficiency, higher power density, and lower weight than 12-V-based PDNs in a variety of applications, including communications, computing, automotive, and industrial.
The DCM products are non-isolated, regulated DC/DC converters, operating from a 40-V to 60-V input to generate a regulated output adjustable from 10 V to 12.5 V. The DCM3717 family is available in two power ranges, 750 W and 1 kW, and the DCM3735 is a 2-kW device. These DCM products can be paralleled with up to four modules to scale system power levels.
Claiming industry-leading power density of 5 kW/in.³, these high-density power modules enable power-system designers to deploy 48-V PDNs for legacy 12-V loads, delivering size, weight, and efficiency benefits. The devices achieve 96% efficiency in a low-profile, surface-mount package that the company says yields a 6× reduction in size.
The smaller module is the DCM3717, with a wide input range of 40–60 V (48-V nominal) and an output of 10–12.5 V (12-V nominal). It comes with two power options, 750 W and 1 kW, and 96.5% efficiency. The module is housed in a compact, 36.7 × 17.3 × 5.2-mm footprint.
In a side-by-side comparison with a top competing product, the DCM3717 is less than half the size, with 20% higher output power and 7× higher power density, according to the company.
The larger device, the DCM3735, offers the same wide input range of 40–60 V (48-V nominal) and output of 10–12.5 V (12-V nominal). The power option is 2 kW with 96.4% efficiency. The module is housed in a compact, 36.7 × 35.4 × 5.2-mm footprint.
Vicor’s DCM3717 and DCM3735 DC/DC power modules (Source: Vicor Corp.)
Antilog PWM and 2-way current mirror make buffered triangle and square waves

It can be fun (and productive!) to transplant a previous Design Idea into a new context, and even more so when modifying and mixing multiple ideas. Here we’ll combine and comingle the following:
- 5 decade antilogarithmic PWM current source
- A two-way mirror—current mirror that is
- Dual RRIO op amp makes buffered and adjustable triangles and square waves
The result is the buffered triangle- and square-wave output oscillator shown in Figure 1. It’s linear-in-log tunable from 10 Hz to 1 MHz and controlled with 8-bit PWM.

Figure 1 Incoming 8-bit antilog PWM interface (U1, U2, A1, Q1) generates 80 nA to 8 mA current to control 10 Hz to 1 MHz oscillator (Q2, Q3, Q4, A2, A3). The asterisked parts are precision (metal film) resistors and (C0G) capacitors.
Wow the engineering world with your unique design: Design Ideas Submission Guide
We’ll now proceed to vivisect it.
A single MCU PWM output (500 kHz; 2 μs per count) controls the antilog current source. It’s highlighted in blue in Figure 2 and works as explained in reference 1 above.

Figure 2 The U1/U2 switching circuit periodically charges precision timing cap Ct to 1.24 V, then exponentially discharges it with an (Rt + R1)Ct = 43.4-μs time constant, storing the result on sample-and-hold capacitor Csh.
The final sample-and-hold antilog voltage is Vcsh = 1.24 V × exp(−Tpwm/43.4 μs), which spans 1.184 V down to 11.8 μV as Tpwm goes from 2 μs to 500 μs (1 to 250 LSB). That gives Q1 a five-decade collector current range of Vcsh/R4 = 8 mA down to 80 nA. R1 provides for fine-tuning of the time constant.
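As a cross-check on those numbers, the whole control chain can be simulated in a few lines. This is a sketch only: R4 is back-computed from the stated 8-mA endpoint (about 148 Ω), and C1 is back-computed from the 4-Vpp triangle and the 1-MHz top frequency (about 1 nF); neither value is read off the schematic.

```python
import math

VREF = 1.24          # timing-cap charge voltage, V
TAU = 43.4e-6        # (Rt + R1)*Ct discharge time constant, s
T_LSB = 2e-6         # 2 us per PWM count at 500 kHz
V_TRI = 4.0          # triangle amplitude, Vpp
R4 = VREF * math.exp(-T_LSB / TAU) / 8e-3   # ~148 ohms, back-computed
C1 = 1e-9            # integrator cap, F; assumed, not from the schematic

for counts in (1, 50, 125, 250):            # PWM setting, LSB
    t_pwm = counts * T_LSB
    v_csh = VREF * math.exp(-t_pwm / TAU)   # sample-and-hold voltage
    i_q1 = v_csh / R4                       # Q1 collector current
    freq = i_q1 / (2 * V_TRI * C1)          # one up-ramp plus one down-ramp
    print(f"{counts:3d} LSB: Vcsh={v_csh:.3e} V  I={i_q1:.3e} A  f={freq:.3e} Hz")
```

Running it reproduces the endpoints quoted above: roughly 1 MHz at 1 LSB and roughly 10 Hz at 250 LSB.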
Steering and periodic inversion/reflection of the 80-nA to 8-mA Q1 collector current into integrator A2 is the job of the Q2, Q3, and Q4 two-way current mirror. It’s covered in reference 2 and shown in blue in Figure 3.

Figure 3 A two-way current mirror (Q2, Q3) ramps the A2/C1 integrator up and down at dV/dt rates ranging from 80 V/s to 8 × 10⁶ V/s. Q4 reduces the loading on A3 at high current/frequency while playing the role of D1 in reference 2.
Comparator A3 switches the current-mirror polarity when A2’s output reaches the 0.5-V and 4.5-V limits, following a theory of operation similar to that of reference 3. Here, the limits are set by the resistor networks shown in Figure 4.

Figure 4 R5 and R6 set the comparator’s 0.5-V/4.5-V switching points and thus the triangle wave’s 4-Vpp amplitude.
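One common way to realize trip points like these is a comparator with positive feedback swinging 0 V to 5 V around a 2.5-V midpoint reference; the quick check below back-computes the required feedback ratio from the stated thresholds. The topology and the 0.8 ratio are illustrative assumptions, not values taken from the schematic.

```python
VREF = 2.5             # assumed midpoint reference, V
V_HI, V_LO = 5.0, 0.0  # assumed comparator output swing, V
K = 0.8                # assumed R5/R6 ratio, back-computed from the thresholds

# Non-inverting Schmitt trip points: Vtrip = VREF*(1 + K) - Vout*K
upper = VREF * (1 + K) - V_LO * K   # output low, waiting for the rising ramp
lower = VREF * (1 + K) - V_HI * K   # output high, waiting for the falling ramp
print(f"switching points: {lower:.1f} V / {upper:.1f} V")   # 0.5 V / 4.5 V
```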
Figure 5 shows the output frequency versus the PWM setting that controls the current sink.

Figure 5 Frequency versus PWM setting: linear (black) vs log (red).
And that’s the name of that (antilogarithmic) tun(ing).
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- 5 decade antilogarithmic PWM current source
- A two-way mirror—current mirror that is
- Dual RRIO op amp makes buffered and adjustable triangles and square waves
IMUs demystified: The hidden sense of machines

Motion is invisible until something makes it measurable. That is where inertial measurement units (IMUs) step in—the silent sensors that give machines their hidden sense of balance, orientation, and trajectory. From smartphones that know when you have rotated the screen, to drones that hold steady against the wind, IMUs translate raw acceleration and angular velocity into actionable awareness.
In this installment of Fun with Fundamentals, we will peel back the layers of these compact marvels, showing how they evolved from bulky gyroscopes into today’s precision-packed silicon companions.
The silent navigators: IMUs
An IMU is a compact, high-precision device that captures how an object moves and orients itself in space. Whether steering rockets into orbit, stabilizing drones overhead, or enabling smartphones to guide us through crowded streets, IMUs are the unseen systems that make modern navigation possible.
At the heart of an IMU are sensors that detect linear acceleration with accelerometers and rotational velocity with gyroscopes. Many designs also incorporate a magnetometer to provide heading information. A typical configuration combines a 3-axis accelerometer and a 3-axis gyroscope, forming a 6-axis IMU. When a 3-axis magnetometer is added, the system becomes a 9-axis IMU. Together, these sensors deliver measurements of specific force, angular rate, and surrounding magnetic fields—producing a complete dataset for motion and orientation tracking.
The accelerometers, gyroscopes, and—when included—magnetometers inside an IMU are collectively referred to as inertial sensors. These components form the foundation of inertial navigation, working together to capture motion and orientation data without relying on external signals. By fusing their outputs, engineers can derive precise information about how a device moves through space, even in environments where GPS or other external references are unavailable.
So, accelerometers measure linear acceleration, capturing how quickly an object speeds up or slows down. Gyroscopes sense angular velocity, revealing the rate and direction of rotation. Magnetometers, when included, detect magnetic fields and provide heading information relative to Earth’s magnetic north.
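The arithmetic behind those three measurements is compact enough to sketch. The snippet below assumes an ideal, already-calibrated 9-axis device with accelerometer output in g; real parts need offset and scale calibration, and this heading formula is valid only when the device is level.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Static tilt angles from the gravity vector, in radians."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def heading_from_mag(mx, my):
    """Heading from magnetic north; valid only when the device is level."""
    return math.atan2(-my, mx)

# Device lying flat and still: gravity on +Z, magnetic field along +X.
p, r = pitch_roll_from_accel(0.0, 0.0, 1.0)
h = heading_from_mag(1.0, 0.0)
print(f"pitch {math.degrees(p):.1f}, roll {math.degrees(r):.1f}, "
      f"heading {math.degrees(h):.1f} (degrees)")
```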
It’s worth noting that engineers still deploy both 6-axis and 9-axis IMUs, depending on the demands of the application. A 6-axis unit, built from accelerometers and gyroscopes, is often sufficient for tasks like stabilizing drones, balancing robots, or monitoring automotive motion, where relative movement and rotation are the primary concerns.
In contrast, a 9-axis IMU adds a magnetometer, giving it the ability to resolve absolute heading. This makes it the preferred choice in smartphones, wearables, and advanced navigation systems, where orientation relative to Earth’s magnetic field is critical. In practice, the simpler 6-axis design remains a cost-effective workhorse, while the 9-axis variant dominates in consumer electronics and navigation-heavy applications.

Figure 1 A vintage mechanical inertial navigation system (INS) component achieves autonomous navigation by integrating an inertial measurement unit with a computational unit. Source: Author’s archives
Simply put, a typical IMU places one accelerometer and one gyroscope along each of the three principal axes, ensuring motion and rotation are captured in all directions. In some designs, a magnetometer is also added per axis to provide heading information, but this is not always the case—many IMUs operate effectively without it.
Beyond these core sensors, certain IMUs incorporate auxiliary elements such as temperature monitors, since accelerometers and gyroscopes are prone to thermal fluctuations that can compromise accuracy. By recording temperature data, the system compensates for thermal drift, stabilizing sensor outputs and improving overall reliability.
Evolution and types of IMUs
From the gimbaled IMUs of the aerospace pioneers to today’s miniaturized MEMS-based devices, IMUs have undergone a remarkable transformation. Early gimbaled systems relied on mechanically stabilized platforms, bulky yet precise, before giving way to strapdown IMUs that fixed sensors directly to the vehicle body, reducing size and complexity.
With the rise of microelectromechanical systems (MEMS), silicon MEMS IMUs became the standard for consumer electronics, robotics, and drones, prized for their low cost, compact size, and efficiency. For tactical and industrial applications, quartz MEMS IMUs emerged, offering greater stability and resilience under temperature and vibration compared with silicon designs.
At the high end, ring laser gyroscope (RLG) IMUs and fiber-optic gyroscope (FOG) IMUs represent the pinnacle of precision, both exploiting the Sagnac effect to measure rotation. RLGs use laser beams circulating in a closed cavity, while FOGs rely on long coils of optical fiber—an approach that reduces maintenance needs and improves durability while delivering comparable accuracy.
Today, engineers select from this spectrum—silicon MEMS for affordability and portability, quartz MEMS for tactical reliability, and RLG/FOG systems for uncompromising accuracy—depending on mission requirements.

Figure 2 The Motus ultra‑high‑accuracy MEMS IMU enables precision in autonomous system applications. Source: Advanced Navigation
As a side note, it’s worth mentioning that while IMUs deliver raw measurements of acceleration and angular velocity, an attitude and heading reference system (AHRS) builds on this foundation by applying sensor fusion algorithms to provide stabilized orientation outputs: pitch, roll, yaw, and heading. In practice, AHRS units are IMUs with embedded processing, making them more directly usable in aircraft, marine, and robotic platforms where orientation data is required in real time.
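To make the IMU-versus-AHRS distinction concrete, here is about the simplest sensor-fusion step an AHRS can run: a single-axis complementary filter that trusts the gyro over short intervals and lets the accelerometer’s gravity reference bleed off accumulated drift. Production AHRS firmware typically uses quaternion-based Kalman or Mahony/Madgwick filters instead; the 0.98 weight and 100-Hz rate below are illustrative assumptions.

```python
import math

ALPHA = 0.98   # gyro weight per step; illustrative, tuned per application
DT = 0.01      # 100-Hz sample rate, s

def fuse(pitch_prev, gyro_rate, ax, ay, az):
    """One complementary-filter step for the pitch axis (radians)."""
    pitch_gyro = pitch_prev + gyro_rate * DT                 # short-term: gyro
    pitch_accel = math.atan2(-ax, math.sqrt(ay**2 + az**2))  # long-term: gravity
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_accel

pitch = math.radians(10.0)   # deliberately wrong initial estimate
for _ in range(500):         # 5 s of samples: level, non-rotating device
    pitch = fuse(pitch, gyro_rate=0.0, ax=0.0, ay=0.0, az=1.0)
print(f"pitch converges to {math.degrees(pitch):.2f} deg")  # pulled toward 0
```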
Advanced IMU categories
Beyond the broad spectrum of MEMS and optical gyroscope technologies, IMUs can also be classified by their functional purpose. A north-seeking IMU is designed to determine true north without relying on external references such as the global navigation satellite system (GNSS) or magnetic compasses.
By exploiting the Earth’s rotation and combining precise gyroscope measurements, these systems achieve sub-degree heading accuracy, making them invaluable in marine navigation, underground operations, and defense applications where absolute orientation is critical.
In contrast, a navigation IMU focuses on tracking motion and orientation over time. It provides raw acceleration and angular velocity data that, when processed within an inertial navigation system (INS), yields position, velocity, and displacement. Navigation IMUs are widely deployed in aerospace, robotics, and consumer electronics, where continuous motion tracking and drift management are more important than absolute north-finding.
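The drift-management point is easy to demonstrate numerically. The toy integration below follows the INS chain just described (acceleration to velocity to position) for a stationary vehicle whose accelerometer carries a small residual bias; the 0.01-m/s² figure is an assumption chosen only to show how the error grows quadratically with time.

```python
DT = 0.01      # 100-Hz sample rate, s
BIAS = 0.01    # assumed residual accelerometer bias, m/s^2

vel = pos = 0.0
for _ in range(6000):      # one minute; the vehicle is actually stationary
    accel = 0.0 + BIAS     # measured acceleration = truth + bias
    vel += accel * DT      # first integration: velocity
    pos += vel * DT        # second integration: position
print(f"position error after 60 s: {pos:.1f} m")   # ~18 m from bias alone
```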
Together, these advanced categories highlight how IMUs are not only differentiated by sensor technology—silicon MEMS, quartz MEMS, RLG, or FOG—but also by the specific role they play in navigation systems, from heading determination to full trajectory tracking.
Practical pointers for engineering minds
IMUs are no longer the nightmares they once seemed. Thanks to today’s accessible sensor modules, open-source libraries, and low-cost development boards, even a novice maker can experiment with inertial measurement units without needing aerospace-grade expertise. What was once the domain of defense labs and high-end avionics has become approachable for hobbyists, students, and engineers alike, making hands-on exploration of motion sensing and navigation both practical and affordable.
First off, note that modern inertial modules often advertise “IMU, AHRS, and INS options” because the same hardware platform can deliver different levels of functionality depending on firmware and processing. At the most basic level, the unit acts as an IMU, outputting raw accelerometer and gyroscope data. With onboard sensor-fusion algorithms, it becomes an AHRS, providing stabilized orientation in pitch, roll, yaw, and heading.
When paired with a computational unit and often GNSS input, the same device scales up to a full INS, achieving autonomous navigation with position, velocity, and orientation. This tiered approach lets engineers choose the level of integration that matches their application, from hobbyist UAVs to aerospace systems.
Modern IMUs give engineers and makers practical choices across performance levels. High-end devices like Analog Devices’ ADIS16575/ADIS16576/ADIS16577 deliver factory calibration, low bias drift, and digital outputs for precision robotics, autonomous systems, and aerospace projects.
At the same time, compact modules such as Murata’s SCH16T-K01 integrate gyro and accelerometer sensing for embedded applications, wearables, and IoT nodes. Together, these platforms show how inertial technology now scales from aerospace-grade accuracy down to plug-and-play modules, offering practical options for projects at every level.

Figure 3 The SCH16T‑K01 module combines a high‑performance 3‑axis angular rate sensor and 3‑axis accelerometer, delivering precise motion tracking for embedded, wearable, and IoT applications. Source: Murata
Besides, makers and hobbyists do not need to wrestle with bare chips anymore—prewired IMU breakout boards are widely available and come with headers and libraries, making motion-sensing experiments plug-and-play. For newer designs, boards built around ST’s LSM6DSO/LSM6DSOX deliver reliable performance in a maker-friendly format, using current-production parts that are a safe choice for ongoing projects.

Figure 4 Today’s prewired cards like the LSM6DSOX module—and other readily available IMU boards—let makers explore motion sensing with ease and enable reliable integration into advanced embedded projects. Source: Author
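As a flavor of how plug-and-play these boards are, the sketch below reads raw gyro and accelerometer samples from an LSM6DSO/LSM6DSOX breakout over I2C on a Linux single-board computer. The register map, the 0x6A address, and the sensitivity constants follow ST’s datasheet conventions for this family, but treat them as assumptions to verify against your exact board and wiring.

```python
import struct
from smbus2 import SMBus

ADDR = 0x6A             # 0x6B if the board pulls SA0/SDO high
CTRL1_XL, CTRL2_G = 0x10, 0x11
OUTX_L_G = 0x22         # 12 output bytes: gyro X/Y/Z, then accel X/Y/Z

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, CTRL1_XL, 0x40)   # accel: 104 Hz, +/-2 g
    bus.write_byte_data(ADDR, CTRL2_G, 0x40)    # gyro: 104 Hz, +/-250 dps
    raw = bus.read_i2c_block_data(ADDR, OUTX_L_G, 12)
    gx, gy, gz, ax, ay, az = struct.unpack("<6h", bytes(raw))
    # Datasheet sensitivities at these ranges: 8.75 mdps/LSB, 0.061 mg/LSB
    print("gyro [dps]: ", [round(v * 0.00875, 2) for v in (gx, gy, gz)])
    print("accel [g]:  ", [round(v * 0.000061, 3) for v in (ax, ay, az)])
```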
IMUs in practice and everyday life
Well, we are not fully balanced yet, but we have touched on some fundamental and practical points, if in a rather freewheeling way. Still, the journey through IMUs shows that these sensors are not just abstract components for engineers; they are part of our everyday lives. From the stabilizing gimbals that keep cameras steady, to the motion tracking inside wearables, gaming controllers, and even automotive systems, IMUs quietly enable the seamless experiences we take for granted.

Figure 5 Today’s IMUs act as the unseen hand across entertainment, healthcare, and navigation—guiding cameras, gimbals, ships, trains, satellites, and aerospace systems, while also enabling makers to explore motion sensing with ease and integrate it reliably into advanced projects. Source: Author
The call now is to explore further—experiment with modules, build small projects, and see firsthand how this complex yet approachable topic can transform ideas into motion-aware innovations.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Evaluating inertial measurement units
- GPS system with IMUs tracks first responders
- The role of motion sensors in the industrial market
- A Wireless Micro Inertial Measurement Unit (IMU)
- Inertial sensors are “stepping up” their game, at a cost