EDN Network

Voice of the Engineer

GaN transistor cuts losses and heat

Thu, 02/05/2026 - 23:28

EPC’s first Gen 7 eGaN power transistor, the 40-V EPC2366, delivers up to 3× better performance than equivalent silicon MOSFETs. Now entering mass production, the device features a typical RDS(ON) of 0.84 mΩ and an optimized RDS(ON) × QG figure of merit of 12.6 mΩ·nC. This enables the EPC2366 to reduce conduction and switching losses while improving thermal performance.

Designed for high-efficiency, high-density power systems, the EPC2366 is suitable for synchronous rectifiers, DC/DC converters, AI server power supplies, and motor drives. It is rated for a drain-to-source voltage (VDS) up to 40 V, transient voltages up to 48 V, and a continuous drain current (ID) of 88 A, with pulsed currents reaching 360 A.
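Those headline figures can be sanity-checked with a quick back-of-envelope calculation. This is a sketch only, using the typical datasheet values quoted above; the 50-A load current is an arbitrary example chosen to sit within the device's 88-A continuous rating:

```python
# Quick sanity check of the EPC2366's headline figures.
# RDS(ON) and the RDS(ON) x QG figure of merit are the typical values
# cited above; the load current is an arbitrary illustrative choice.
R_DS_ON = 0.84e-3   # typical on-resistance, ohms (0.84 mohm)
FOM = 12.6e-12      # RDS(ON) x QG figure of merit (12.6 mohm*nC), ohm*coulombs

Q_G = FOM / R_DS_ON           # implied typical gate charge, coulombs
I_LOAD = 50.0                 # example load current, amps (rating is 88 A)
P_COND = I_LOAD**2 * R_DS_ON  # conduction loss at that current, watts

print(f"Implied typical gate charge: {Q_G * 1e9:.1f} nC")   # ~15 nC
print(f"Conduction loss at {I_LOAD:.0f} A: {P_COND:.2f} W")  # ~2.1 W
```

A low RDS(ON) × QG product matters because the two terms normally trade off: shrinking on-resistance tends to grow gate charge (and thus switching loss), so a small product indicates both loss mechanisms can be kept low simultaneously.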

To assist design-in and evaluation, the EPC90167 half-bridge development board integrates two EPC2366 transistors in a low-parasitic layout, with PWM drive signals and flexible input modes.

The EPC2366 comes in a compact 3.3×2.6-mm PQFN package and is priced at $1.56 each in quantities of 3000 units. The EPC90167 development board is available for $211.65 each.

EPC2366 product page 

Efficient Power Conversion 

The post GaN transistor cuts losses and heat appeared first on EDN.

High-density power module fits compact AI servers

Thu, 02/05/2026 - 23:28

Enabling higher power delivery within the same rack space, Microchip’s MCPF1525 power module delivers up to 25 A per device and can be stacked to 200 A. The module integrates a 16-VIN buck converter with programmable PMBus and I²C control, making it well suited for powering PCIe switches and high-compute MPU applications used in AI deployments.

With dimensions of approximately 6.8×7.65×3.82 mm, the MCPF1525’s vertical construction conserves board space, providing up to a 40% reduction in board area compared with alternative solutions. For improved reliability, the device incorporates multiple diagnostic functions reported over PMBus, including overtemperature, overcurrent, and overvoltage protection to help prevent undetected faults.

Housed in a thermally enhanced package, the MCPF1525 supports a junction temperature range from −40°C to +125°C. An embedded EEPROM enables users to program the default power-up configuration.

The MCPF1525 is available now, priced at $12 each in 1000-unit quantities.

MCPF1525 product page 

Microchip Technology 

The post High-density power module fits compact AI servers appeared first on EDN.

Vishay shrinks inductors, keeps full performance

Thu, 02/05/2026 - 23:28

Four power inductors in 0806 and 1210 case sizes from Vishay offer improved performance for commercial and automotive applications. Compared to competing inductors with similar performance, the devices use considerably less board space—up to 64% smaller in 0806 and 11% smaller in 1210 packages. They also support higher operating temperatures, a wider range of inductance values, and lower DC resistance to enhance efficiency.

The commercial IHLL-0806AZ-1Z and IHLL-1210AB-1Z have terminals plated only on the bottom, enabling smaller land patterns for more compact board spacing. The automotive-grade IHLP-0806AB-5A and IHLP-1210ABEZ-5A feature terminals plated on the bottom and sides, allowing a solder fillet that strengthens the mount against mechanical shock and simplifies joint inspection. These automotive devices are AEC-Q200 qualified for high reliability and elevated operating temperatures.

Samples and production quantities of the IHLL-0806AZ-1Z, IHLL-1210AB-1Z, IHLP-0806AB-5A, and IHLP-1210ABEZ-5A inductors are available now, with lead times of 10 weeks.

Vishay Intertechnology 

The post Vishay shrinks inductors, keeps full performance appeared first on EDN.

Added-conductor and directional audio interconnects: Real-life benefits?

Thu, 02/05/2026 - 15:00

Does vendor-claimed audio cable directionality make theoretical sense, much less deliver real-life perceptible benefits? And what about the number and organization of in-cable conductors?

Within my recently published two-part series on the equipment comprising my newly upgraded home office audio setup, I intentionally left out one key piece of the puzzle: the cables that interconnect the various pieces of gear in each “stack”. Come to think of it, I also didn’t mention the speaker wire that mates each monoblock power amplifier to its companion speaker:

but that’s a hype-vs-reality quagmire all its own! Maybe someday…for now, I’ll tease you with the brief revelation that it’s a 2m (3.3 foot) GearIT 14 AWG banana-plug-based set purchased in like-new condition from Amazon’s Resale (Warehouse) section for $17.18:

Conventional recommendations

Back to today’s quagmire 😉 When spanning the equipment placed on consecutive shelves of each audio “stack”, the 6” cable length is ideal. For the balanced interconnect-based setup located to my left on my desk:

wherein all of the connectors are XLR in form factor, I’ve found Coluber’s cables, available in a variety of connection-differentiating colors as well as in longer lengths as needed, to be excellent:

This particular setup, now based on a Drop + Grace Design SDAC Balanced DAC:

initially instead used Topping’s D10 Balanced DAC:

whose analog line-out connections were ¼” TRSs, not XLRs:

In that earlier gear configuration, I’d relied on a set of WJSTN Suanqi TRS-to-XLR cables to tether the DAC to the headphone amp (the Schiit Lokius equalizer wasn’t yet in the picture, either):

What about the unbalanced (i.e., single-ended) interconnection-based setup to my right?

In this case, I’ve mixed-and-matched RCA-to-RCA cables from WJSTN:

and equally highly-rated CNCESS:

depending on whose were lower-priced at any particular purchase point in time.

A pricier (albeit discounted) experiment

Speaking of economic factors, as regular readers may recall from past case studies (not to mention my allusion by example earlier in this writeup), I regularly troll Amazon’s Resale (formerly Warehouse) site for bargains. Last summer, I came across a set of “acceptable” condition (i.e., packaging-deficient) 0.5-foot-long RCA cables from a company called (believe it or not) “World’s Best Cables”:

and titled as follows:

0.5 Foot RCA Cable Pair – WBC-PRO-Quad Ultra-Silent, Ultra-Flexible, Star-Quad Audiophile & Pro-Grade Audio Interconnect Cable with Amphenol ACPR Gold RCA Plugs – Gray & Red Jacket – Directional

Say that ten times real fast, and without pausing to catch a breath midway through!

They normally sell for $30.99 a pair on the company’s Amazon storefront, which is pretty “salty” considering that the CNCESS and WJSTN alternatives are a third that amount ($10.99 for two). That said, these were discounted to $18.82, nearly half off the original price tag. I took the bait.

Like I said earlier, “packaging-deficient”.

How’d they sound? Fine. But no different, at least in my setup and to my ears, than the brand new but still notably less expensive CNCESS and WJSTN ones. This was the case despite the fact that, among other things, they were claimed to be “directional”, the concluding word in the voluminous product title and the one that had caught my ever-curious eye in the first place.

Directional details

As I’ve groused about plenty of times in the past, the audio world is rife with “snake oil” claims of products and techniques that supposedly improve sound quality but in actuality only succeed in extracting excess cash from naïve enthusiasts’ wallets and bank accounts. My longstanding favorite snake-oil theory, albeit one that mostly only wasted adoptees’ time, was that applying a green magic marker to the edges of an optical audio disc would improve its sound by reducing laser reflections.

Further magnifying this madness, at higher resultant expense to devotees’ discs and wallets alike, was the practice of beveling (i.e., shaving down) those same edges:

I’ve also come across plenty of cables, both signal and power, and in various shapes and sizes, that claim to benefit from directionality induced by their implementations. Such directionality is, of course, forced on the implementation by USB cables, for example, which (for example, redux) have a Type A connector on one end for tethering to a computer and a Type B connector on the other end for mating with, say, a printer. Both types are shown at right in the following photo:

Conceptually, the same thing occurs with power cords, of course, such as this one:

But that’s not what I’m referring to. I’m talking about claimed directionality introduced within the cable itself—by the materials used to construct it, the conductors within it, etc. For cables that carry digital signals, this is pure hogwash as far as I can tell. But for analog cables like the one I’m showcasing today? There may, it turns out, be some reality behind the hype, depending on what kind of signal the cable’s carrying and for what span length, along with the ambient EMI characteristics of the operating environment. Quoting from the Amazon product page:

Each cable is configured as a “Directional” cable and as such the shield of the cable is connected to the ground only at the signal emitting end. This allows the shield of the cable to work as a Faraday’s cage which rejects external noise that could degrade the signal. The cable will work even if you plug it the opposite direction, but this will diminish the noise rejection capabilities of the directional design. This enhances the noise rejection capabilities of our cables over our competition.

To clarify: when I said earlier that I discerned no difference in the sound between the “World’s Best Cables” interconnect and its more cost-effective alternatives, I was referring to:

  • Short cable spans (6”) transporting
  • Reasonably high-level innate signals (specifically line level, 0.3V to 1.2V)

Would an alternative RCA cable set carrying, for example, the lower magnitude output signal of a turntable cartridge—moving magnet (3-7 mV) and especially moving coil (0.2-0.6 mV)—to a phono preamp be more prone to the corrupting effects of environmentally induced noise, especially in high EMI (with an overlapping spectral profile, to be precise) environments and across long cable runs? Low-level microphone outputs are another example. And would shielding—especially if directional in its nature—be of benefit in such scenarios?
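The level dependence is easy to quantify. Assuming, purely for illustration, the same 0.1 mV RMS of induced noise on each interconnect (a hypothetical figure, not a measurement), signal-to-noise ratio falls one-for-one with source level:

```python
import math

V_NOISE = 0.1e-3  # assumed induced noise on the interconnect, volts RMS

def snr_db(v_signal, v_noise=V_NOISE):
    """Signal-to-noise ratio in dB for a given RMS signal level."""
    return 20 * math.log10(v_signal / v_noise)

print(f"Line level (1.0 V):    {snr_db(1.0):.0f} dB")    # ~80 dB
print(f"MM cartridge (5 mV):   {snr_db(5e-3):.0f} dB")   # ~34 dB
print(f"MC cartridge (0.4 mV): {snr_db(0.4e-3):.0f} dB")  # ~12 dB
```

At line level, even a modestly shielded few inches of cable leaves ample margin; at moving-coil levels, the same induced noise sits only about 12 dB below the signal, which is why shielding, conductor geometry, and run length matter far more there.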

Twist, double up and fan out

Truth be told, I’d originally planned to stop at this point and turn those questions over to you for your thoughts (both on them specifically and on the topic more generally) in the comments. But in looking again at the conceptual cable construction diagram this morning while prepping to dive into writing:

I noticed not only the shielding, which I’d seen before, but that there were four conductors within it. Each RCA connector is normally associated with only two wires, corresponding to the positive and negative per-channel connections to the audio source and destination devices.


Four total wires might make sense, for example, if we were looking at the middle of a unified cable, with both channels’ dual conductors combined within a common shield. And it might also make sense (albeit seemingly still with one spare wire) if the per-channel cable connections were balanced. But these are RCA cables: unbalanced, i.e., single-ended, with only one cable per channel. So why four conductors inside, instead of just two?

My first clue as to the answer came when I then looked at the top of this graphic (table, to be precise):

Followed by my noticing the words “WBC-PRO-Quad” and “Star-Quad” in the aforementioned wordy product title. My subsequent research suggests that the term “Star Quad” isn’t unique to “World’s Best Cables”, although it typically refers to mic and other balanced interconnect applications:

The star quad design is a unique configuration of wires used in microphone cables. Unlike traditional cables that consist of two conductors, the star quad design incorporates four conductors. These conductors are twisted together in a specific pattern, resembling a star shape, hence the name. The layout of the conductors in a star quad cable significantly reduces electromagnetic interference (EMI), resulting in cleaner and more reliable audio transmission.

And how do two connections at each cable end translate into four conductors within the cable?

Star-quad microphone cables are specially designed to provide immunity to magnetic fields. These microphone cables have 4 conductors arranged in a precise geometry that provides immunity to the magnetic fields which easily pass through the outer RF shield. Four conductors are arranged in a four pointed star configuration and the wires at opposite points of the star are connected together at each end of the cable.

When the cables are wired in this manner, the + and – legs of the balanced connection each receive equal induced voltages from any magnetic field. This configuration balances the interference to the + and – legs of the balanced connection. The key to the success of star-quad cable is the fact that the magnetically-induced interference is exactly the same on the + and – legs of the balanced connection. The star-quad geometry of the cable keeps the interference signal identical on both legs no matter what direction the magnetic interference is coming from.
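The geometric cancellation described above can be sketched numerically. In the toy model below (arbitrary geometry and field gradient, assumed purely for illustration), the interfering field varies linearly across the cable cross-section; paralleling opposite points of the star makes both legs average to the field value at the cable’s center, so the differential pickup nulls, while a plain two-conductor pair leaves a residual:

```python
def induced_emf(x, y, gradient=1.0):
    """Per-unit-length EMF from an interference field that varies
    linearly across the cable cross-section (along x, for simplicity)."""
    return gradient * x

r = 1.0  # conductor offset from the cable axis, arbitrary units

# Plain two-conductor cable: the legs sit at opposite sides of the axis.
pair_pickup = induced_emf(+r, 0) - induced_emf(-r, 0)

# Star quad: each leg parallels the two conductors at OPPOSITE star
# points, so each leg averages to the field value at the center.
leg_pos = (induced_emf(+r, 0) + induced_emf(-r, 0)) / 2
leg_neg = (induced_emf(0, +r) + induced_emf(0, -r)) / 2
quad_pickup = leg_pos - leg_neg

print(f"Two-conductor differential pickup: {pair_pickup:.3f}")  # nonzero
print(f"Star-quad differential pickup:     {quad_pickup:.3f}")  # zero
```

Real interference fields aren’t perfectly linear, so the cancellation is finite rather than total; but the tighter the four-conductor bundle, the better the linear approximation holds and the deeper the rejection.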

In the “a picture paints a thousand words” spirit, this additional graphic might be of assistance:

Along with this lab equipment- and measurement-flush video:

But again, we’re still talking about long-length, low-level balanced cables and connections used in high-EMI operating environments. How, if at all, do these results translate to the few-inch, comparatively high-level and low-EMI applications that my “World’s Best Cables” target, especially considering that they also include heavily hyped directional shielding? Even audiophiles have mixed opinions on the topic.

And so, at this point, after twice as long a write-up as originally planned, I will now stop and turn these and my prior questions over to you for your thoughts (both on them specifically and on the topic more generally) in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Added-conductor and directional audio interconnects: Real-life benefits? appeared first on EDN.

Designing energy-efficient AI chips: Why power must be an early consideration

Thu, 02/05/2026 - 09:54

AI’s demand for compute is rapidly outpacing current power infrastructure. According to Goldman Sachs Global Institute, upcoming server designs will push this even further, requiring enough electricity to power over 1,000 homes in a space the size of a filing cabinet.

As workloads continue to scale, energy efficiency is now as critical as raw performance. For engineers developing AI silicon, the central challenge is no longer just about accelerating models, but maximizing performance for every watt consumed.

A shift in design philosophy

The escalation of AI workloads is forcing a paradigm shift in chip development. Energy optimization must be addressed from the earliest design phases, influencing decisions throughout concept, architecture, and production. Considering thermal behavior, memory traffic, architectural tradeoffs, and workload characteristics as part of a single power-aware design flow enables the development of systems that scale efficiently without breaching data center or edge-device energy limits.

Traditionally, design teams have primarily focused on timing and performance, only addressing energy consumption at the end of the process. Today, that strategy is outdated.

Synopsys customer surveys across numerous design projects show that addressing power at the architectural stage can yield 30-50% savings, whereas waiting until implementation typically achieves only marginal improvements. Early exploration enables decisions about architecture, memory hierarchy, and workload mapping before they become fixed, allowing trade-offs that balance throughput, area, and efficiency.

Architecture analysis as a power tool

Before RTL is finalized, a comprehensive power analysis flow helps reveal where energy is being spent and what trade-offs exist between voltage, frequency, and performance. Architectural modeling enables rapid evaluation of techniques—such as dynamic voltage and frequency scaling (DVFS), power gating to shut down inactive circuits, and optimizing data flow within the network-on-chip (NoC)—and supports smarter, more energy-efficient design choices.
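The leverage DVFS offers follows directly from the dynamic-power relation P ≈ C·V²·f. A minimal sketch, with purely illustrative capacitance, voltage, and frequency values (not tied to any particular silicon):

```python
def dynamic_power(c_eff, v_dd, f_clk):
    """Switched-capacitance dynamic power in watts: P = C * V^2 * f."""
    return c_eff * v_dd**2 * f_clk

C_EFF = 1e-9  # assumed effective switched capacitance, farads

p_nominal = dynamic_power(C_EFF, 0.90, 2.0e9)  # 0.90 V at 2.0 GHz
p_scaled  = dynamic_power(C_EFF, 0.75, 1.5e9)  # 0.75 V at 1.5 GHz

saving = 1.0 - p_scaled / p_nominal
print(f"Power saving from one DVFS step: {saving:.0%}")  # ~48%
```

Because voltage enters quadratically, a 25% frequency cut paired with a modest voltage reduction nearly halves dynamic power, which is why evaluating DVFS operating points at the architectural stage pays off.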

Transaction-level simulation allows teams to measure expected workloads and predict the impact of configuration changes. This early insight informs hardware-software partitioning, interface sizing, and memory placement, all critical factors in the chip’s overall efficiency.

Data movement: The hidden power sink

Computation isn’t the only factor driving energy use. In many AI chips, data movement consumes more power than the arithmetic itself. Each transfer between memory hierarchies or across chiplets adds significant overhead. This is the essence of the so-called memory wall: compute capability has outpaced memory bandwidth.

To close that gap, designers can reduce unnecessary transfers by introducing compute-in-memory or analog approaches, choosing high-bandwidth memory (HBM) interfaces, or adopting sparse algorithms that minimize data flow. The earlier the data paths are analyzed, the greater the potential savings, because late-stage fixes rarely recover wasted energy caused by poor partitioning.
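An order-of-magnitude sketch shows why data paths deserve early attention. The per-operation energies below are rough, widely cited ballpark figures that vary substantially by process node and design; they are assumptions here, not measurements:

```python
E_MAC_PJ  = 1.0    # assumed energy per on-chip multiply-accumulate, pJ
E_DRAM_PJ = 640.0  # assumed energy per 32-bit off-chip DRAM access, pJ

MACS_PER_FETCH = 10  # assumed operand reuse: MACs done per word fetched

e_compute = MACS_PER_FETCH * E_MAC_PJ
ratio = E_DRAM_PJ / e_compute
print(f"One DRAM fetch costs ~{ratio:.0f}x the compute it feeds")  # ~64x
```

Raising operand reuse (through tiling, caching, or compute-in-memory) attacks the denominator; sparse algorithms attack the numerator by fetching less data in the first place.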

The growing thermal challenge

As designs move toward multi-die and chiplet architectures, thermal density has become a first-order constraint. Packing several dies into one package creates concentrated heat zones that are difficult to manage later in the flow. Effective thermal planning, therefore, starts with system partitioning: examining how compute blocks are distributed and how heat will flow through the stack or interposer.

By modeling various configurations early, before layout or floor planning, engineers can avoid thermally stressed regions and plan for cooling strategies that support consistent performance under load.

Optimizing the real workload

Unlike traditional semiconductors, AI chips are rarely general-purpose. Whether a device runs edge inference, data center training, or specialized analytics, its efficiency depends on how closely the hardware matches the target workload. Simulation, emulation, and prototyping before tapeout make it possible to test representative use cases and fine-tune hardware parameters accordingly.

Profiling multiple operating modes, from idle to sustained training, exposes inefficiencies that might otherwise remain hidden until silicon returns from the fab. And it helps ensure the design can maintain high utilization and consistent energy performance across all conditions.

Extending efficiency beyond tapeout

Energy monitoring and management must persist even after chips are manufactured. Variability, aging, and environmental factors can shift operating characteristics over time. Integrating on-chip telemetry and control using silicon lifecycle management (SLM) solutions allows engineers to track power behavior in the field and apply adjustments to sustain optimal performance per watt throughout the product’s lifecycle.

The next breakthroughs in AI hardware will come not just from faster chips, but from smarter engineering that treats power as a foundational design dimension, not an afterthought. For today’s AI hardware, efficiency is performance.

Godwin Maben is a Synopsys Fellow.

Special Section: AI Design

The post Designing energy-efficient AI chips: Why power must be an early consideration appeared first on EDN.

Classic constant current cascode

Wed, 02/04/2026 - 15:00

An important figure of merit for all precision constant current sources is their active impedance, which is to say, just how “constant” their output is held against changes in applied voltage. Frequent and expert Design Idea (DI) commentator Ashutosh Sapre (Ashu) was kind enough to measure this parameter for a design of mine and share his results. The circuit, applied as a 4 to 20 mA current mirror, is shown in Figure 1 and discussed in “Combine two TL431 regulators to make versatile current mirror.”

Figure 1 A 4 to 20mA current mirror with poor active impedance.

Said Ashutosh: “I tried the fig.2 circuit for 4-20mA mirroring, with R1 and R2 of 100E, and using a TL431 (2.5V). It worked quite well. One issue I found was that the output impedance (di/dv) was quite low; there was a change of 40uA over a supply swing of 20V (if I remember correctly), not linear with supply voltage change. It is possibly due to the 2.5V reference voltage modulation with cathode voltage swing.

It could be compensated for, but some error will remain due to the non-linearity.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

His observation and analysis were both absolutely correct. Table 6.6 in the TL431 datasheet reveals a maximum reference-voltage error of up to 2 mV per volt of cathode-to-anode voltage swing, consistent with the mediocre 20 V/40 µA = 500 kΩ active impedance he observed.
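A hedged back-of-envelope check of these figures (the 100 Ω sense resistance is from Ashutosh’s test above; the 2 mV/V figure is the datasheet maximum just cited):

```python
DV_SUPPLY = 20.0   # supply swing in the test, volts
DI_OUT    = 40e-6  # observed output-current change, amps

z_active = DV_SUPPLY / DI_OUT  # observed active impedance, ohms

REF_MOD = 2e-3   # max TL431 reference shift, volts per volt of cathode swing
R_SENSE = 100.0  # sense resistance used in the test, ohms
di_worst = (REF_MOD * DV_SUPPLY) / R_SENSE  # worst-case current shift, amps

print(f"Observed active impedance: {z_active / 1e3:.0f} kohm")  # 500 kohm
print(f"Worst-case spec-limit shift: {di_worst * 1e6:.0f} uA")  # 400 uA
```

The observed 40 µA sits an order of magnitude inside the 400 µA worst case, so reference modulation comfortably explains the measurement; the cascode of Figure 2 attacks it by holding the regulator’s cathode-to-anode voltage essentially constant.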

Fortunately, a simple and effective remedy is available and waiting in the pages of the common cookbook of current mirror circuits: the cascode. Figure 2 shows how it can be added (as D1 + Q2) to Figure 1.

Figure 2 D1/Q2 cascode reduces reference modulation error, improving active impedance by orders of magnitude.

The effect of the added parts is to isolate Z1’s cathode/anode voltage from voltage variation at the I2 node, thus holding the cathode/reference differential near zero and constant to within millivolts.

The resultant orders of magnitude reduction of reference modulation should produce a proportional increase in active impedance.

Thanks, Ashu! Another example of the magic of editor Aalyia Shaukat’s DI kitchen collaboration in action!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Classic constant current cascode appeared first on EDN.

Silicon coupled with open development platforms drives context-aware edge AI

Wed, 02/04/2026 - 10:12

Edge AI reached an inflection point in 2025. What had long been demonstrated in controlled pilots—local inference, reduced latency, and improved system autonomy—began to transition into scalable, production-ready deployments across industrial and embedded markets. This shift has exposed a deeper architectural reality: many existing silicon platforms and development environments are poorly matched to the demands of modern, context-aware edge AI.

As AI workloads move from centralized cloud infrastructure to distributed edge devices, design priorities have fundamentally changed. Edge systems must execute increasingly complex models under strict constraints on power, thermal envelope, cost, and real-time determinism. Addressing these requirements demands both a new class of AI-native silicon and a development platform that is open, extensible, and aligned with modern machine learning workflows.

Why legacy architectures are no longer sufficient

Conventional microprocessors and application processors were not designed for sustained AI workloads at the edge. While they can support inference through software or add-on accelerators, their architectures typically lack three essential characteristics required for modern edge AI:

  1. Dedicated AI acceleration capable of efficiently executing convolutional, transformer-based, and multimodal workloads.
  2. Deterministic real-time processing for latency-sensitive industrial and embedded applications.
  3. Energy efficiency at scale, enabling always-on intelligence without excessive thermal or power budgets.

As edge AI applications expand beyond simple classification toward sensor fusion, contextual reasoning, and on-device generative inference, these limitations become more pronounced. The result is a growing gap between what software frameworks can express and what deployed hardware can efficiently execute.

Edge AI design as a full value chain

Successful edge AI deployment requires a system-level view spanning the entire design value chain:

Data collection and preprocessing

Industrial edge systems, for example, operate in noisy, variable environments. Training data must reflect real-world conditions such as lighting changes, mechanical vibration, sensor drift, and interference.

Hardware-accelerated execution

Today’s edge designs rely on heterogeneous compute architectures: AI-native NPUs handle dense matrix and tensor operations, while CPUs, GPUs, DSPs, and real-time cores manage control logic, signal processing, and exception handling.

Model training, adaptation, and optimization

Although training is often performed off-device, edge deployment constraints must be considered early. Transfer learning and hybrid model architectures are commonly used to balance accuracy, explainability, and compute efficiency. Hardware-aware compilation enables models to be transformed to match accelerator capabilities while maintaining deterministic performance characteristics.

The role of open development platforms

Historically, edge AI development has been fragmented across proprietary toolchains, closed runtimes, and framework-specific optimizations. This fragmentation has slowed adoption and increased development risk, particularly as model architectures evolve rapidly.

An open development platform addresses fragmentation challenges with:

  • Framework diversity: Edge developers increasingly rely on PyTorch, ONNX, JAX, TensorFlow, and emerging toolchains. Supporting this diversity requires compiler infrastructures that are framework-agnostic.
  • Rapid model evolution: The rise of transformers and large language models (LLMs) has introduced new operator patterns that closed toolchains struggle to support efficiently.
  • Long product lifecycles: Industrial and embedded devices often remain in service for a decade or more, requiring platforms that can adapt to new models without hardware redesign.

Additionally, open compiler and runtime infrastructures based on standards such as MLIR and RISC-V enable a separation between model expression and hardware execution. This decoupling allows silicon to evolve while preserving software investment.

Figure 1 Synaptics’ open edge AI development platform features Astra SoCs, the Torq compiler, and the industry’s first deployment of Google’s Coral NPU. Source: Synaptics

Context-aware AI and the move toward multimodal inference

A defining trend of edge AI in 2025 was the transition from single-sensor inference toward context-aware, multimodal systems. Rather than processing isolated data streams, edge devices increasingly combine vision, audio, motion, and environmental inputs to build a richer understanding of their surroundings.

This shift places new demands on edge platforms, which must now support:

  • Heterogeneous data types and operators
  • Efficient execution of attention mechanisms and transformer-based models
  • Low-latency fusion of multiple sensor streams

Figure 2 The Grinn OneBox AI-enabled industrial single-board computer (SBC), designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. Source: Grinn Global

Designing for scalability and future workloads

One of the key architectural challenges in edge AI is scalability—not only across product tiers, but across time. AI-native silicon must scale from low-power endpoints to higher-performance systems while maintaining software compatibility.

This is typically achieved through:

  • Modular accelerator architectures that scale performance without changing programming models.
  • Heterogeneous compute integration, allowing workloads to migrate between NPUs, CPUs, and GPUs as needed.
  • Standardized toolchains that preserve model portability across devices.

For designers, this approach reduces risk by allowing a single software stack to span multiple products and generations.

Testing, validation, and long-term reliability

Edge AI systems operate continuously and often autonomously. Validation must extend beyond functional correctness to include:

  • Worst-case latency and power analysis
  • Thermal stability under sustained workloads
  • Behavior under degraded or unexpected inputs

Monitoring and logging capabilities at the edge enable post-deployment diagnostics and iterative model improvement. As models become more complex, explainability and auditability will become increasingly important, particularly in regulated environments.

Looking ahead

In 2026, AI is expected to move further into mainstream embedded system design. The focus is shifting from proving feasibility to optimizing performance, reliability, and lifecycle cost. This transition highlights the importance of aligning silicon architecture, software openness, and system-level design practices.

A new class of AI-native silicon, coupled with an open and extensible development platform, provides a foundation for this next phase. For system designers, the challenge—and opportunity—is to treat edge AI not as an add-on feature, but as a core architectural element spanning the entire design value chain.

Neeta Shenoy is VP of marketing at Synaptics.

Special Section: AI Design

The post Silicon coupled with open development platforms drives context-aware edge AI appeared first on EDN.

EDN announces Product of the Year Awards

Tue, 02/03/2026 - 20:30

EDN has announced the winners of the annual Electronic Products Product of the Year Awards in the January/February digital magazine. Now in their 50th year, the awards saw EDN editors evaluate over 100 products across 13 component categories to select the best new components. These categories include analog/mixed-signal ICs, development kits, digital ICs, electromechanical devices, interconnects, IoT platforms, modules, optoelectronics, passives, power, RF/microwave, sensors, and test and measurement.

These award-winning products demonstrate a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, and/or the potential for new product designs and opportunities. This year, the awards have two ties, in the categories of power and sensors.

Also in the January/February issue, we look at some of the most advanced electronic components launched at the Consumer Electronics Show (CES). This year’s show highlighted the rise of AI across applications from automotive to smart glasses. Chipmakers are placing big bets on edge AI as a key growth area along with robotics, IoT, and automotive.

A few new AI chip advances announced at CES include Ambarella Inc.’s CV7 edge AI vision system-on-chip, optimized for a wide range of AI perception applications, and Ambiq Micro’s industry-first ultra-low-power neural processing unit built on its Subthreshold Power Optimized Technology platform and designed for real-time, always-on AI at the edge.

Though chiplets hold big promises in delivering more compute capacity and I/O bandwidth, design complexity has been a challenge. Cadence Design Systems Inc. and its IP partners may have made this a bit easier with pre-validated chiplets, targeting physical AI, data center, and high-performance-computing applications. At CES, Cadence announced a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets while reducing risk.

We also spotlight the top 10 edge AI chips with an updated ranking, curated by AspenCore’s resident AI expert, EE Times senior reporter Sally Ward-Foxton. As highlighted by several CES product launches, more and more AI chips are being designed for every application niche as edge devices become AI-enabled. These chips range from those handling multimodal large language models in edge devices to those designed for vision processing and for minimizing power consumption in always-on applications.

Giordana Francesca Brescia, contributing writer for Embedded.com, looks at microcontrollers with on-chip AI and how they are transforming embedded hardware into intelligent nodes capable of analyzing and generating information. In addition to hardware innovations, she also covers software development and key areas of application such as biomedical and industrial automation.

We also spotlight several emerging trends in 2026, from 800-VDC power architectures in AI factories and battery energy storage systems (BESSes) to advances in autonomous farming and power devices for satellites.

The wide adoption of AI models has led to a redesign of data center infrastructure, according to contributing writer Stefano Lovati. Traditional data centers are being replaced with AI factories to meet the computational capacity and power requirements needed by today’s machine-learning and generative AI workloads.

However, a single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range, Lovati said. This has led to the design of an 800-VDC power architecture, which is designed to support the multi-megawatt power demand required by the compute racks of next-generation AI factories.

Lovati also discusses how wide-bandgap semiconductors such as silicon carbide and gallium nitride can deliver performance and efficiency benefits when implementing an 800-VDC architecture.

The adoption of BESSes is primarily driven by the need to improve efficiency and stability in power distribution networks. BESSes can balance supply and demand by storing energy from both renewable sources and the conventional power grid, Lovati said. This helps stabilize power grids and optimize power use.

Lovati covers emerging trends in BESSes, including advances in battery technologies, hybrid energy storage systems—integrating batteries with alternative energy storage technologies such as supercapacitors or flywheels—and AI-based solutions for optimization. Some of the alternatives to lithium-ion discussed include flow batteries and sodium-ion and aluminum-ion batteries.

We also look at the challenges of selecting the right power supply components for satellites. Not only do they need to be rugged and small, but they must also be configurable for customization.

The configurability of power supplies is an important factor for meeting a variety of space mission specifications, according to Amit Gole, marketing product manager for the high-reliability and RF business unit at Microchip Technology.

Voltage levels in the electrical power bus are generally standardized to certain values; however, the voltage of the solar array is not always standardized, Gole said, which calls for a redesign of all of the converters in the power subsystems, depending on the nature of the mission.

Because this redesign can result in cost and development time increases, it is important to provide DC/DC converters and low-dropout regulators across the power architecture that have standard specifications while providing the flexibility for customization depending on the system and load voltages, he said.

Gole said functions such as paralleling, synchronization, and series connection are of key importance for power supplies when considering the specifications of different space missions.

We also look at the latest advances in smart farming. With technological innovation needed to improve the agricultural industry and meet growing global food demands, smart farming has emerged to support farming operations through the latest advances in robotics, sensor technology, and communications, according to Liam Critchley, contributing writer for EE Times.

One of the key trends in smart farming is the use of drones, which help optimize a variety of farming operations. These include monitoring crop and soil health, communicating updates to the farmer, and active operations such as planting seeds and field spraying. Drones leverage technologies such as advanced sensors, communications, IoT and, in some cases, AI.

Critchley said one of the biggest developing areas is the integration of AI and machine learning. While some drones have these features, many smart drones will soon use AI to identify various pests and diseases autonomously, eliminating the need for human intervention.

Cover image: Adobe Stock

The post EDN announces Product of the Year Awards appeared first on EDN.

EDN announces winners of the 2025 Product of the Year Awards

Tue, 02/03/2026 - 15:05

The annual awards, now in their 50th year, recognize outstanding products that represent any of the following qualities: a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, and the potential for new product designs/opportunities. EDN editors evaluated 100+ products across 13 categories. There are two ties, in the power and sensors categories. Here are this year’s winners:

  • Allegro MicroSystems Inc. and SensiBel (Sensors)
  • Ambiq (Development Kits)
  • Cree LED (Optoelectronics)
  • Circuits Integrated Hellas (Modules)
  • Empower Semiconductor and Ferric Corp. (Power)
  • Littelfuse Inc. (Passives)
  • Marvell Technology Inc. (Interconnects)
  • Morse Micro Ltd. (IoT Platforms)
  • Renesas Electronics Corp. (Digital ICs)
  • Rohde & Schwarz (Test & Measurement)
  • Semtech Corp. (RF/Microwave)
  • Sensata Technologies (Electromechanical)
  • Stathera Inc. (Analog/Mixed-Signal ICs)
Allegro MicroSystems Inc. Sensors: ACS37100 magnetic current sensor

Allegro MicroSystems’ ACS37100 is a fully integrated tunneling magnetoresistive (TMR) current sensor that delivers high accuracy and low noise for demanding control loop applications. Marking a critical inflection point for magnetic sensors, it is the industry’s first commercially available magnetic current sensor to achieve 10-MHz bandwidth and 50-ns response time, the company said.

The ACS37100 magnetic current sensor, based on Allegro’s proprietary XtremeSense TMR technology, is 10× faster and generates 4× lower noise than alternative Hall-based sensors. This performance solves challenges in high-voltage power conversion, especially related to gallium nitride (GaN) and silicon carbide (SiC) solutions. The ACS37100 helps power system designers leverage the full potential of fast-switching GaN and SiC FETs by providing precise current measurement and integrated overcurrent fault detection.

The current sensor delivers low noise of 26 mA RMS across the full 10-MHz bandwidth, enabling precise, high-speed current measurements for more accurate and responsive system performance.

While GaN and SiC promise greater power density and efficiency, the faster switching speeds of wide-bandgap semiconductors create significant control challenges. Conventional magnetic current sensors, limited to sub-megahertz bandwidths, lack the speed and precision to provide the high-fidelity, real-time data required for stable control and protection loops, Allegro MicroSystems said.

Target applications include electric vehicles, clean-energy power conversion systems, and AI data center power supplies, in which the 10-MHz bandwidth and 50-ns response time provide the high-fidelity data needed. The operating temperature range is –40°C to 150°C.
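As a back-of-the-envelope check (our arithmetic, not Allegro's), the quoted 26-mA RMS noise over a 10-MHz bandwidth implies an input-referred noise density of roughly 8 µA/√Hz, and the first-order rule tr ≈ 0.35/BW shows the 50-ns response time is consistent with the bandwidth:

```python
import math

# Sanity check of the published ACS37100 figures (values from the article;
# the flat white-noise model across the band is our simplifying assumption).
bandwidth_hz = 10e6          # 10-MHz bandwidth
noise_rms_a = 26e-3          # 26 mA RMS integrated noise

# Equivalent input-referred noise density, assuming white noise
noise_density = noise_rms_a / math.sqrt(bandwidth_hz)
print(f"noise density ≈ {noise_density * 1e6:.1f} µA/√Hz")  # ≈ 8.2 µA/√Hz

# First-order systems obey t_rise ≈ 0.35 / BW, so a 10-MHz bandwidth
# implies roughly 35 ns, in line with the quoted 50-ns response time
t_rise = 0.35 / bandwidth_hz
print(f"implied rise time ≈ {t_rise * 1e9:.0f} ns")  # 35 ns
```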

Allegro MicroSystems’ ACS37100 TMR magnetic current sensor (Source: Allegro MicroSystems Inc.)

Ambiq Development Kits: neuralSPOT AI development kit

Ambiq’s neuralSPOT software development kit (SDK) is designed specifically for embedded AI on the company’s ultra-low-power Apollo system-on-chips (SoCs). It helps AI developers handle the complex process of model integration with a streamlined and scalable workflow.

The SDK provides a comprehensive toolkit comprising Ambiq-optimized libraries, feature extractors, device drivers, and pre-trained AI models, making it easier for developers to quickly prototype, test, and deploy models using real-world sensor data while integrating optimized static libraries into production applications. This reduces both development effort and energy consumption.

The neuralSPOT SDK and Toolkit bridge the gap between AI model creation, deployment, and optimization, Ambiq said, enabling developers to move from concept to prototype in minutes, not days. This is thanks in part to its intuitive workflow, pre-validated model templates, and seamless hardware integration.

The latest neuralSPOT V1.2.0 Beta release includes ready-to-use example implementations of popular AI applications, such as human activity recognition for wearable and fitness analytics, ECG monitoring, keyword spotting, speech enhancement, and speaker identification.

Key challenges that the neuralSPOT SDK addresses include high power consumption, energy limits, limited development tools, and complex setup. This is particularly important when enabling AI on compact, battery-powered edge devices in which manufacturers must balance performance, power efficiency, and usability.

The SDK provides a unified, developer-friendly toolkit with Ambiq-optimized libraries, drivers, and ready-to-deploy AI models, which reduces setup and integration time from days to hours. It also simplifies model validation for consistent results and quicker debugging and provides real-time insights into energy performance, helping developers meet efficiency goals early in the design process.

Ambiq’s neuralSPOT for the Apollo5 SoCs (Source: Ambiq)

Circuits Integrated Hellas Modules: Kythrion antenna-in-package

The Kythrion chipset from Circuits Integrated Hellas (CIH) is called a game-changer for satellite communications. It is the first chipset to integrate transmit, receive, and antenna functions into a proprietary 3D antenna-in-package and system-in-package architecture.

By vertically stacking III-V semiconductors (such as gallium arsenide and GaN) with silicon, Kythrion achieves more than 60% reductions in size, weight, power, and cost compared with traditional flat-panel antenna modules, according to the company. This integration eliminates unnecessary printed-circuit-board (PCB) layers by consolidating RF, logic, and antenna elements into a dense 3D chip for miniaturization and optimized thermal management within the package. This also simplifies system complexity by combining RF and logic control on-chip.

CIH said this leap in miniaturization allows satellites to carry more advanced payloads without increasing mass or launch costs, while its 20× bandwidth improvement delivers real-time, high-throughput connectivity. These features deliver benefits to aerospace, defense, and commercial networks, with applications in satellite broadband, 5G infrastructure, IoT networks, wireless power, and defense and aviation systems.

Compared with traditional commercial off-the-shelf phased-array antennas, which typically require hundreds of separate chips (e.g., 250 transmit and 250 receive chips) and a footprint around 4U, Kythrion reduces the module count to just 50 integrated modules, fitting into a compact 1U form factor. This cuts weight from 3–4 kg down to approximately 1.5 kg, while power consumption is lowered by 15%. Cost per unit is also significantly reduced, CIH said.
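A quick check of the comparison figures quoted above (our arithmetic, based on the article's numbers):

```python
# Cross-check CIH's comparison figures (all values from the article).
cots_chips = 250 + 250        # separate Tx + Rx chips in a typical COTS phased array
kythrion_modules = 50         # integrated Kythrion modules
print(f"module-count reduction: {cots_chips / kythrion_modules:.0f}x")  # 10x

weight_before_kg = (3 + 4) / 2   # article quotes 3-4 kg; midpoint used here
weight_after_kg = 1.5
saving = 1 - weight_after_kg / weight_before_kg
print(f"weight saving ≈ {saving:.0%}")  # ≈ 57%
```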

The company also considered sustainability when designing the Kythrion antenna-in-package. It uses existing semiconductor processes to eliminate capital-intensive retooling, which lowers carbon impact. In addition, by reducing satellite mass, each kilogram saved in satellite payload can reduce up to 300 kg of CO2 emissions per launch, according to CIH.

CIH’s Kythrion antenna-in-package (Source: Circuits Integrated Hellas)

Cree LED, a Penguin Solutions brand Optoelectronics: XLamp XP-L Photo Red S Line LEDs

Advancing horticulture lighting, Cree LED, a Penguin Solutions brand, launched the XLamp XP-L Photo Red S Line LEDs, optimized for large-scale growing operations, including greenhouses and vertical farms, with higher efficiency and durability.

Claiming a new standard in efficiency and durability for horticultural LED lighting, the XLamp XP-L Photo Red S Line LEDs provide a 6% improvement in typical wall-plug efficiency over the previous generation, reaching 83.5% at 700 mA and 25°C. Horticultural customers can keep the same output at lower power consumption to reduce operating costs, or they can lower initial costs with a redesign that cuts the number of Photo Red LEDs required by up to 35%, Cree LED said.

Thanks to its advanced S Line technology, the XP-L Photo Red LEDs offer high sulfur and corrosion resistance that extend their lifespan and deliver reliable performance. These features reduce maintenance costs while enabling the devices to withstand harsh greenhouse environments, the company said.

Other key specifications include a maximum drive current of 1,500 mA, a low thermal resistance of 1.15°C/W, and a wide viewing angle of 125°. The LEDs are binned at 25°C. They are RoHS- and REACH-compliant.

These LEDs also provide seamless upgrades in existing designs with the same 3.45 × 3.45-mm XP package as the previous XP-G3 Photo Red S Line LEDs.

Cree LED’s XLamp XP-L Photo Red S Line LEDs (Source: Cree LED, a Penguin Solutions brand)

Empower Semiconductor Power: Crescendo vertical power delivery

Empower Semiconductor describes Crescendo as the industry’s first true vertical power delivery platform designed for AI and high-performance-computing processors. The Crescendo chipset sets a new industry benchmark with 20× faster response and breakthrough sustainability and enables gigawatt-hours in energy savings per year for a typical AI data center.

The vertical architecture achieves multi-megahertz bandwidth, 5× higher power density, and over 20% lower delivery losses while minimizing voltage droop and accelerating transient response. The result is up to 15% lower xPU power consumption and a significant boost in performance per watt, claiming a new benchmark for efficiency and scalability in AI data center systems.

The Crescendo platform is powered by Empower’s patented FinFast architecture. Scalable beyond 3,000 A, Crescendo integrates the regulators, magnetics, and capacitors into a single, ultra-thin package that enables direct placement underneath the SoC. This relocates power conversion to where it’s needed most for optimum energy and performance, according to the company.

Empower said the Crescendo platform is priced to be on par with existing power delivery solutions while offering greater performance, energy savings, and lower total cost of ownership for data centers.

Empower’s Crescendo vertical power delivery (Source: Empower Semiconductor)

Ferric Corp. Power: Fe1766 DC/DC step-down power converter

Ferric’s Fe1766 160-A DC/DC step-down power converter offers industry-leading power density and performance in an ultra-compact, 35-mm2 package with just 1-mm height. The Fe1766 is a game-changer for high-performance computing, AI accelerators, and data center processors with its extremely compact form factor, high power density, and 100× faster switching speeds for precise, high-bandwidth regulation, Ferric said.

Integrating inductors, capacitors, FETs, and a controller into a single module, the Fe1766 offers 4.5-A/mm2 power density, which makes it 25× smaller than traditional alternatives, according to the company. The integrated design translates into a board area reduction of up to 83%.

The Fe1766 switches at 30 to 100 MHz, enabling extremely fast power conversion with high-bandwidth regulation, 30% better efficiency than conventional solutions, and 20% lower cost than existing designs. Other features include real-time telemetry (input voltage, output voltage, current, and temperature) and comprehensive fault protection (UVLO, OVP, UVP, OCP, OTP, etc.), providing both reliability and performance.

However, the most significant feature is its scalability: gang operation of up to 64 devices in parallel delivers more than 10 kA directly to the processor core. This suits next-generation multi-core processors, GPUs, FPGAs, and ASICs in high-density, high-performance systems, keeping pace with growth in computing power and core counts, particularly in AI, machine learning, and data centers.
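Both headline numbers are easy to verify from the figures quoted above (our arithmetic, not Ferric's):

```python
# Cross-check the Fe1766 claims quoted in the article.
current_per_module_a = 160   # 160-A module
package_area_mm2 = 35        # 35-mm² package

density = current_per_module_a / package_area_mm2
print(f"current density ≈ {density:.1f} A/mm²")  # ≈ 4.6, matching the ~4.5-A/mm² claim

# Gang operation: 64 modules in parallel
total_current_a = 64 * current_per_module_a
print(f"64 modules in parallel: {total_current_a} A")  # 10,240 A, i.e., over 10 kA
```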

Ferric’s Fe1766 DC/DC step-down power converter (Source: Ferric Corp.)

Littelfuse Inc. Passives: Nano2 415 SMD fuse

The Littelfuse Nano2 415 SMD fuse is the industry’s first 277-VAC surface-mount fuse rated for a 1,500-A interrupting current. Previously, this was achievable only with larger through-hole fuses, according to the company. It allows designers to upgrade protection and transition to automated reflow processes, reducing assembly costs while improving reliability and surge-withstand capability.

The Nano2 415 SMD fuse bridges the gap between legacy cartridge and compact SMD solutions while advancing both performance and manufacturability, Littelfuse said. Its compact, 15 × 5-mm footprint and time-lag characteristic protect high-voltage, high-fault-current circuits while enabling reflow-solder assembly. It is compliant with UL/CSA/NMX 248-1/-14 and EN 60127-1/-7.

The Nano2 415 SMD Series offers high I2t performance. It is halogen-free and RoHS-compliant. Applications include industrial power supplies, inverters, and converters; appliances and HVAC systems; EV chargers and lighting control; and smart building and automation systems.

Littelfuse’s Nano2 415 SMD fuse (Source: Littelfuse Inc.)

Marvell Technology Inc. Interconnects: 3-nm 1.6-Tbits/s PAM4 Interconnect Platform

The Marvell 3-nm 1.6-Tbits/s PAM4 Interconnect Platform claims the industry’s first 3-nm process node optical digital-signal processor (DSP) architecture, targeting bandwidth, power efficiency, and integration for AI and cloud infrastructure. The platform integrates eight 200G electrical lanes and eight 200G optical lanes in a compact, standardized module form factor.

The new platform sets a new standard in optical interconnect technology by integrating advanced laser drivers and signal processing in a single, compact device, Marvell said. This reduces power per bit and simplifies system design across the entire AI data center network stack.

The 3-nm PAM4 platform addresses the I/O bandwidth bottleneck by combining next-generation SerDes technology and laser driver integration to achieve higher bandwidth and power performance. It leverages 200-Gbits/s SerDes and integrated optical modulator drivers to reduce 1.6-Tbits/s optical module power by over 20%. The energy-efficiency improvement reduces operational costs and enables new AI server and networking architectures to meet the requirements for higher bandwidth and performance for AI workloads, within the significant power constraints of the data center, Marvell said.

The 1.6-Tbits/s PAM4 DSP enables low-power, high-speed optical interconnects that support scale-out architectures across racks, rows, and multi-site fabrics. Applications include high-bandwidth optical interconnects in AI and cloud data centers, GPU-to-GPU and server interconnects, rack-to-rack and campus-scale optical networking, and Ethernet and InfiniBand scale-out AI fabrics.

The DSP platform reduces module design complexity and power consumption for denser optical connectivity and faster deployment of AI clusters. With a modular architecture that supports 1.6 Tbits/s in both Ethernet and InfiniBand environments, this platform allows hyperscalers to future-proof their infrastructure for the transition to 200G-per-lane signaling, Marvell said.

Morse Micro Pty. Ltd. IoT Platforms: MM8108 Wi-Fi HaLow SoC

Morse Micro claims that the MM8108 Wi-Fi HaLow SoC is the smallest, fastest, lowest-power, and farthest-reaching Wi-Fi chip. The MM8108, built on the IEEE 802.11ah standard, establishes a new benchmark for performance, efficiency, and scalability in IoT connectivity. It delivers data rates up to 43.33 Mbits/s using the industry’s first sub-gigahertz, 256-QAM modulation, combining long-range operation with true broadband throughput.

The MM8108 Wi-Fi HaLow extends Wi-Fi’s reach into the sub-1-GHz spectrum, enabling multi-kilometer connectivity, deep penetration through obstacles, and support for 8,000+ devices per access point. Outperforming proprietary LPWAN and cellular alternatives while maintaining full IP compatibility and WPA3 enterprise security, the wireless platform reduces deployment cost and power consumption by up to 70%, accelerates certification, and expands Wi-Fi’s use beyond homes and offices to cities, farms, and factories, Morse Micro said.

The MM8108 SoC’s integrated 26-dBm power amplifier and low-noise amplifier achieve “outstanding” link budgets and global regulatory compliance without external SAW filters. It also simplifies system design and reduces power draw with a 5 × 5-mm BGA package, USB/SDIO/SPI interfaces, and host-offload capabilities. This allows devices to run for years on a coin-cell or solar battery, Morse Micro said.

The MM8108-RD09 USB dongle complements the SoC, enabling fast HaLow integration with existing Wi-Fi 4/5/6/7 infrastructure. It demonstrates plug-and-play Wi-Fi HaLow deployment for industrial, agricultural, smart city, and consumer applications. The dongle is fully IEEE 802.11ah–compliant and Wi-Fi CERTIFIED HaLow-ready, allowing developers to test and commercialize Wi-Fi HaLow solutions quickly.

Together, the MM8108 and RD09 combine kilometer-scale range, 100× lower power consumption, and 10× higher capacity than conventional Wi-Fi while maintaining the simplicity, interoperability, and security of the wireless standard, the company said.

Applications range from smart cities (lighting, surveillance, and environmental monitoring networks spanning kilometers) and industrial IoT (predictive maintenance, robotics, and asset tracking in factories and warehouses) to agriculture (solar-powered sensors for crop, irrigation, and livestock management), retail and logistics (smart shelves, POS terminals, and real-time inventory tracking), and healthcare (long-range, low-power connectivity for remote patient monitoring and smart appliances).

Morse Micro’s MM8108 Wi-Fi HaLow SoC (Source: Morse Micro Pty. Ltd.)

Renesas Electronics Corp. Digital ICs: RA8P1 MCUs

Renesas’s RA8P1 group is the first group of 32-bit AI-accelerated microcontrollers (MCUs) powered by the high-performance Arm Cortex-M85 (CM85) with Helium MVE and Ethos-U55 neural processing unit (NPU). With advanced AI, it enables voice, vision, and real-time-analytics AI applications on a single chip. The NPU supports commonly used networks, including DS-CNN, ResNet, Mobilenet, and TinyYolo. Depending on the neural network used, the Ethos-U55 provides up to 35× more inferences per second than the Cortex-M85 processor on its own, according to the company.

The RA8P1, optimized for edge and endpoint AI applications, uses the Ethos-U55 NPU to offload compute-intensive convolutional and recurrent neural-network operations from the CPU. The NPU delivers up to 256 MACs per cycle, or 256 GOPS of AI performance at 500 MHz, alongside breakthrough CPU performance of over 7,300 CoreMarks, Renesas said.
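The 256-GOPS figure follows directly from the quoted MAC rate, using the common convention that each multiply-accumulate counts as two operations:

```python
# Derive the GOPS figure from the article's MAC rate and clock.
macs_per_cycle = 256
clock_hz = 500e6
ops_per_mac = 2   # one multiply + one add, the usual convention for GOPS

gops = macs_per_cycle * clock_hz * ops_per_mac / 1e9
print(f"{gops:.0f} GOPS")  # 256 GOPS
```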

The RA8P1 MCUs integrate high-performance CPU cores with large memory, multiple external memory interfaces, and a rich peripheral set optimized for AI applications.

The MCUs, built on an advanced 22-nm ultra-low-leakage process, are available in single- and dual-core options, with a Cortex-M33 core embedded on the dual-core MCUs. Single- and dual-core devices in 224- and 289-BGA packages address diverse use cases across broad markets. The process also enables the use of embedded magnetoresistive RAM, which offers faster write speeds, in the new MCUs.

The MCUs also provide advanced security. Secure Element–like functionality, along with Arm TrustZone, is built in with advanced cryptographic security IP, immutable storage, and tamper protection to enable secure edge AI and IoT applications.

The RA8P1 MCUs are supported by Renesas’s Flexible Software Package, a comprehensive set of hardware and software development tools, and RUHMI (Renesas Unified Heterogenous Model Integration), a highly optimized AI software platform providing all necessary tools for AI development, model optimization, and conversion, which is fully integrated with the company’s e2 studio integrated design environment.

Renesas Electronics’ RA8P1 MCU group (Source: Renesas Electronics Corp.)

Rohde & Schwarz Test & Measurement: FSWX signal and spectrum analyzer

The Rohde & Schwarz FSWX is the first signal and spectrum analyzer with multichannel spectrum analysis, cross-correlation, and I/Q preselection. It features an internal multi-path architecture and high RF performance, with an internal bandwidth of 8 GHz, allowing for comprehensive analysis even of complex waveforms and modulation schemes.

According to Rohde & Schwarz, this represents a fundamental paradigm shift in signal-analysis technology. Cross-correlation cancels the inherent noise of the analyzer and gives a clear view of the device under test, pushing the noise level down to the physical limit for higher dynamic range in noise, phase noise, and EVM measurements.

By eliminating its own noise contribution (a big challenge in measurement science), the FSWX reveals signals 20–30 dB below what was previously measurable, enabling measurements that were impossible with traditional analyzers, the company said.

Addressing critical challenges across multiple industries, the multichannel FSWX can measure two signal sources simultaneously through synchronous input ports, each with 4-GHz analysis bandwidth. This opens up phase-coherent measurements of antenna arrays used in beamforming for wireless communications, as well as in radar sensors and electronic warfare systems. For 5G and 6G development, the cross-correlation feature enables accurate EVM measurements below –50 dB that traditional analyzers cannot achieve, according to Rohde & Schwarz.
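For readers more used to percentage EVM, –50 dB corresponds to roughly 0.32% RMS under the standard 20·log10 convention (our conversion, not a Rohde & Schwarz figure):

```python
# Convert an EVM figure quoted in dB to percent (20*log10 convention).
def evm_db_to_percent(evm_db: float) -> float:
    return 10 ** (evm_db / 20) * 100

print(f"{evm_db_to_percent(-50):.2f}%")  # 0.32%
```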

In radar and electronic warfare applications, the dual channels can simultaneously measure radar signals and potential interference from 5G/Wi-Fi systems. In addition, for RF component makers, the FSWX performs traditional spectrum analyzer measurements, enabling Third Order Intercept measurements near the thermal noise floor without any internal or external amplification.

The FSWX uses broadband ADCs with filter banks spanning the entire operating frequency range, allowing for pre-selected signal analysis while eliminating the need for YIG filters. This solves “a 50-year-old compromise between bandwidth and selectivity in spectrum analyzer design,” according to the company, while providing improved level-measurement accuracy and much faster sweep times.

No other manufacturer offers dual synchronous RF inputs with phase coherence, cross-correlation for general signals, 8-GHz preselected bandwidth, and multi-domain triggering across channels, according to Rohde & Schwarz. This makes it an architectural innovation rather than an incremental improvement.

Rohde & Schwarz’s FSWX signal and spectrum analyzer (Source: Rohde & Schwarz)

Semtech Corp. RF/Microwave: LR2021 RF transceiver

The LR2021 is the first transceiver chip in Semtech’s LoRa Plus family, leveraging its fourth-generation LoRa IP that supports both terrestrial and SATCOM across sub-gigahertz, 2.4-GHz ISM bands, and licensed S-band. The transceiver is designed to be backward-compatible with previous LoRa devices for seamless LoRaWAN compatibility while featuring expanded physical-layer modulations for fast, long-range communication.

The LR2021 is the first transceiver to unify terrestrial (sub-gigahertz, 2.4-GHz ISM) and satellite (licensed S-band) communications on a single chip, eliminating the traditional requirement for separate radio platforms. This enables manufacturers to deploy hybrid terrestrial-satellite IoT solutions with single hardware designs, reducing development complexity and inventory costs for global deployments.

The LR2021 also delivers a high data rate of up to 2.6 Mbits/s, enabling the transmission of higher data-rate content with outstanding link budget and efficiency. The transceiver enables the use of sensor-collected data to train AI models, resulting in better control of industrial applications and support of new applications.

This represents a 13× improvement over Gen 3 LoRa transceivers (Gen 3 SX1262: maximum 200-kbits/s LoRa data rate), opening up new application categories previously impossible with LPWAN technology, including real-time audio classification, high-resolution image recognition, and edge AI model training from battery-powered sensors.
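The 13× figure checks out against the quoted data rates (our arithmetic):

```python
# Ratio of the quoted peak data rates (values from the article).
lr2021_bps = 2.6e6      # LR2021 peak data rate, 2.6 Mbits/s
sx1262_bps = 200e3      # Gen 3 SX1262 maximum LoRa data rate, 200 kbits/s
print(f"improvement: {lr2021_bps / sx1262_bps:.0f}x")  # 13x
```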

It also offers enhanced sensitivity down to –142 dBm at SF12/125 kHz, representing a 6-dB improvement over Gen 3 devices (Gen 3 SX1262: –148-dBm maximum sensitivity at lower spreading factors, typically –133-dBm operational sensitivity). The enhanced sensitivity extends coverage range and improves deep-indoor penetration in challenging deployment environments.

Simplifying global deployment, the transceiver supports multi-region deployment via a single-SKU design. The integration reduces bill-of-materials cost, PCB footprint, and power consumption compared with previous LoRa transceivers. Its increased frequency-offset tolerance removes the need for TCXOs and tight thermal control, eliminating components that traditionally added cost and complexity to multi-region designs.

The device is compatible with various low-power wireless protocols, including Amazon Sidewalk, Meshtastic, W-MBUS, Wi-SUN FSK, and Z-Wave when integrated with third-party stack offerings.

Semtech’s LR2021 RF transceiver (Source: Semtech Corp.)

Sensata Technologies Inc. Electromechanical: High Efficiency Contactor

Sensata claims a breakthrough electromechanical solution with its High Efficiency Contactor (HEC), designed to accelerate the transition to next-generation EVs by enabling seamless compatibility between 400-V and 800-V battery architectures. As the automotive industry moves toward ultra-fast charging and higher efficiency, the HEC targets vehicles that can charge rapidly at both legacy and next-generation charging stations.

By enabling seamless reconfiguration between 400-V and 800-V battery systems, the HEC allows EVs to charge efficiently at both legacy 400-V charging stations and emerging 800-V ultra-fast chargers, ensuring compatibility and eliminating infrastructure barriers for OEMs and end users.

A key differentiator is its ability to dramatically reduce system complexity and cost. By integrating three high-voltage switches into a single, compact device, the HEC achieves up to a 50% reduction in component count compared with traditional battery-switching solutions, according to Sensata, simplifying system integration and lowering costs.

The HEC withstands short-circuit events up to 25 kA and mechanical shocks greater than 90 g while maintaining ultra-low contact resistance (~50 μΩ) for minimal energy loss.
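The ~50-μΩ contact-resistance figure translates directly into dissipation. A quick worked example follows; the current values are illustrative assumptions, not Sensata ratings:

```python
# Rough I^2 * R dissipation at the ~50-uOhm contact resistance cited
# above. The currents are illustrative assumptions, not device ratings.

r_contact = 50e-6  # ~50 uOhm contact resistance

for i_a in (100, 400, 1000):
    p_w = i_a ** 2 * r_contact
    print(f"{i_a:5d} A -> {p_w:6.2f} W dissipated in the contactor")
```

Note the quadratic dependence on current: even at 1,000 A, the ultra-low contact resistance keeps dissipation to tens of watts.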

The HEC features a unique mechanical synchronization that ensures safer operation by eliminating the risk of short-circuit events (a critical safety advancement for high-voltage EV systems). It also offers a bi-stable design and ultra-low contact resistance that contribute to greater energy efficiency during both charging and driving.

The bi-stable design eliminates the need for holding power, further improving energy efficiency, Sensata said.

 

The HEC targets automotive, truck, and bus applications including vehicle-to-grid, autonomous driving, and megawatt charging scenarios. It is rated to ASIL-D.

Sensata’s High Efficiency Contactor (Source: Sensata Technologies)

SensiBel Sensors: SBM100B MEMS microphone

SensiBel’s SBM100B optical MEMS digital output microphone delivers 80-dBA signal-to-noise ratio (SNR) and 146-dB SPL acoustic overload point (AOP). Leveraging its patented optical sensing technology, the SBM100B achieves performance significantly surpassing anything that is available on the market today, according to the company. It delivers the same audio recording quality that users experience with professional studio microphones but in a small-form-factor microphone.

The 80-dB SNR delivers cleaner audio, reducing hiss and preserving clarity in quiet recordings. It is a significant achievement in noise and dynamic range performance for MEMS microphones, and it’s a level of audio performance that capacitive and piezo MEMS microphone technologies cannot match, the company said.

The SBM100B is also distortion-proof in high-noise environments. Offering an AOP of up to 146-dB SPL, the SBM100B delivers high performance, even in very loud environments, which often have high transient peaks that easily exceed the overload point of competitive microphones, SensiBel said.

The microphone offers studio-quality performance in a compact MEMS package (6 × 3.8 × 2.5-mm, surface-mount, reflow-solderable, bottom-port). With a dynamic range of 132 dB, it prevents distortion in loud environments while still capturing subtle audio details. It supports standard PDM, I2S, and TDM digital interfaces.

The SBM100B also supports multiple operational modes to optimize performance and battery life, allowing designers to choose between highest performance and lowest power while still operating with exceptional SNR. It also supports a sleep mode with very low current consumption. An optional I2C interface is available for customizing built-in microphone functions, including bi-quad filters and digital gain.

Applications include general conferencing systems, industrial sound detection, microphone arrays, over-the-ear and true wireless stereo headsets and earbuds, pro audio devices, and spatial audio, including VR/AR headsets, 3D soundbars, and field recorders.

SensiBel’s SBM100B MEMS microphone (Source: SensiBel)

Stathera Inc. Analog/Mixed-Signal ICs: ST320 DualMode MEMS oscillator

Stathera’s ST320 DualMode MEMS oscillator, in a 2.5 × 2.0 × 0.95-mm package, is a timing solution that generates both kilohertz and megahertz signals from a single resonator. It is claimed to be the first DualMode MEMS timing device capable of replacing two traditional oscillators.

The DualMode capability provides both a kilohertz clock (32.768 kHz) for low-power modes and a megahertz clock (configurable from 1 to 40 MHz) for control and communication. This simplifies embedded-system design, enhances performance and robustness, extends battery life, and reduces PCB footprint and system cost.

Key specifications include a frequency stability of ±20 ppm, a voltage range of 1.62 to 3.63 V, and an operating temperature of –40°C to 85°C. Other features include LVCMOS output and four configurable power modes. This device can be used in consumer, wearables, IoT, edge AI, and industrial applications.
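For context, the ±20-ppm stability spec works out to the following absolute frequency error at the two output frequencies (pure arithmetic on the figures above):

```python
# Worst-case frequency error implied by the +/-20-ppm stability spec,
# evaluated at the 32.768-kHz output and a 40-MHz output setting.
PPM = 20e-6

for f_hz in (32_768, 40_000_000):
    print(f"{f_hz:>10} Hz: +/- {f_hz * PPM:.3f} Hz worst case")
```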

Stathera’s ST320 DualMode MEMS oscillator (Source: Stathera Inc.)

The post EDN announces winners of the 2025 Product of the Year Awards appeared first on EDN.

Short push, long push for sequential operation of multiple power supplies

Tue, 02/03/2026 - 15:00

Industrial systems normally use both analog and digital circuits. Digital circuits, including microcontrollers, typically operate at 5 VDC, while analog circuits generally operate at either 12 or 15 VDC. In some systems, it may be necessary to switch on the power supplies in sequence: first 5 VDC to the digital circuits and then 15 VDC to the analog circuits.

Wow the engineering world with your unique design: Design Ideas Submission Guide

During switch-off, the order reverses: first the 15-VDC supply is removed, then the 5-VDC supply. For such requirements, the circuit in Figure 1 comes in handy.

Figure 1 Single pushbutton switches on or off 5 V and 15 V supplies sequentially. LEDs D1, D2 indicate the presence of 5 V and 15 V supplies. Adequate heat sinks may be provided for Q2 and Q3, depending upon the load currents. Suitable capacitors may be added at the outputs of 5 V and 15 V.

A video explanation of this circuit can be found below:

When you push the button momentarily, 5 VDC is applied to the digital circuits, including the microcontroller, and then 15 VDC is applied to the analog circuits after a preset delay. When you push button SW1 for a long time, say 2 seconds, the 15-V supply is withdrawn first, followed by the 5-V supply. Hence, one push button performs both sequential ON and OFF functions.

This Design Idea (DI) is intended for MCU-based projects. No additional components/circuitry are needed to implement this function. When you push SW1 (2-pole push button) momentarily, 5 VDC is extended to the digital circuit through the closure of the first pole of SW1. The microcontroller code should now load HIGH to the output port bit PB0. Due to this, Q1 conducts, pulling the gate of Q2 to LOW. Hence, Q2 now conducts and holds 5 VDC to the digital circuit even after releasing SW1.

Next, the code should load HIGH to output port bit PB1 after a preset delay. This makes Q4 conduct and pull the gate of Q3 LOW. Hence, Q3 conducts, and 15 VDC is extended to the analog circuit. Now, the MCU can perform its other intended functions.

To switch off the supplies in sequence, push SW1 for a long time, say 2 seconds. Through the second pole of SW1, input port line PB2 is pulled LOW. The microcontroller code must detect this 2+ second LOW, either by interrupt or by polling, and start the switch-off sequence by loading LOW to port bit PB1, which switches off Q4 and hence Q3, removing the 15-V supply from the analog circuit. Next, the code should load LOW to PB0 after a preset delay. This switches off Q1 and hence Q2, disconnecting 5 VDC from the digital/microcontroller circuit.

Thus, a single push button switches on and switches off the 5-V and 15-V supplies in sequence. This idea can be extended to any number of circuits and sequences, as needed, and is intended for MCU-based projects without introducing extra components or circuitry. In this design, an ATmega328P MCU and IRF4435 P-channel MOSFETs are used. For circuits without an MCU, I will offer a scheme to perform this function in my next DI.
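The firmware side of this sequence is a small state machine. The sketch below models it in plain Python for clarity (the real code would run in C on the ATmega328P); the PB0/PB1/PB2 names follow the article, while the inter-rail delay value is an illustrative assumption.

```python
# Simulation of the one-button sequencing logic described above.
# PB0/PB1/PB2 names follow the article; LONG_PRESS_S matches the
# ~2-second figure, RAIL_DELAY_S is an illustrative assumption.

LONG_PRESS_S = 2.0   # hold longer than this to start the OFF sequence
RAIL_DELAY_S = 0.1   # preset delay between the two rails (assumed)

class Sequencer:
    def __init__(self):
        self.pb0 = 0  # HIGH -> Q1 on -> Q2 on -> 5-V rail latched
        self.pb1 = 0  # HIGH -> Q4 on -> Q3 on -> 15-V rail enabled

    def power_up(self):
        """Momentary press applied 5 V; firmware latches it, then 15 V."""
        self.pb0 = 1          # hold the 5-V rail via Q1/Q2
        # ...wait RAIL_DELAY_S here on real hardware...
        self.pb1 = 1          # enable the 15-V rail via Q4/Q3

    def button_low(self, held_seconds):
        """PB2 held LOW past the threshold starts the shutdown sequence."""
        if held_seconds >= LONG_PRESS_S:
            self.pb1 = 0      # drop the 15-V (analog) rail first
            # ...wait RAIL_DELAY_S here on real hardware...
            self.pb0 = 0      # then drop the 5-V (digital) rail

seq = Sequencer()
seq.power_up()
print(seq.pb0, seq.pb1)               # 1 1 -> both rails on
seq.button_low(held_seconds=2.5)
print(seq.pb0, seq.pb1)               # 0 0 -> both rails off
```

A short press (anything under the threshold) leaves both rails latched, which mirrors the circuit's behavior of ignoring brief bounces on PB2.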

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Short push, long push for sequential operation of multiple power supplies appeared first on EDN.

Why power delivery is becoming the limiting factor for AI

Tue, 02/03/2026 - 11:50

The sheer amount of power needed to support the expansion of artificial intelligence (AI) is unprecedented. Goldman Sachs Research suggests that AI alone will drive a 165% increase in data center power demand by 2030. While power demands continue to escalate, delivering power to next-generation AI processors is becoming more difficult.

Today, designers are scaling AI accelerators faster than the power systems that support them. Each new processor generation increases compute density and current demand while decreasing rail voltages and tolerances.

The net result? Power delivery architectures from even five years ago are quickly becoming antiquated. Solutions that once scaled predictably with CPUs and early GPUs are now reaching their physical limits and cannot sustain the industry’s roadmap.

If the industry wants to keep up with the exploding demand for AI, the only way forward is to completely reconsider how we architect power delivery systems.

Conventional lateral power architectures break down

Most AI platforms today still rely on lateral power delivery schemes where designers place power stages at the periphery of the processor and route current across the PCB to reach the load. At modest current levels, this approach works well. At the thousands of amps characteristic of AI workloads, it does not.

As engineers push more current through longer copper traces, distribution losses rise sharply. PCB resistance does not scale down fast enough to offset the increase. Designers therefore lose power to I2R heating before energy ever reaches the die, which forces higher input power and complicates thermal management (Figure 1). As current demands continue to grow, this challenge only compounds.

Figure 1 Conventional lateral power delivery architectures are wasteful of power and area. Source: Empower Semiconductor
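To see why the scaling hurts, consider a quick I²R estimate. The load current and path resistances below are illustrative assumptions, not measurements from the figure:

```python
# Back-of-the-envelope distribution loss: P = I^2 * R. The 1,000-A
# load and both path resistances are illustrative assumptions.

def distribution_loss_w(current_a, path_resistance_ohm):
    return current_a ** 2 * path_resistance_ohm

i_load = 1000.0                                   # amps into the die
lateral_w = distribution_loss_w(i_load, 200e-6)   # ~200-uOhm lateral path
vertical_w = distribution_loss_w(i_load, 50e-6)   # ~50-uOhm shorter path

print(f"lateral path: {lateral_w:.0f} W lost before the die")   # 200 W
print(f"shorter path: {vertical_w:.0f} W lost before the die")  # 50 W
```

Note the quadratic dependence on current: doubling the load current quadruples the loss for the same copper, which is why routing that was adequate for CPUs breaks down at AI-accelerator currents.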

Switching speed exacerbates the problem. Conventional regulators operate in the hundreds of kilohertz range, which requires engineers to use large inductors and bulky power stages. While these components are necessary for reliable operation, they impose placement constraints that keep conversion circuitry far from the processor.

Then, to maintain voltage stability during fast load steps, designers must surround the die with dense capacitor networks that occupy the closest real estate to the power ingress point to the processor: the space directly underneath it on the backside of the board. These constraints lock engineers into architectures that scale inadequately in size, efficiency, and layout flexibility.

Bandwidth, not efficiency, sets the ceiling

Engineers often frame power delivery challenges around efficiency. But, in AI systems, control bandwidth is starting to define the real limit.

When a regulator cannot respond fast enough to sudden load changes, voltage droop follows. To ensure reliable performance, designers raise the rail voltage so that the anticipated droop does not push the processor below its minimum operating voltage. That margin preserves performance, but it wastes power continuously and erodes thermal headroom that could otherwise support higher compute throughput.
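The cost of that static margin is easy to estimate. The rail voltage, load current, and guard-band below are illustrative assumptions:

```python
# Continuous power burned by static droop margin. All numbers are
# illustrative: a 0.75-V core rail at 1,000 A with 30 mV of guard-band.

rail_v = 0.75
margin_v = 0.030     # extra volts added so droop never violates V_min
current_a = 1000.0

wasted_w = margin_v * current_a   # extra volts x amps, all the time
print(f"~{wasted_w:.0f} W per rail spent covering droop")
```

Multiplied across every accelerator in a rack, a few tens of watts of margin per rail adds up quickly, which is why tighter regulation, rather than more margin, is the attractive path.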

Capacitors act as a band-aid for the problem rather than a fix. They serve as local energy reservoirs that mitigate the slow regulator response, but they do so at the cost of space and parasitic complexity. As AI workloads become more dynamic and burst-driven, this trade-off becomes harder to justify, since enormous amounts of capacitance (often tens of millifarads) are required.

Higher control bandwidth changes the relationship and addresses the root-cause. Faster switching allows designers to simultaneously shrink inductors, reduce capacitor dependence, and tighten voltage regulation. At that point, engineers can stop treating power delivery as a static energy problem and start treating it as a high-speed control problem closely tied to signal integrity.

High-frequency conversion reshapes power architecture

Once designers push switching frequencies into the tens or hundreds of megahertz, the geometry of power delivery changes.

For starters, magnetic components shrink dramatically, to the point where engineers can integrate inductors directly into the package or substrate. The same power stages that used to be bulky can now fit into ultra-thin profiles as low as hundreds of microns (µm).

Figure 2 An ultra-high frequency IVR-based PDN results in a near elimination of traditional PCB level bulk capacitors. Source: Empower Semiconductor

At the same time, higher switching frequencies mean control loops can react orders of magnitude faster, achieving nanosecond-scale response times. With such a fast transient response, high-frequency conversion completely removes the need for external capacitor banks, freeing up a significant area on the backside of the board.

Together, these space-saving changes make entirely new architectures possible. With ultra-thin power stages and dramatically reduced peripheral circuitry, engineers no longer need to place power stages beside the processor. Instead, for the first time, they can place them directly underneath it.

Vertical power delivery and system-level impacts

By placing power stages directly beneath the processor, engineers can achieve vertical power-delivery (VPD) architectures with unprecedented technical and economic benefits.

First, VPD shortens the power path, so high current only travels millimeters to reach the load (Figure 3). As power delivery distance drops, parasitic distribution losses fall sharply, often by as much as 3-5x. Lower loss reduces waste heat, which expands the usable thermal envelope of the processor and lowers the burden placed on heatsinks, cold plates, and facility-level cooling infrastructure.

Figure 3 Vertical power delivery unlocks more space and power-efficient power architecture. Source: Empower Semiconductor

At the same time, eliminating large capacitor banks and relocating the complete power stages in their place frees topside board area that designers can repurpose for memory, interconnect, or additional compute resources, thereby increasing performance.

Higher functional density lets engineers extract more compute from the same board footprint, which improves silicon utilization and system-level return on hardware investment. Meanwhile, layout density improves, routing complexity drops, and tighter voltage regulation is achievable.

These combined effects translate directly into usable performance and lower operating cost, or simply put, higher performance-per-watt. Engineers can recover headroom previously consumed by lateral architectures through loss, voltage margining, and cooling overhead. At data-center scale, even incremental gains compound across thousands of processors to save megawatts of power and maximize compute output per rack, per watt, and per dollar.

Hope for the next generation of AI infrastructure

AI roadmaps point toward denser packaging, chiplet-based architectures, and increasing current density. To reach this future, power delivery needs to scale along the same curve as compute.

Architectures built around slow, board-level regulators will struggle to keep up as passive networks grow larger and parasitics dominate behavior. Instead, the future will depend on high-frequency, vertical-power delivery solutions.

Mukund Krishna is senior manager for product marketing at Empower Semiconductor.

Special Section: AI Design

The post Why power delivery is becoming the limiting factor for AI appeared first on EDN.

A hard-life Tile Mate goes under the knife

Mon, 02/02/2026 - 20:34

This engineer was curious to figure out why the Bluetooth tracker for his keys had abruptly gone deceased. Then he remembered a few-year-back mishap…

My various Tile trackers—a Mate attached to my keychain (along with several others hidden in vehicles) and a Slim in my wallet—have “saved my bacon” multiple times over my years of using them, helping me locate misplaced important items.

But they’ve been irritants as well, specifically in relation to the activation buttons and speakers built into them. Press the button, and the device loudly plays a little ditty…by default, it also rings whatever smartphone(s) it’s currently paired with. All of which is OK, I guess, as long as pressing the button was an intentional action.

However, when the keychain and/or wallet are in my pockets, the buttons sometimes get pressed as well…by keys or other objects in my front pocket, credit cards in my wallet, or sometimes just my body in combination with the pants or shorts fabric. That this often happens when I’m unable to easily silence the din (while I’m driving, for example) or at an awkward moment (in the midst of a conversation, for example) is…like I said, irritating.

Silence isn’t always blessed

I eventually figured out how to disable the “Find Your Phone” feature, since I have other ways of determining a misplaced mobile device’s location. So my smartphone doesn’t incessantly ring any more, at least. But the tracker’s own ringtone can’t be disabled, as far as I can tell. And none of the other available options for it are any less annoying than the “Bionic Birdie” default (IMHO):

 

That said, as it turns out, the random activations have at least one unforeseen upside. I realized a while back that I hadn’t heard the tune unintentionally coming from the Tile Mate on my keychain in a while. After an initial sigh of relief, I realized that this likely meant something was wrong. Indeed, in checking the app I saw that the Tile Mate was no longer found.

My first thought (reasonable, I hope you’ll agree) was that I had a dead CR1632 battery on my hands. But to the best of my recollection, I hadn’t gotten the preparatory “low battery” notification beforehand. Indeed, when I pulled the coin cell out of the device and connected it to my multimeter’s leads, it still read a reasonable approximation of the original 3V level. And in fact, when I then dropped the battery into another Tile Mate, it worked fine.

A rough-and-tumble past

So, something inside the tracker had presumably died instead. I’d actually torn down a same-model-year (2020) Tile Mate several years back, that one brand new, so I thought it’d be fun to take this one apart, too, to see if I could discern the failure mechanism via a visual comparison with the earlier device.

At this point, I need to confess to a bout of apparent “senioritis”. This latest Tile Mate teardown candidate has been sitting on my bookshelf, queued up for attention for a number of months now. But it wasn’t until I grabbed it a couple of days ago, in preparation for the dissection, that I remembered/realized what had probably initiated its eventual demise.

Nearly four years back, I documented this very same Tile Mate’s inadvertent travel through the bowels of my snowblower, along with its subsequent ejection, deposit in a pile of moist snow, and overnight slumber outside to the side of my driveway. The Tile Mate had seemingly survived intact, as did my keys. My Volvo fob, on the other hand, wasn’t so lucky.

Fast-forward to today, and the Tile Mate (as usual, and as with successive photos, accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes) still looks reasonably robust, at least from the front:

Compromised environmental barriers

Turn it around, on the other hand…see that chip in the case above the battery compartment lid? I’d admittedly not noticed that now-missing piece of plastic before:

Arguably, at least theoretically, the lid’s flipside gasket should still preclude moisture intrusion:

But as I started to separate the two case halves:

I also noticed cracks at both battery compartment ends:

Again, they’re limited to the battery area, not intruding into the glue-reinforced main inner compartment where the PCB is located. But still…

And what’s with that additional sliver of grey plastic that got ejected during the separation?

As you may have already figured out, it originated at the keyring “hole”:

After it initially cracked (again, presumably as a result of the early-2022 snowblower debacle) it remained in place, since the two case halves were still attached. But the resultant fracture provided yet another environmental moisture/dirt/etc. intrusion point, albeit once again still seemingly counteracted by the internal glue barrier (perhaps explaining why it impressively kept working for four more years).

Here’s a reenactment of what the tracker would have looked like if the piece had completely fallen out back then:

See, it fits perfectly!

Non-obvious defects (at least to my eyes)

Here’s what this device’s PCB topside looks like, flush with test points:

Compared to its brand-new, same-model-year predecessor, which I tore down nearly five years ago:

Same goes for this device’s PCB underside, notably showcasing the Nordic Semiconductor nRF52810 Bluetooth 5.2/BLE control SoC, based on an Arm Cortex-M4, and the associated PCB-embedded antenna along one corner:

versus the pristine one I’d dissected previously:

I don’t see a blatant failure point. Do you? I’m therefore guessing that moisture eventually worked its way inside and invisibly did its damage to a component (or few). As always, I welcome your theories (and/or other thoughts) in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A hard-life Tile Mate goes under the knife appeared first on EDN.

Bridging the gap: Being an AI developer in a firmware world

Mon, 02/02/2026 - 14:14

AI model developers—those who create neural networks to power AI features—are a different breed. They think in terms of latent spaces, embeddings, and loss functions. Their tools of the trade are Python, NumPy, and AI frameworks, and the fruit of their efforts is an operation graph capable of learning how to transform an input into an insight.

A typical AI developer spends months, if not years, without ever considering how memory is allocated, whether a loop fits in a cache line, or loops at all. Such concerns are the domain of software engineers and kernel developers. AI developers generally don’t think about memory footprints, execution times, or energy consumption. Instead, they focus, correctly, on one main goal: ensuring the AI model accurately derives the desired insights from the available data.

This division of labor functions well in the cloud AI space, where machine learning and inference utilize the same frameworks, hardware, storage, and tools. If an AI developer can run one instance of their model, scaling it to millions of instances becomes a matter of MLOps (and money, of course).

 

Firmware in edge AI

In the edge AI domain, especially in the embedded AI space, AI developers have no such luxury. Edge AI models are highly constrained by memory, latency, and power. If a cloud AI developer runs up against these constraints, it’s a matter of cost: they can always throw more servers into the pool. In edge AI, these constraints are existential. If the model doesn’t meet them, it isn’t viable.

Figure 1 Edge AI developers must be keenly aware of firmware-related constraints such as memory space and CPU cycles. Source: Ambiq

Edge AI developers must, therefore, be firmware-adjacent: keenly aware of how much memory their model needs, how many CPU cycles it uses, how quickly it must produce a result, and how much energy it uses. Such questions are usually the domain of firmware engineers, who are known to argue over mega-cycles-per-second (MCPS) budgets, tightly coupled memory (TCM) share, and milliwatts of battery use.

For the AI developer, figuring out the answer to these questions isn’t a simple process; they must convert their Python-based TensorFlow (or PyTorch) model into firmware, flash it onto an embedded device, and then measure its latency, memory requirements, CPU usage, and energy consumption. With this often-overwhelming amount of data, they then modify their model and try again.

Since much of this process requires firmware expertise, the development cycle usually involves the firmware team, and a lot of tossing balls over fences, and all that leads to slow iteration.

In tech, slow iteration is a bad thing.

Edge AI development tools

Fortunately, all these steps can be automated. With the right tools, a candidate model can be converted into firmware, flashed onto a development board, profiled and characterized, and the results analyzed in a matter of minutes, all while reducing or eliminating the need to involve the firmware folks.

Take the case of Ambiq’s neuralSPOT AutoDeploy, a tool that takes a TensorFlow Lite model, a widely used standard format for embedded AI, converts it into firmware, fine-tunes that firmware, thoroughly characterizes the performance on real hardware (down to the microscopic detail an AI developer finds useful), compares the output of the firmware model to the Python implementation, and measures latency and power for a variety of AI runtime engines. All automatically, and all in the time it takes to fetch a cup of coffee.

Figure 2 AutoDeploy speeds up the AI/embedded iteration cycle by automating most of the tedious bits. Source: Ambiq

By dramatically shortening the optimization loop, AI development is accelerated. Less time is spent on the mechanics, and more time can be spent getting the model right, making it faster, making it smaller, and making it more efficient.

A recent experience highlights how effective this can be: one of our AI developers was working on a speech synthesis model. The results sounded natural and pleasing, and the model ran smoothly on a laptop. However, when the developer used AutoDeploy to profile the model, he discovered it took two minutes to synthesize just 3 seconds of speech—so slow that he initially thought the model had crashed.

A quick look at the profile data showed that all that time was spent on just two operations—specifically, transpose convolutions—out of the 60 or so operations the model used. These two operations were not optimized for the 16-bit integer numeric format required by the model, so they defaulted to a slower, reference version of the code.

The AI developer had two options: either avoid using those operations or optimize the kernel. Ultimately, he opted for both; he rewrote the kernel to use other equivalent operations and asked Ambiq’s kernel team to create an optimized kernel for future runs. All of this was accomplished in about an hour, instead of the week it would normally take.
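The workflow in that anecdote (profile per operation, sort by time, attack the worst offenders) is easy to sketch. The snippet below is a generic illustration, not AutoDeploy’s actual interface, and the op table is hypothetical:

```python
import time

# Generic per-operation profiler: run each op once, sort worst-first.
# The ops below are hypothetical stand-ins; a real run would time the
# model's actual kernels on the target hardware.

def profile_ops(ops):
    """Return (name, seconds) pairs sorted slowest-first."""
    results = []
    for name, fn in ops:
        start = time.perf_counter()
        fn()
        results.append((name, time.perf_counter() - start))
    return sorted(results, key=lambda r: r[1], reverse=True)

# Two unoptimized kernels dominating the budget, as in the anecdote.
ops = [
    ("conv_slow_1", lambda: sum(i * i for i in range(300_000))),
    ("conv_slow_2", lambda: sum(i * i for i in range(300_000))),
    ("dense_fast",  lambda: sum(range(1_000))),
]

for name, seconds in profile_ops(ops):
    print(f"{name:12s} {seconds * 1e3:8.3f} ms")
```

Sorting slowest-first puts the optimization targets at the top of the report, which is exactly how the two offending convolutions stood out in the profile data.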

Edge AI, especially embedded AI, faces its own unique challenges. Bridging the gap between AI developers and firmware engineers is one of those challenges, but it’s a vital one. Here, edge AI system-on-chip (SoC) providers play an essential role by developing tools that connect these two worlds for their customers and partners—making AI development smooth and effortless.

Scott Hanson, founder and CTO of Ambiq, is an expert in ultra-low energy and variation-tolerant circuits.

Special Section: AI Design

The post Bridging the gap: Being an AI developer in a firmware world appeared first on EDN.

Understanding remote sense in today’s power supplies

Mon, 02/02/2026 - 10:16

In today’s power-supply designs, even small wiring and connector resistances can distort the voltage that actually reaches the load. As systems push tighter tolerances and higher currents, these drops become harder to ignore.

Remote sense provides a straightforward way to correct them by letting the supply monitor the voltage at the load itself and adjust its output accordingly. Understanding how this mechanism works—and how to apply it properly—is essential for maintaining stable, accurate rails in modern designs.

Local sense vs remote sense: Where you measure matters

Most power supplies regulate their output using local sense—monitoring voltage at the supply’s own output terminals. This works fine in ideal conditions, but in real systems, the path from supply to load includes resistance from wires, connectors, and circuit-board traces. As current increases, even small resistances can cause significant voltage drop, meaning the load receives less than intended.

Remote sense solves this by relocating the feedback point to the load itself. Instead of trusting the voltage at the supply’s output, it uses a separate pair of sense wires to measure the voltage at the load terminals. The supply then adjusts its output to compensate for any drop along the way, ensuring the load sees the correct voltage—even under dynamic or high-current conditions.

This simple shift in measurement point can dramatically improve regulation accuracy, especially in systems with long cables, high currents, or sensitive loads. Many benchtop and lab-grade power supplies now include this feature, often with a front-panel or software-selectable option to toggle between local and remote sense. When testing precision circuits or powering remote loads, enabling remote sense can make all the difference.

Figure 1 Simplified schematic illustrates a remote-sense setup with external output and sense wires. Source: Author
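The arithmetic behind the correction in Figure 1 is straightforward. The lead resistance and load current below are illustrative assumptions:

```python
# Load voltage with and without remote sense. With local sense, the
# supply regulates at its own terminals, so the load sees the set
# voltage minus the wiring drop; with remote sense, the feedback loop
# regulates at the load itself. Values are illustrative.

def load_voltage(v_set, i_load_a, r_wire_ohm, remote_sense):
    if remote_sense:
        return v_set                       # drop is compensated
    return v_set - i_load_a * r_wire_ohm   # drop subtracts from the rail

v_set, i_load, r_wire = 5.0, 3.0, 0.05     # 50 mOhm round-trip lead R

print(load_voltage(v_set, i_load, r_wire, remote_sense=False))  # ~4.85 V
print(load_voltage(v_set, i_load, r_wire, remote_sense=True))   # 5.0 V
```

A 3% error on a 5-V rail from just 50 mΩ of leads and connectors shows why the measurement point matters at higher currents.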

As a sidenote on what local sense really does, it seems many benchtop power supplies now include a simple switch—or sometimes local-sense jumpers—to select between local and remote sense. In local-sense mode, the supply regulates using the voltage at its own output terminals.

Switching to remote sense hands regulation to the separate sense leads, allowing the supply to track the voltage at the load instead. This selectable mechanism lets you match the regulation method to the setup—local sense for short leads and quick tests and remote sense when wiring losses matter.

Figure 2 Wiring diagram shows a power supply with local-sense jumpers installed. Source: Author

Put simply, for a local-sense configuration, you install the local-sense jumpers so that the Sense + and Sense – terminals are tied directly to the corresponding + and – output terminals on the power supply’s output connector. For a remote-sense configuration, all local sense jumpers are removed, and the Sense + and Sense – terminals are routed externally to the matching + and – points at the load or device under test (DUT).

Note at this point that power supplies with a local/remote sense selector switch don’t require separate local sense jumpers. That is, power supplies equipped with a physical or electronic local/remote sense switch (or a digital configuration setting) utilize internal circuitry to bridge the sense lines to the output terminals. This eliminates the need for the external metal jumpers or wire loops typically found on the barrier strips of older or simpler power supplies.

4-wire sensing: More sensible pointers on remote sense

To start this section with a cautionary note: always verify the selector switch position and all sensing connections before enabling the output. Setting the switch to Remote without sense wires attached can cause the feedback loop to detect zero voltage and attempt to compensate. This often forces the power supply to its maximum voltage, potentially damaging your equipment even if physical jumpers are absent.

Furthermore, any noise captured by the sense leads will be reflected at the output terminals, potentially degrading load regulation. To minimize electromagnetic interference (EMI) from external sources, use twisted-pair wiring or ribbon cables for the sense connections.

Because these high-impedance leads carry negligible current, thin-gauge wire is sufficient for this purpose. In high-noise environments, shielded cabling may be necessary; if used, ensure the shield is grounded at the power supply end only and never utilized as a current-carrying sensing conductor.

As a quick aside, it appears that many power supplies now implement some form of smart sense detection as a fail-safe. Since a floating sense connection can create a hazardous open-loop state, these systems protect the hardware by shutting down if the leads are disconnected—whether that happens during live use or at initial startup.

In practice, many modern programmable power supplies use auto-sense technology to monitor sense terminals and automatically engage remote sensing when external leads are detected. To ensure stability, these units include internal protection resistors—often called fallback resistors—connecting the output and sense terminals.

These resistors provide a secondary feedback path that allows the supply to default safely to local sensing if the leads are missing or accidentally disconnected. This hardware redundancy prevents a dangerous open-loop overvoltage condition, protecting the load from voltage excursions caused by wiring failure or human error.
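To see why a fallback resistor keeps the loop safe at negligible cost in accuracy, consider a rough single-sided sketch. The resistor value, sense-lead resistance, and lead drop below are assumed purely for illustration:

```python
# Rough sketch of the residual sensing error introduced by a
# fallback resistor between the output and sense terminals.
# All component values are hypothetical.

R_FALLBACK = 10.0    # ohms, internal output-to-sense resistor
R_SENSE_LEAD = 0.1   # ohms, resistance of the external sense lead
LEAD_DROP = 0.5      # V dropped across the power lead at full load

# With the sense lead connected, the fallback resistor carries only
# the small current needed to bridge the output-to-load difference.
# That current flows through the sense-lead resistance and slightly
# offsets the voltage seen by the feedback amplifier.
i_fallback = LEAD_DROP / (R_FALLBACK + R_SENSE_LEAD)
sense_error = i_fallback * R_SENSE_LEAD   # ~5 mV, vs. 500 mV uncorrected

print(f"Residual sense error: {sense_error * 1e3:.1f} mV")

# With the sense lead disconnected, the fallback resistor becomes the
# only feedback path, so the supply reverts to regulating at its own
# output terminals instead of running open-loop.
```

A few millivolts of residual error is a small price for guaranteeing the loop never opens completely.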

On a workbench scattered with piles of discrete electronic components, it is both instructive and rewarding to attempt the design of an entry-level remote-sense power supply.

Experimenting with various operational amplifier configurations—specifically differential and error amplifier circuits—alongside voltage references demonstrates how feedback loops maintain precise regulation under dynamic loads.

Such a hands-on approach not only highlights the critical aspects of stability and compensation but also provides valuable insight into the trade-offs between component selection, circuit topology, and overall performance. These finer points are intentionally left for the reader to explore.

Virtual remote sense in practice

Before we break for coffee, let us touch on virtual remote sense (VRS). This clever technique emulates the benefits of true remote sensing without the extra wiring, helping designers maintain regulation accuracy while simplifying layouts.

Several well-known ICs in Analog Devices’ portfolio—originally developed by Linear Technology—implement VRS or related drop-compensation schemes: the LT4180, LT8697, and LT6110 are prime examples. Each integrates features that compensate for voltage drops across traces, cables, and connectors, ensuring stable supply rails even in demanding applications.

Because these devices employ different methods to achieve VRS, a thorough review of their datasheets is strongly recommended to understand the nuances and select the right fit for your design. Exploring these solutions hands-on could be the key to unlocking cleaner, more reliable power delivery in your next project.
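One common approach behind such schemes—used, for example, in cable-drop compensators—is to estimate the wiring drop from the measured load current and raise the regulation setpoint accordingly, rather than sensing at the load directly. A minimal sketch of the idea, with an assumed (pre-characterized) cable resistance:

```python
# Sketch of the virtual-remote-sense idea: no sense wires; instead,
# estimate the cable drop from measured current and compensate.
# Values are illustrative, not from any specific IC datasheet.

R_CABLE_EST = 0.05   # ohms, assumed characterized cable resistance
V_TARGET = 5.0       # desired voltage at the load, V

def compensated_setpoint(i_load: float) -> float:
    """Raise the regulation point by the estimated cable drop."""
    return V_TARGET + i_load * R_CABLE_EST

# At 4 A, the supply regulates its terminals to 5.2 V so the load
# still sees roughly 5.0 V despite the cable drop.
print(compensated_setpoint(4.0))
```

The accuracy of this open-loop correction depends entirely on how well the cable resistance is known, which is why the actual ICs use more sophisticated methods—another reason to study the datasheets.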

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Understanding remote sense in today’s power supplies appeared first on EDN.

Touch ICs scale across automotive display sizes

Fri, 01/30/2026 - 20:31

Two touchscreen controllers join Microchip’s maXTouch M1 family, expanding support for automotive displays over a wider range of form factors. The ATMXT3072M1-HC and ATMXT288M1 cover free-form widescreen displays up to 42 in., as well as compact screens in the 2- to 5-in. range. Both devices are compatible with display technologies such as OLED and microLED.

The AEC-Q100-qualified controllers leverage Smart Mutual acquisition technology to boost SNR by up to 15 dB compared to previous generations. They deliver reliable touch detection even for on-cell OLEDs, where embedded touch electrodes are subjected to high capacitive loads and increased noise coupling.

The ATMXT3072M1-HC targets large, continuous touch sensor designs that span both the cluster and center information display, enabling a single hardware design for left-hand and right-hand drive vehicles. For smaller screens, the ATMXT288M1 is available in a TFBGA60 package, reducing PCB area by 20% compared to the previous smallest automotive-qualified maXTouch product.

For pricing and sample orders, contact a Microchip sales representative or authorized dealer.

ATMXT3072M1-HC product page 

ATMXT288M1 product page 

Microchip Technology 

The post Touch ICs scale across automotive display sizes appeared first on EDN.

Keysight automates complex coexistence testing

Fri, 01/30/2026 - 20:31

Keysight’s Wireless Coexistence Test Solution (WCTS) is a scalable platform for validating wireless device performance in crowded RF environments. This automated, standards-aligned approach reduces manual setup, improves test repeatability, and enables earlier identification of coexistence risks during development.

To replicate real-world RF conditions, WCTS integrates a wideband vector signal generator. It covers 9 kHz to 8.5 GHz—scalable to 110 GHz—with modulation bandwidths up to 250 MHz (expandable to 2.5 GHz). A single RF port can generate up to eight virtual signals, enabling complex interference scenarios without additional hardware. Nearly 100 predefined, ANSI C63.27-compliant test scenarios are included, covering all three coexistence tiers.

Built on OpenTAP, an open-source, cross-platform test sequencer, WCTS delivers scalable and configurable testing through a user-friendly GUI and open architecture. Engineers can upload custom waveforms and validate test plans offline using simulation mode, accelerating test development and reducing lab time.

More information about the Keysight Wireless Coexistence Test Solution can be found here.

Keysight Technologies 

The post Keysight automates complex coexistence testing appeared first on EDN.

600-V MOSFET enables efficient, reliable power conversion

Fri, 01/30/2026 - 20:31

The first device in AOS’ αMOS E2 high-voltage Super Junction MOSFET platform is the AOTL037V60DE2, a 600-V N-channel MOSFET. It offers high efficiency and power density for mid- to high-power applications such as servers and workstations, telecom rectifiers, solar inverters, motor drives, and other industrial power systems.

Optimized for soft-switching topologies, the AOTL037V60DE2 delivers low switching losses and is well suited for Totem Pole PFC, LLC and PSFB converters, as well as CrCM H-4 and cyclo-inverter applications. The device is available in a TOLL package and features a maximum RDS(on) of 37 mΩ.

AOS engineered the αMOS E2 high-voltage Super Junction MOSFET platform with a robust intrinsic body diode to handle hard commutation events, such as reverse recovery during short-circuits or start-up transients. Evaluations by AOS showed that the body diode can withstand a di/dt of 1300 A/µs under specific forward current conditions at a junction temperature of 150°C. Testing also confirmed strong unclamped inductive switching (UIS) avalanche capability and a long short-circuit withstand time (SCWT), supporting reliable operation under abnormal conditions.

The AOTL037V60DE2 is available in production quantities at a unit price of $5.58 for 1000-piece orders.

AOTL037V60DE2 product page

Alpha & Omega Semiconductor 

The post 600-V MOSFET enables efficient, reliable power conversion appeared first on EDN.

Stable LDOs use small output caps

Fri, 01/30/2026 - 20:31

Based on Rohm’s Nano Cap ultra-stable control technology, the BD9xxN5 series of LDO regulator ICs delivers 500 mA of output current. The series is intended for 12-V and 24-V primary power supply applications in automotive, industrial, and communication systems.

The BD9xxN5 series builds on the earlier BD9xxN1 series, increasing the output current from 150 mA to 500 mA while maintaining stability with small output capacitors. The ICs hold output voltage deviation to about 250 mV for load current changes from 1 mA to 500 mA within 1 µs. Using a typical output capacitance of 470 nF, they enable compact designs and flexible component selection.

All six new variants in the BD9xxN5 series are AEC-Q100 qualified and operate over a temperature range of –40°C to +125°C. Each device provides a single output of 3.3 V, 5 V, or an adjustable voltage from 1 V to 18 V, accurate to within ±2.0%. The absolute maximum input voltage rating is 45 V.

The BD9xxN5 LDO regulators are available now from Rohm’s authorized distributors. Datasheets for each variant can be accessed here.

Rohm Semiconductor 

The post Stable LDOs use small-output caps appeared first on EDN.

1200-V SiC modules enable direct upgrades

Fri, 01/30/2026 - 20:31

Five 1200-V SiC power modules in SOT-227 packages from Vishay serve as drop-in replacements for competing solutions. Based on the company’s latest generation of SiC MOSFETs, the modules deliver higher efficiency in medium- to high-frequency automotive, energy, industrial, and telecom applications.

The VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 power modules are available in single-switch and low-side chopper configurations. Each module’s SiC MOSFET integrates a soft body diode with low reverse recovery. This reduces switching losses and improves efficiency in solar inverters and EV chargers, as well as server, telecom, and industrial power supplies.

The modules support drain currents from 50 A to 200 A. The VS-SF50LA120 is a 50-A low-side chopper with 43-mΩ RDS(on), while the VS-SF50SA120 is a 50-A single-switch device rated at 47 mΩ. Single-switch options scale to 100 A, 150 A, and 200 A with RDS(on) values of 23 mΩ, 16.8 mΩ, and 12.1 mΩ, respectively.

Samples and production quantities of the VS-SF50LA120, VS-SF50SA120, VS-SF100SA120, VS-SF150SA120, and VS-SF200SA120 are available now, with lead times of 13 weeks.

Vishay Intertechnology 

The post 1200-V SiC modules enable direct upgrades appeared first on EDN.

Chandra X-Ray Mirror

Fri, 01/30/2026 - 15:00

There is a Neil deGrasse Tyson video covering the topic of the Chandra X-ray Observatory. This essay is in part derived from that video. I suggest that you view the discussion. It will be sixty-five minutes well spent.

This device doesn’t look anything like a planar mirror because X-ray photons cannot be reflected by any known surface in the way you see your reflection above your bathroom sink.

If you aim a stream of X-ray photons directly toward any particular surface, either a silvered mirror or some kind of intended lens, those photons will either pass right on through (which is what your medical X-rays do) or they will be absorbed. You will not be able to alter the trajectory of an X-ray photon stream, at least not with any device like that.

However, X-ray photons can be grazed off a reflective surface to achieve a slight trajectory change if their initial angle of approach to the mirror surface is kept very small. With the surface of the Chandra X-ray mirror made extremely smooth, almost down to the atomic level, repeated grazing permits X-ray focus to be achieved. This is the operating principle of the Chandra X-ray Telescope’s mirror, as shown in Figure 1.

Figure 1 The Chandra X-Ray Observatory mirrors showing a perspective view, a cut-away view, and X-ray photon trajectories. (Source: StarTalk Podcast)
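The geometry behind this is simple: a specular reflection at grazing angle θ turns a ray’s direction by 2θ, and Wolter-type X-ray optics reflect each ray twice. A back-of-the-envelope sketch, using an assumed grazing angle and focal length of the same order as Chandra’s (not its actual figures), shows how tiny turns still produce a focus:

```python
# Back-of-the-envelope look at grazing-incidence focusing.
# The grazing angle and focal length are assumed values, chosen
# to be of the same order as Chandra's, not its actual specs.
import math

GRAZE_ANGLE_DEG = 0.8   # assumed grazing angle per reflection
REFLECTIONS = 2         # Wolter-type optics reflect each ray twice

# Each specular bounce at grazing angle theta deflects the ray by
# 2*theta, so two bounces turn it by 4*theta in total.
deflection_deg = REFLECTIONS * 2 * GRAZE_ANGLE_DEG   # 3.2 degrees

# Over a long focal length, even this slight turn brings an incoming
# ray at the mirror's radius down to the focal point on the axis.
FOCAL_LENGTH_M = 10.0
mirror_radius_m = FOCAL_LENGTH_M * math.tan(math.radians(deflection_deg))

print(f"Total deflection: {deflection_deg:.1f} deg")
print(f"Mirror radius for a {FOCAL_LENGTH_M:.0f} m focal length: "
      f"{mirror_radius_m:.2f} m")
```

The result—a mirror radius of roughly half a meter for a ten-meter focal length—illustrates why grazing-incidence telescopes are long, narrow tubes of nested, nearly cylindrical shells rather than wide dishes.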

The Chandra Observatory was launched on July 23, 1999, and has been doing great things ever since. Regrettably, however, its continued operation is in some jeopardy. Please see the following Google search result.

Figure 2 Google search result of the Chandra Telescope showing science funding budget cuts for the Chandra X-ray Observatory going from $69 million to zero. (Source: Google, 2026)

I’m keeping my fingers crossed.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Chandra X-Ray Mirror appeared first on EDN.
