EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/

A battery charger that loudly hums: Dump it or just make it dumb?

Mon, 03/16/2026 - 19:37

An archaic DieHard device has seemingly died hard; is hacking it to resurrect a portion of its original function a worthwhile endeavor?

A decade-plus ago, shortly after moving (part-time, at the time) to Colorado, I came across a smoking (no pun intended…keep reading) deal at Sears: a 12V vehicle battery charger supporting both standard SLA (sealed lead acid) and AGM (absorbed glass mat) cells, along with 2A, 10A and 50A (!!!) charging current options, for $32.99. I bought two, one for me and the other for my then-girlfriend (and now-wife), since we had separate residences at the time.

I’ve held onto both—in spite of the fact that I also now own several newer microprocessor-controlled (versus this transformer-based model) chargers, not only significantly more compact but also offering enhanced features such as desulfation support—primarily due to the 50A jump-start capability that only the old-school DieHard charger seemingly delivers.

Geriatric degradation

When I fired one of them up a few months back after not using either of them for a while, though, I noticed that it was making a loud humming sound—incrementally louder at the 2A, then 10A, and finally 50A settings, as I’d recalled from past use—but much louder at each setting than I remembered. To confirm, I pulled the other charger out of its box; it also hummed, but at the noticeably lower din I’d recalled. Plus, the first charger didn’t seem to be doing anything charging-wise, whereas the second still seemingly worked fine.

Here’s the first (loud humming) charger, which I re-hooked up just yesterday to my 2001 Volkswagen Eurovan Camper (which uses a standard SLA, not AGM, battery), at the 2A setting:

10A setting:

and 50A setting:

The gauge readings don’t seem to make sense in any of these cases. As background, I top off the charge (normally using one of my more modern chargers) on the battery in the in-storage van once a month, at the beginning of the month. I took those photos a bit more than halfway through the month, after a small amount of leakage discharge had inevitably occurred (less than at the end of the month, but still not nothing). So the full-charge indication seemingly doesn’t reflect reality. Compared to them, the 2A- and 10A-setting displays when using the second (lower humming) charger are more in line with my expectations:

as is the second charger’s 50A-setting display, which I’ve shot as a video because this time, unlike previously, the LED is rapid-blinking as expected:

A 0V output isn’t always bad news

Just before taking the preceding photos yesterday, I’d actually begun my investigation by hooking both chargers up to my multimeter to see what they were outputting. Here’s the first (loud humming) charger at its 2A, 10A, and 50A settings, first configured for use with an AGM battery:

and then set for a standard SLA battery:

The output levels were, I initially (albeit incorrectly) concluded, in the ballpark of what one would expect for a 12V battery charging target, although perhaps a bit low. Now look at what happened when I hooked the second (lower humming) charger up, again at its 2A, 10A and 50A settings, first configured for use with an AGM battery and then a standard SLA battery:

I’ve saved you from looking at six consecutive images of the multimeter displaying the exact same thing: 0V. This initial outcome actually had me wondering whether the second (lower humming) charger was the one that had “gone south”, until I did a bit of online research and learned that this behavior is to be expected. Unless the charger detects that it’s connected to a correct-polarity battery that isn’t already drained (hold that thought), it will disable its output, among other reasons, to prevent sparking in the presence of hydrogen and other off-gassing.

Some amount of transformer hum is to be expected, of course, as many folks reading this already realize; the root-cause phenomenon is known as magnetostriction and results in a generated tone at twice the mains AC frequency (120 Hz in the U.S., for example):

Additional hum sources, quoting Wikipedia, are “stray magnetic fields causing the enclosure and accessories to vibrate.” And it’s also normal for the hum volume to increase somewhat under higher load. Abnormally loud hum and other noise, however, are the result of other, degradation-induced factors, such as progressive disintegration of the transformer’s core adhesive, resulting in separation of the laminated layers, or a rattle caused by loose component-mounting bolts.
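
For readers who want the numbers, here’s a trivial sketch (Python, purely illustrative) of where that hum fundamental and its audible harmonics land for 60 Hz and 50 Hz mains:

def hum_frequencies(mains_hz, num_harmonics=4):
    """Magnetostriction hum sits at twice the mains frequency; return it plus harmonics, in Hz."""
    fundamental = 2 * mains_hz
    return [fundamental * n for n in range(1, num_harmonics + 1)]

print(hum_frequencies(60))  # U.S. mains: [120, 240, 360, 480]
print(hum_frequencies(50))  # 50 Hz regions: [100, 200, 300, 400]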

(Sorta-) twin sons of different mothers

At this point, I’ll point out something else interesting (at least to me) that my research uncovered: there were (at least) two different internal designs that reached production for this particular DieHard charger. It’s the model 71222; as you can see from this closeup of the outer box, mine’s specifically a model 28.71222 (here’s a link to the user manual):

But in searching around, I also came across references to another version, the model 200.71222, including another user manual link (this time even including a parts list and wiring diagram!). The two variants seem functionally identical from a high-level description standpoint and look similar from the outside, too, aside from a multicolor front panel motif in the model 200.71222:

versus my more monochrome model 28.71222. But the insides are a different matter…

At this point, I’ll point out another “information” (I’m using the term somewhat loosely) source that I came across during my research: this video:

Bonus points to Jason Hemphill, the video creator, for knowing (for example) the difference between the transformer’s primary and secondary sides, as well as for (sorta) explaining the purpose of the two diodes connected to the transformer’s center-tapped secondary. But when, in pointing out what he called the “little smart board”, he voiced the following elucidation:

These wires over here…they’re just control…they don’t do anything…

I admittedly started shaking my head. And when, with the charger still powered up, he then yanked the “little smart board’s” fourth (black) wire out of what it was plugged into at its other end (item 7, the 35A circuit breaker, if you’ve already cross-referenced the parts list and wiring diagram in the user manual I pointed out to you earlier), I about fell out of my chair. And then I realized that although his charger was also a DieHard model 71222, it didn’t look like mine on the inside; I hadn’t yet noticed the front-panel motif variance between the two.

Looking “under the hood”

At this point, I’ll transition to the teardown portion of my write-up, before returning and concluding. Beginning with the obligatory outer box shots:

Can’t forget this all-important one…😂

I next opened it up:

and then pulled out the contents (I later found the paper user manual in my filing cabinet):

Convenient carry handle:

Only after connecting the charger to the battery and selecting the desired settings should you, and I quote, “Plug the charger into a live AC power outlet”. Further, “Unplug the AC cord before disconnecting the battery clips”…as well as prior to unplugging internal cabling, yes?

Back off my soapbox

and back to the backside to uncoil the power cord:

Don’t worry, I won’t ascend the soapbox again. That said…

And now to dive inside. You may have noticed the four screw heads on the sides, two per side. Guess what comes next?

That got me partway there:

Oh yeah, there’s another screw head on the underside:

But grandma, what a big transformer you have!

At this point, I was still clinging to the delusion that this charger might be working (I hadn’t yet found Jason Hemphill’s video), so I didn’t disassemble it further. Still, I hope the photos of the internals of my model 28.71222 will be educational for you, not only standalone but also in comparison to Hemphill’s presumed model 200.71222. Here, first off, is the rear-located internal PCB, both much larger than the one in Hemphill’s charger and with an integrated circuit breaker (more accurately stated: fuse pair):

Check out the sizeable SCRs (silicon-controlled rectifiers) and discrete transistors bolted to metal heat-transfer plates on either side of the PCB!

An inner view of the front panel, with the charging current switch at lower left, the gauge at upper right and the AGM-vs-standard SLA switch below it:

And, last but definitely not least, the predominant contributor to the unit’s ~11 lb weight, the transformer. Here’s the primary winding:

And the secondary:

and finally, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, perspectives of the top (primary at left, secondary at right):

and one side (ditto):

Old-school pros, cons and conclusions

Note that the output voltage Jason Hemphill was getting out of his charger prior to his “hack” is a close approximation of what I’m seeing with mine at its 50A setting. He indicated in his video that he exclusively uses his “fixed” unit at 50A, so I’m assuming his entire video was also shot with it configured that way (I couldn’t find a sufficiently clear video frame of the front panel to confirm). And by the way, in scrolling through the comments and his responses, I realized I owed him more credit than the little I’d initially allocated (with minor grammar tweaks by yours truly):

You are exactly right. It is not fixed. And you’re also right; it’s likely to be a 10-cent transistor. But most people will not be able to fix the transistor issue. They won’t spend the time to find it, order it and replace it. The solution I’m offering is to turn it into an old school charger. It takes out the safety technology. This solution is an option for people who are old school and are used to working with things that way. As I stated in the video, this isn’t an option to hook up to a battery and walk away. So, if you’re someone who can’t hook up a battery right or doesn’t understand the idea of overcharging and needs idiot-proof technology to do that for you, this isn’t your option: go buy a new one. But if you’re old school, this will do the job.

Further perusing the 100+ comments on Jason Hemphill’s video (resulting from 115,000+ views to date!) was not only educational but also entertaining. I learned, for example, that the DieHard model 200.71222 is internally identical to the Schumacher Electric (the original developer, I’m assuming) SE5212A charger. No idea who originally developed my DieHard model 28.71222, however. And even if I did, I’m not going to try to resurrect this one, no matter that plenty of other folks seemingly prefer fully manual units.

Sure, by bypassing the “little smart board,” the now-manual charger might attempt to resurrect a fully drained battery, but my more modern chargers already do the same thing. They, plus the still-working sibling to my malfunctioning model 28.71222, will also automatically shut off at the end of the charging cycle, versus overcharging and potentially ruining the battery (not to mention causing other potential broader problems). And they’ll also save me from calamity should I absentmindedly hook up the charger to the battery with reversed polarity.

Thoughts on the topics discussed and internal circuitry revealed in today’s piece? Let me know in the comments! If you’re interested in inheriting this charger and converting it to a manual version yourself (note that I take no financial or other responsibility for any subsequent calamities), send me an email! And by the way, if you’re interested in finding out more about how car battery testers work, head here!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A battery charger that loudly hums: Dump it or just make it dumb? appeared first on EDN.

From gap to signal: Non-contact capacitive displacement sensors

Mon, 03/16/2026 - 13:21

Capacitive displacement sensors turn tiny gaps into actionable signals. By measuring changes in capacitance as the target moves, these devices deliver precise, non-contact readings of position and motion. Their touch-free nature makes them ideal for fragile surfaces, high-speed machinery, and environments where mechanical wear is unacceptable.

From tuning dials to nanometers: The capacitive lineage

Historically, the lineage from the vintage “gang condenser” to modern capacitive displacement sensors is surprisingly direct. In early radio receivers, variable air capacitors translated a knob twist into resonance tuning by modulating plate overlap. Modern sensors exploit this same fundamental relationship—geometry and permittivity—but invert the objective.

Rather than adjusting capacitance to achieve resonance, they elevate infinitesimal ∆C into the measurement itself, quantifying motion with great fidelity. What was once a utility for frequency selection has become the primary metric of precision measurement, a century-old tuning trick reborn as precision instrumentation.

Figure 1 Rotating plates in a gang condenser modulated capacitance to tune resonance in early radio receivers. Source: Author

Capacitive sensing in everyday tools

A familiar example of this principle at work is the digital caliper. Most mainstream models utilize capacitive linear encoding: as the sliding jaw moves across a scale patterned with fixed conductive tracks, the shifting electrode geometry produces periodic variations in capacitance. The caliper’s onboard electronics digitize these differential phase shifts, translating them into precise position readouts with resolutions typically reaching 0.01 mm.

This method effectively mitigates errors from minor gaps in the slider’s fit. In essence, the tool leverages the same fundamental physics as the gang condenser—the interplay of electrode overlap and dielectric spacing—but adapts that variable capacitance into a robust, high-resolution incremental measurement system.

Figure 2 A teardown reveals the underlying sensing mechanism of a digital caliper. Source: Author

Capacitive displacement: The secret to frictionless precision

When mechanical gears grow too bulky and optical sensors prove too fragile, capacitive displacement sensing steps in. By detecting subtle shifts in an electric field—changes invisible to the eye—these sensors achieve fine accuracy. From high-end CNC machines to scientific instruments, they raise the bar for measurement—precision delivered without the drag of friction.

Capacitive displacement sensors are high-precision, non-contact instruments that measure position or distance by detecting changes in electrical capacitance. The system functions as a parallel plate capacitor, where the sensor probe serves as one conductive plate and the target object acts as the other.

As the gap (dielectric space) between the probe and the target fluctuates, the capacitance shifts in inverse proportion to the distance. By monitoring these minute variations, the sensor provides exceptionally accurate, sub-nanometer resolution measurements without ever making physical contact with the target.
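
To make that inverse relationship concrete, here is a minimal Python sketch assuming an ideal parallel-plate geometry; the probe diameter and gap values are illustrative, not taken from any particular product:

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(gap_m, area_m2, eps_r=1.0):
    """Ideal parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def gap_from_capacitance(c_farads, area_m2, eps_r=1.0):
    """Invert the same relation to recover the gap from a measured capacitance."""
    return EPS0 * eps_r * area_m2 / c_farads

# Illustrative numbers: a 5-mm-diameter probe face and a 100-um nominal air gap.
area = math.pi * (2.5e-3) ** 2
c_nominal = capacitance(100e-6, area)
print(f"C at 100 um gap: {c_nominal * 1e12:.2f} pF")
print(f"Gap recovered from that C: {gap_from_capacitance(c_nominal, area) * 1e6:.1f} um")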

In real-world practice, a capacitive displacement sensor system is not just a single probe but a complete measurement chain that typically includes a sensor head, a controller, a power supply, and cabling. The sensor head (probe) is the capacitive element that interacts with the target surface, while the controller provides excitation, interprets the capacitance changes, and outputs a usable displacement signal.

A power supply—either integrated into the controller or external—ensures stable operation, and shielded cables and connectors maintain signal integrity. For example, systems like the Lion Precision CPL series or Micro-Epsilon capaNCDT sensors use this modular setup: a probe head for sensing, a controller for signal processing, and a power supply to stabilize the system. Some controllers are designed for a single probe input, while others can accommodate multiple probes, enabling multi-point measurements when required.

Figure 3 This capacitive displacement sensor delivers single-channel, noncontact measurement for precision position and displacement applications. Source: Lion Precision

Guard ring and active guarding: Ensuring measurement integrity

On paper, the principle of capacitive displacement measurement relies on the operation of an ideal parallel-plate capacitor. When the distance between the sensor and the measurement object changes, the total capacitance varies accordingly.

If an alternating current of constant frequency and amplitude flows through the sensor capacitor, the resulting alternating voltage becomes directly proportional to the distance to the target (or ground electrode). This variation in distance is detected and processed by the controller, which then outputs a value representing the measured displacement through its designated channels.
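
A short sketch of why that proportionality holds, again under ideal parallel-plate assumptions: driving the probe capacitance with a constant-amplitude AC current yields a voltage magnitude |V| = I / (2*pi*f*C) = I*d / (2*pi*f*eps0*eps_r*A), which is linear in the gap d. The excitation current, frequency, and probe size below are illustrative only:

import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def probe_voltage(gap_m, area_m2, i_rms=10e-6, freq_hz=1e6, eps_r=1.0):
    """Voltage across the probe capacitance for a constant-amplitude AC drive current."""
    c = EPS0 * eps_r * area_m2 / gap_m
    return i_rms / (2 * math.pi * freq_hz * c)

area = math.pi * (2.5e-3) ** 2  # same illustrative 5-mm probe face as before
for gap_um in (50, 100, 200):
    print(f"{gap_um:4d} um gap -> {probe_voltage(gap_um * 1e-6, area):.3f} V rms")
# Note how the output voltage doubles each time the gap doubles.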

However, since the sensor (sensing element) acts as one conductive plate and the target object as the other, accurate measurement requires that the electric field remain confined to the space between them. If the field extends to nearby objects or surfaces, any movement of those items may be misinterpreted as a displacement of the target.

To prevent such interference, a guard ring with active guarding is commonly employed, a technique that ensures the sensing field is restricted to the intended measurement zone, thereby maintaining measurement integrity. In practice, the guard ring—a conductive shield around the sensing element—is energized with an alternating voltage.

Figure 4 The guard ring, energized with an AC voltage, confines the field and ensures accurate sensing. Source: Author

Putting it all together, the capacitive displacement measurement process begins with the sensor generating a controlled electric field between the probe and target, followed by detecting capacitance changes as the gap distance varies, then processing the signal by converting capacitance variation into a proportional voltage output, and finally calculating distance based on the direct correlation between voltage and displacement.

The capacitive displacement sensor circuit integrates several essential elements, including a high-frequency oscillator, capacitance-to-voltage converter, signal conditioning amplifier, guard drive circuitry for noise reduction, a temperature compensation network, and an output linearization circuit.

To ensure accuracy, the guard ring surrounding the sensing element is actively driven at the same potential and phase as the sensor signal, suppressing stray capacitance and preserving uniformity of the electric field.

Wrap-up: Forking up the gaps for refinement

Capacitive displacement sensors are prized first and foremost for precision positioning—keeping machine tools, assemblies, and instruments aligned to exact tolerances. Yet their talent does not stop there. The same principle that tracks motion can also measure thickness, detect vibration, or monitor material expansion.

And while they excel with conductive targets, clever designs enable them to sense non-conductive materials as well, broadening their reach across manufacturing, research, and quality-control applications.

Similarly, capacitive displacement sensors share much with eddy-current sensors; both excel at non-contact measurement and precise positioning. The key difference lies in their physics: one reads electric field shifts, the other tracks magnetic field interactions.

Moving forward, as usual when handling a complex topic, some key pieces may slip through the narrow gaps. Those will be forked up, revisited, and refined. One such area worth expanding later is the role of knob-on-display (KoD), a practical human-machine interface (HMI) element that bridges the gap between tactile mechanical control and dynamic visual feedback.

Interestingly, KoD is often overlooked in broader displacement sensing discussions, despite its sophisticated use of capacitive grids to track angular position. By re-contextualizing the rotary dial as a specialized coordinate-shifting sensor, we move beyond simple HMI aesthetics into the realm of high-reliability, closed-loop feedback systems.

Your insights or questions on KoD, or on any other aspect, are welcome to help sharpen the refinement process.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post From gap to signal: Non-contact capacitive displacement sensors appeared first on EDN.

Balun transformers: Linking balanced to unbalanced

Fri, 03/13/2026 - 14:48

Balun transformers remain indispensable in RF and high-frequency design, serving as the quiet interface between balanced transmission lines and unbalanced circuits. By enabling impedance matching, minimizing signal distortion, and suppressing common-mode noise, they provide the foundation for reliable connectivity in applications ranging from antennas to amplifiers to broadband communication systems.

As wireless technologies push toward higher frequencies and tighter integration, understanding the principles and practical nuances of balun transformers is key to optimizing performance and ensuring design resilience.

The term “balun” itself comes from balanced to unbalanced. While many implementations use transformer coupling, not all baluns are transformer-based—some rely on transmission line techniques. Using “balun transformer” specifies the transformer-type design, distinguishing it from coaxial sleeve or other non-transformer baluns.

 

Historic note: The iconic TV balun adapter

Before digital tuners and streaming boxes took over, this compact 300 Ω to 75 Ω matching transformer was a fixture in analog television setups. Designed to reconcile the impedance and mode mismatch between twin-lead ribbon antennas and coaxial inputs, it featured screw terminals for the antenna wire and a standard coaxial plug for the TV’s antenna input socket.

Connected at the final stage of the antenna lead and plugged directly into the tuner, it quietly performed its dual role—impedance transformation and balanced-to-unbalanced conversion. This ensured that rooftop signals reached living rooms with minimal distortion. In the analog broadcast era, this unassuming adapter was the last link in the RF chain, faithfully bridging generations of antenna technology.

Figure 1 Screwing the 300 Ω ribbon cable into the balun terminals and plugging its coaxial end into the TV’s antenna input socket completes the balanced-to-unbalanced transition. Source: Author
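
As a quick worked example of what that adapter does: an ideal transformer scales impedance by the square of its turns ratio, so matching 300 Ω balanced twin-lead to 75 Ω unbalanced coax needs a 2:1 ratio (a rough sketch that ignores losses and parasitics):

import math

def turns_ratio(z_in_ohms, z_out_ohms):
    """Turns ratio n for an ideal transformer: Z_in / Z_out = n^2, so n = sqrt(Z_in / Z_out)."""
    return math.sqrt(z_in_ohms / z_out_ohms)

n = turns_ratio(300.0, 75.0)
print(f"300 ohm -> 75 ohm: {n:.0f}:1 turns ratio (a 4:1 impedance ratio)")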

Video balun transformers: Bridging coax and twisted pair

Video balun transformers—more commonly referred to simply as video baluns in industry parlance—extend the utility of balun technology beyond RF and audio domains into the realm of video signal transmission. These devices convert unbalanced coaxial signals (such as composite video) into balanced signals suitable for twisted-pair cabling, and vice versa.

This conversion not only reduces susceptibility to electromagnetic interference (EMI) but also enables cost-effective long-distance video distribution using standard Cat5/Cat6 cabling. Passive video baluns rely on transformer coupling to maintain signal integrity without external power, while active baluns incorporate amplification and equalization to support higher resolutions or longer cable runs.

In surveillance and broadcast applications, video baluns have become indispensable for bridging legacy coaxial infrastructure with modern structured cabling, ensuring clean signal delivery and simplified installation.

Figure 2 Video baluns connect coaxial BNC interfaces to twisted-pair cabling and deliver HD CCTV signals over long distances with reduced interference. Source: Author

As a quick aside, it’s worth noting that the K and MP ratings of a video balun both denote its supported resolution class. The MP rating specifies the maximum camera resolution in megapixels, while the K rating expresses the same capability in terms of horizontal pixel count.

In practice, both ratings reflect the balun’s bandwidth and signal-handling capacity for HD CCTV. For example, a 4K balun supports roughly 8 megapixels of resolution, since 3840 × 2160 pixels equals about 8.3MP (8.3 million pixels).
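
A trivial check of that arithmetic, showing how the MP figure falls out of a frame size (purely illustrative):

def megapixels(width_px, height_px):
    """Frame size expressed in megapixels (millions of pixels)."""
    return width_px * height_px / 1e6

for label, (w, h) in {"1080p (2MP class)": (1920, 1080),
                      "4K (8MP class)": (3840, 2160)}.items():
    print(f"{label}: {w} x {h} = {megapixels(w, h):.1f} MP")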

Baluns in practice: Theory meets application

Balun transformers are invaluable not only for converting between balanced and unbalanced signals but also for performing impedance transformations with minimal loss. Unlike LC circuits, many balun designs can operate effectively across very wide frequency ranges.

In RF applications, baluns are commonly used to interface antennas with transmitters and receivers, ensuring that as much power as practically possible is delivered. This overview blends accessible theory—without heavy mathematics—with a few practical pointers and real-world implementations.

Among the fundamental designs, the balun transformer is the most widely recognized. Using magnetic coupling, it converts between balanced and unbalanced signals while providing excellent isolation and impedance matching. Transmission-line baluns achieve balance through carefully arranged lengths of coaxial or twisted-pair lines, making them well-suited for wideband RF applications.

Hybrid baluns combine transformers and transmission-line techniques, offering flexibility across frequency ranges. Together, these basic types form the foundation for more advanced designs, and understanding their principles helps engineers and experimenters select the right balun for applications ranging from antenna systems to CCTV.

In practice, the terms “balun transformer” and “transformer balun” both refer to the same device: a balun realized through transformer coupling. The difference is mostly in emphasis. Balun transformer highlights the function first—balanced-to-unbalanced conversion—while noting that it’s implemented as a transformer.

Transformer balun highlights the construction first, pointing out that it’s a transformer adapted to serve as a balun. Both usages are common, but in technical writing “balun transformer” is often preferred because it stresses the primary role of the device.

A further distinction often made is between voltage baluns and current baluns. A voltage balun enforces equal voltages on the balanced output terminals, which can work well in many cases but may allow unequal currents if the load is not perfectly symmetrical. In contrast, a current balun enforces equal and opposite currents in the balanced lines, often providing better suppression of common-mode currents on antenna feedlines.

Both approaches have their place: voltage baluns are straightforward and widely used, while current baluns are often preferred in RF antenna systems where minimizing feedline radiation and maintaining balance are critical.

Also essential to audio systems, baluns form the core of passive direct injection (DI) boxes. A passive DI employs a transformer—acting as a voltage balun—to convert an unbalanced, high-impedance instrument signal into a balanced, low-impedance output. This conversion is vital for interfacing high-Z sources such as electric guitars with low-Z mixing console inputs over long cable runs.

By enforcing equal and opposite voltages on the balanced lines, the transformer achieves high common-mode rejection, suppressing noise and ensuring transparent signal transfer. This application demonstrates how the balancing principles fundamental to RF and CCTV extend seamlessly into professional audio, underscoring the cross-domain versatility of balun technology.

Figure 3 A passive DI box handles extreme signal levels without introducing any distortion. Source: Radial Engineering
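
To put rough numbers on that conversion, here is a hedged sketch assuming a 12:1 turns ratio, which is typical of passive DI transformers; the 20-kΩ source impedance is likewise an illustrative guitar-pickup value, not the spec of any particular part:

import math

def di_transform(z_source_ohms, turns_ratio):
    """An ideal step-down transformer reflects impedance by the square of the turns
    ratio and drops the signal level by the turns ratio itself."""
    z_reflected = z_source_ohms / turns_ratio ** 2
    level_drop_db = 20 * math.log10(turns_ratio)
    return z_reflected, level_drop_db

z_out, drop_db = di_transform(20_000, 12)
print(f"Reflected source impedance: {z_out:.0f} ohms, level drop: {drop_db:.1f} dB")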

Arguably, instead of diving straight into balun transformer–based RF or video projects, makers may find it easier—and just as rewarding—to begin with a closely related audio build: the passive DI box. Ready-to-use direct box transformers are widely available, and their simplicity makes them an ideal starting point for a fun and accessible DIY project.

Notable part numbers include JT-DB-EPC and A187A10C, both excellent examples of components that make this project approachable for beginners. The Hammond 1140-DB-A is another great catch, offering a versatile option for those eager to experiment with high-quality audio designs.

Figure 4 The 1140-DB-A direct box transformer delivers a balanced microphone output from an unbalanced line-level signal, enabling long cable runs with minimal high-frequency loss. Source: Hammond

From first steps to deeper layers

As is often the case, we have only just gotten our feet wet—there is still a vast ocean of balun transformer theory, design variations, and application nuances left to explore. From specialized wideband implementations to creative DIY builds, each path opens new insights into how these deceptively simple devices shape signal integrity across RF, audio, and video domains.

This overview is meant as a starting point, a foundation for deeper dives into the many layers of balun transformer technology that await.

Your turn: If this sparked your curiosity, take the next step—experiment with a simple antenna balun build, revisit your audio gear with fresh eyes, or explore advanced designs in RF literature. Share your experiences, questions, or even your own schematics, because the best way to deepen understanding is to connect theory with practice.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Balun transformers: Linking balanced to unbalanced appeared first on EDN.

Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating

Fri, 03/13/2026 - 14:00

We’re past the point where voice can be treated as just another feature.

For more than a decade, the smart home has operated under a flawed assumption: that voice is optional. It’s not. As homes grow more complex and connected, voice is the only interface that aligns with how people actually live.

Traditional interfaces don’t scale: touchscreens fail when your hands are full, apps demand too much attention, and remotes are always missing when you need them. Voice is the only input that works across rooms, contexts, and users, if it works reliably.

And yet, we’re still tethered to physical buttons and remote controls, because we don’t fully trust voice interfaces. They miss commands, struggle in noisy environments, and break the moment connectivity becomes unstable. That’s not a UI flaw. It’s an architectural one.

To replace the light switch, voice needs to be always available, always accurate, and always in context. That means rethinking where intelligence lives and how decisions are made.

Hybrid Voice AI architecture is not an incremental upgrade; it’s an engineering breakthrough that transforms the smart home from a scattered set of reactive gadgets into a cohesive, proactive system. By separating real-time, on-device reflexes from deep, cloud-based reasoning, this architecture is designed to make voice a trusted, primary interface, every time, in every room.

Making voice work in the real world

The flaw in current voice technology isn’t a lack of data; it’s a lack of clarity.

Real homes are acoustically chaotic. They’re full of overlapping conversations, background music, household noise, and hard surfaces that introduce echo and reverb. Users speak from different rooms, distances, and angles. Commands are often ambiguous or incomplete. These aren’t edge cases. They’re the default operating conditions.

Current cloud-only models are powerful but slow, while legacy on-device models are fast but dim-witted. Neither alone can deliver the “Star Trek” experience users crave. To achieve the non-negotiable standard of 100% reliability, we need a system that mimics the human brain’s ability to process reflexes locally and complex thoughts deeply.

In that context, today’s voice interfaces consistently fall short. Not because of a lack of data or model size, but because of fundamental architecture-level decisions about where processing happens, how quickly systems respond, and how they handle failure.

A symbiotic two-tier architecture

The innovation lies in splitting the intelligence. By decoupling immediate execution from deep reasoning, we create a system that is both instant and intelligent.

  1. The Reflex Layer – Edge AI (Supports Instant Response):
    1. Definition: Think of this as the smart home’s autonomic nervous system.
    2. Innovation: High-performance, always-on SLM (small language model) embedded directly on the device’s silicon.
    3. Function: Handles the “here and now.” Commands like “Lights on” or “Volume down” are processed locally with near-zero latency.
    4. Impact: Delivers absolute privacy and instant responsiveness. No data leaves the room, and the experience feels as immediate as flipping a physical switch.
  2. The Reasoning Layer – Cloud AI (Intelligent Coordination):
    1. Definition: This acts as the system’s prefrontal cortex—responsible for reasoning.
    2. Innovation: Leverages large language models (LLMs) to manage long-term state, memory, and complex logic across devices and use cases.
    3. Function: Handles the “what if” and “what next.” It manages household routines, coordinates multiple devices, and draws inferences from incomplete inputs (e.g., “Order dinner for whoever is home tonight.”)
    4. Impact: Enables devices to go beyond command execution—they begin to understand intent, anticipate user needs, and adapt over time (Figure 1).

Figure 1 A hybrid voice stack routes audio through on-device perception (AEC, spatial analysis, separation, intent gating) and escalates only complex requests to cloud reasoning. (Source: Kardome)
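
The split can be summarized in a few lines of code. The sketch below is not Kardome’s or any vendor’s actual implementation; the command list, function name, and routing labels are all illustrative:

LOCAL_COMMANDS = {"lights on", "lights off", "volume up", "volume down", "stop"}

def route(utterance: str, device_directed: bool) -> str:
    """Decide whether an utterance stays on the edge reflex layer or escalates to the cloud."""
    if not device_directed:
        return "ignore"      # speech not aimed at the device
    if utterance.strip().lower() in LOCAL_COMMANDS:
        return "edge"        # deterministic, near-zero-latency path
    return "cloud"           # long-horizon reasoning, multi-device planning

print(route("Lights on", device_directed=True))                  # edge
print(route("Order dinner for whoever is home tonight", True))   # cloud
print(route("no, turn it up a bit", device_directed=False))      # ignore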

Differentiation for the decade ahead

For OEMs and Tier 1 suppliers, architecture, not features, is emerging as the defining battleground for the next generation of smart home systems.

The market is saturated with devices that can set timers, play music, or toggle lights. These capabilities are now commodity. What will set future systems apart is their ability to demonstrate true Auditory Intelligence—to perceive, localize, and interpret human speech reliably, even in noisy, multi-speaker, real-world environments.

By integrating spatial hearing AI and cognition technologies into a hybrid architecture, manufacturers can go beyond individual product features and instead build the auditory nervous system of the modern home.

We are past the era of voice assistants that require users to repeat themselves or speak in rigid syntax. Hybrid Voice AI enables a different class of experience—one where technology is felt, but rarely seen.

Figure 2 Spatial processing turns a mixed audio scene (TV + two speakers + reverb) into separated target streams suitable for intent detection and command execution. (Source: Kardome)

What “reflex vs. reasoning” means

In a production voice system, “hybrid” isn’t simply “ASR on-device and an LLM in the cloud.” It’s a routing architecture with a continuously running perception pipeline that decides:

  • Is anyone speaking?
  • Who is speaking (and where)?
  • Is it directed at the device?
  • Can we execute locally, or do we need cloud reasoning?

A practical edge “reflex” stack typically includes:

  1. Acoustic front end (always-on): microphone capture → gain control / denoise → echo cancellation (to remove the device’s own playback).
  2. Spatial scene analysis: estimate how many sources exist and where they are relative to the device (near/far, left/right, different rooms).
  3. Source separation + target selection: isolate the intended speaker stream(s) and suppress competing sources (TV, music, second speaker).
  4. Speech activity detection + endpointing: stable detection of speech start/stop to avoid clipped commands and reduce false triggers.
  5. Device-directed intent gating (SLM): a lightweight model answers: “Is this speech for the device?” using spatial cues + conversational flow + linguistic signals.
  6. Execution vs. escalation:
    1. Local path: deterministic actions and short commands (“lights on,” “stop,” “volume down”) with minimal latency.
    2. Cloud path: long-horizon reasoning, multi-device planning, and tasks requiring external knowledge—only when needed.

 The engineering advantage is that the system can stay fast and predictable for everyday commands while still enabling deeper capabilities when appropriate.
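
As a structural illustration of the stack enumerated above, here is a skeleton in which every stage is a stub standing in for real DSP/ML components; the names and data layout are assumptions for readability, not an actual framework API:

from dataclasses import dataclass

@dataclass
class Hop:
    samples: list           # one hop of multi-channel audio
    directed: bool = False  # set by the intent-gating stage

def echo_cancel(hop: Hop) -> Hop:       # 1. acoustic front end
    return hop

def spatial_analysis(hop: Hop) -> Hop:  # 2. how many sources, and where
    return hop

def separate_target(hop: Hop) -> Hop:   # 3. isolate the intended speaker
    return hop

def detect_speech(hop: Hop) -> Hop:     # 4. speech activity + endpointing
    return hop

def gate_intent(hop: Hop) -> Hop:       # 5. "is this speech for the device?"
    hop.directed = True                 # stub: assume device-directed
    return hop

def process_hop(hop: Hop) -> str:
    """Run one audio hop through the always-on pipeline, then route it (stage 6)."""
    for stage in (echo_cancel, spatial_analysis, separate_target, detect_speech, gate_intent):
        hop = stage(hop)
    return "route locally or escalate" if hop.directed else "discard"

print(process_hop(Hop(samples=[0.0] * 256)))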

Why spatial audio is the “make or break” layer

Most failures in today’s voice assistants begin before language: the system is fed garbage audio (mixed speakers, reverberation, background media), then asked to “understand” it. Hybrid architectures push the hard work earlier: fix the audio scene first, then do language.

Spatial processing matters because it enables three foundational capabilities:

  • Localization: determine where speech is coming from and whether it’s in the same room.
  • Separation: isolate a voice even with overlapping speakers and media noise.
  • Attribution: reduce wrong-room actions and improve “who said what” reliability.

This is also where direction of arrival (DOA)-only approaches struggle in real homes: reflective surfaces create strong echoes and multiple delayed arrivals. A “flat” directional estimate can become unstable under reverb, causing separation and attribution errors. A more robust approach treats each source as having a unique spatial signature (an “acoustic fingerprint”) and uses that signature to stabilize separation and tracking over time.
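
For a sense of what a DOA-only estimate looks like, here is a minimal GCC-PHAT sketch for a two-microphone pair; it is exactly this kind of single delay estimate that can become unstable under reverberation. The sampling rate and synthetic test signal are illustrative:

import numpy as np

def gcc_phat_delay(sig, ref, fs, max_tau=None):
    """Estimate the delay (seconds) of `sig` relative to `ref` via phase-transform-weighted cross-correlation."""
    n = sig.size + ref.size
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.abs(cross) + 1e-15   # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(n // 2, int(fs * max_tau))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Synthetic check: a noise burst arriving 12 samples later at the second microphone.
fs = 16_000
src = np.random.default_rng(0).standard_normal(4096)
mic1, mic2 = src, np.roll(src, 12)
print(f"Estimated delay: {gcc_phat_delay(mic2, mic1, fs) * 1e6:.0f} us "
      f"(expected {12 / fs * 1e6:.0f} us)")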

Latency, offline behavior, failure modes

If voice is going to replace physical controls, reliability can’t be an aspiration—it has to be engineered with explicit budgets and test matrices.

Latency budget

Humans pause roughly 200 ms between conversational turns, while cloud round trips often land in the 1–3 second range—good enough for Q&A, not good enough for control.

The reflex path should therefore be designed so the most common commands complete without waiting on the network.

Offline and “brownout” modes

Define tiers of capability that remain functional without connectivity:

  • Tier A (must work offline): lights, volume, stop/quiet, timers, basic routines.
  • Tier B (cloud-required): deep reasoning, external services.

This avoids a binary “voice works / voice is dead” experience and increases user trust.

Failure modes that must be tested (not treated as edge cases)

  • overlapping speakers (barge-in, crosstalk)
  • competing media (TV/music)
  • far-field speech + occlusion (speaker in hallway / adjacent room)
  • changing echo paths (content and volume changes)
  • reverberant rooms (kitchen tile, open-plan living spaces)

Metrics that map to trust (beyond WER, word error rate):

  • end-to-end command success rate by scenario class
  • false accept / false reject rates for device-directed intent gating
  • speaker attribution / room attribution accuracy
  • P95 latency (not just average) for Tier A commands
  • recovery time after connectivity loss

Why privacy and economics often improve in a hybrid design

A counterintuitive benefit of edge-first reflex layers is that they can be more private and more cost-stable than cloud-streaming approaches—because a large fraction of everyday interactions can be processed locally, and the cloud is invoked only when deeper reasoning is necessary.

On the economics side, cloud inference costs scale with usage, while edge compute is amortized with silicon volume and can reduce the need for continuous cloud processing for trivial requests.

One example of this architectural direction is Kardome, which focuses on combining spatial hearing (to separate and localize voices) with an on-device context-aware SLM (to decide whether speech is directed at the system), escalating to the cloud only when deeper reasoning is needed.

Dr. Alon Slapak is the co-founder and CTO of Kardome, a voice AI startup pioneering Spatial Hearing and Cognition AI technology that enables seamless, natural voice interaction in real-world noisy environments. He holds a Ph.D. from Tel Aviv University and brings deep expertise in acoustics, signal processing, and machine learning. Alon and co-founder and CEO Dr. Dani Cherkassky launched Kardome out of a shared passion for solving end-user frustrations with voice devices, combining their expertise in acoustics and advanced machine learning to build leading-edge voice user interface technology. Kardome has raised $10M in Series A funding.

Related Content

The post Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating appeared first on EDN.

Low-cost MCUs enable smarter embedded devices

Thu, 03/12/2026 - 19:43

Leveraging ST’s 40-nm process and an Arm Cortex-M33 core, STM32C5 MCUs deliver increased speed for cost-sensitive embedded devices. The microcontrollers run faster than many entry-level chips, improving the capabilities of compact smart devices in factories, homes, cities, and infrastructure while keeping dynamic power consumption low (<80 µA/MHz).

Running at 144 MHz and achieving a CoreMark score of 593, the Cortex-M33 offers up to three times the performance of typical Cortex-M0+ devices. ST’s 40-nm cost-efficient manufacturing process supports higher clock speeds and larger on-chip memory. The STM32C5 series integrates 128 KB to 1024 KB of flash and 64 KB to 256 KB of RAM.

The MCUs are designed to meet SESIP3 and PSA Level 3 security requirements, with memory protection, tamper protection, cryptographic engines, and temporal isolation to protect processes such as secure boot and firmware updates. Variants with additional security provide hardware unique key support, secure key storage, and hardware cryptographic accelerators for symmetric and asymmetric operations.

The STM32C5 MCUs are entering production now and are available in packages ranging from 20 to 144 pins. Pricing starts at $0.64 each in 10,000-unit quantities.

STM32C5 product page 

STMicroelectronics

The post Low-cost MCUs enable smarter embedded devices appeared first on EDN.

TinyEngine NPU powers AI in TI MCUs

Thu, 03/12/2026 - 19:43

TI’s MSPM0G5187 and AM13E23019 MCUs integrate the TinyEngine NPU, enabling efficient edge AI in systems ranging from simple to complex. These latest additions to TI’s portfolio of AI-enabled hardware, software, and tools allow engineers to deploy intelligence anywhere. This announcement moves TI closer to its goal of integrating the TinyEngine NPU across its entire microcontroller lineup.

The MSPM0G5187 is powered by an Arm Cortex-M0+ 32-bit core operating at up to 80 MHz and includes 128 KB of flash. Its TinyEngine NPU is capable of running AI models with up to 90× lower latency and more than 120× less energy per inference than comparable MCUs without an accelerator. By performing neural-network computation locally, the NPU operates in parallel with the primary CPU running application code. Priced at under $1 in 1,000-unit quantities, the MSPM0G5187 brings edge AI to simpler, smaller, and lower-cost applications.

Aimed at real-time motor control, the AM13E23019 leverages an Arm Cortex-M33 32-bit core operating at up to 200 MHz and includes 512 KB of flash. It maintains precise real-time control loops for up to four motors while the TinyEngine NPU runs adaptive control algorithms. An integrated trigonometric math accelerator performs calculations 10× faster than coordinate rotation digital computer (CORDIC) implementations, enabling more responsive motor control.

The MSPM0G5187 is available now in production quantities on TI.com, while the AM13E23019 is currently available in preproduction quantities.

Texas Instruments 

The post TinyEngine NPU powers AI in TI MCUs appeared first on EDN.

Edge AI SoC integrates tri-radio

Thu, 03/12/2026 - 19:43

The i.MX 93W applications processor from NXP combines a dedicated AI NPU with secure tri-radio wireless connectivity in a single package. By eliminating the need for up to 60 discrete components, the SoC reduces board area, design complexity, and system-level costs.

Purpose-built to accelerate physical AI deployment, the i.MX 93W is supported by NXP’s software stack, eIQ AI enablement tools, and precertified reference designs that simplify RF integration. The device integrates a dual-core Arm Cortex-A55 processor and an Arm Ethos NPU capable of up to 1.8 eTOPS. Wireless connectivity is provided by the IW610 tri-radio, supporting Wi-Fi 6, Bluetooth Low Energy, and IEEE 802.15.4 for Matter and Thread.

The i.MX 93W SoC integrates an EdgeLock Secure Enclave (Advanced Profile) to support device security and regulatory frameworks such as the European Cyber Resilience Act. The enclave provides a hardware root of trust for secure boot, updates, device attestation, and device access. With NXP’s EdgeLock 2GO key management service, devices can be provisioned during manufacturing or in the field.

The i.MX 93W is slated to begin sampling in the second half of 2026.

i.MX 93W product page 

NXP Semiconductors 

The post Edge AI SoC integrates tri-radio appeared first on EDN.

200-V MOSFETs cut conduction losses

Thu, 03/12/2026 - 19:43

Two devices have joined iDEAL Semiconductor’s SuperQ 200-V MOSFET portfolio, offering very low RDS(on) in standard power packages. These two SuperQ devices are designed for demanding motor-drive applications that require high efficiency, robustness, and fault tolerance.

The iS20M5R5S1T achieves a maximum RDS(on) of just 5.5 mΩ in the compact TOLL package, enabling higher power density and reduced conduction losses in space-constrained designs. Similarly, the iS20M6R3S1P delivers a maximum RDS(on) of 6.3 mΩ in the rugged TO-220 package, providing high efficiency for applications that favor through-hole assembly, mechanical mounting, or direct heatsinking.

The new SuperQ MOSFETs feature high short-circuit withstand current and closely matched gate thresholds, with a variation of ±0.5 V, for easier paralleling. They are rated for 175 °C and can handle currents up to 151 A in the TOLL package and 172 A in the TO-220 package. Both devices are avalanche-rated and undergo 100% UIS testing in production.
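
As a rough illustration of what those on-resistance numbers mean thermally, here is a simple conduction-loss estimate (steady-state only; switching losses, duty cycle, and the rise of RDS(on) with temperature are ignored, and the 50-A operating point is illustrative):

def conduction_loss_w(i_rms_a, rds_on_ohm):
    """Steady-state MOSFET conduction loss: P = I_rms^2 * R_DS(on)."""
    return i_rms_a ** 2 * rds_on_ohm

for part, rds_mohm in (("iS20M5R5S1T (TOLL, 5.5 mOhm max)", 5.5),
                       ("iS20M6R3S1P (TO-220, 6.3 mOhm max)", 6.3)):
    print(f"{part}: {conduction_loss_w(50, rds_mohm * 1e-3):.1f} W at 50 A")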

In addition to motor drives, these MOSFETs are also suitable for switched-mode power supplies, secondary-side synchronous rectification, and other high-current industrial or battery-powered systems. 

The iS20M5R5S1T and iS20M6R3S1P are in volume production and available through iDEAL’s global distribution channels.

iDEAL Semiconductor 

The post 200-V MOSFETs cut conduction losses appeared first on EDN.

Sfera Labs debuts industrial Raspberry Pi edge systems

Чтв, 03/12/2026 - 19:43

Sfera Labs has introduced an industrial Raspberry Pi-based edge server and PLC for industrial IoT and edge applications. The Strato Pi Plus server and Iono Pi v3 controller come in DIN-rail enclosures with an embedded Raspberry Pi 4 or 5 single-board computer (SBC), delivering industrial-grade systems for automation, field communications, and IoT edge deployments that require continuous, unattended operation.

The Strato Pi Plus features a hybrid architecture that pairs the Raspberry Pi SBC with an RP2354 MCU. The RP2354 operates independently of the main processor to manage critical real-time functions and system supervision, including an independent hardware watchdog. In-field firmware updates for the RP2354 are supported via OTA, managed directly through the Raspberry Pi. Serial connectivity includes four individually opto-isolated RS-485 ports and one CAN FD interface. The Strato Pi Plus operates from an integrated 10–50 V DC supply with surge and reverse-polarity protection and a 3.3 A resettable fuse.

The Iono Pi v3 industrial PLC integrates a 9–28 V DC power supply, four power relays, high-resolution analog voltage and current inputs, and seven configurable GPIO pins. Like the Strato Pi Plus, it implements a hardware watchdog in the RP2354 MCU that operates independently of the Raspberry Pi SBC. The device also includes a real-time clock with a temperature-compensated oscillator and replaceable backup battery. An embedded Microchip ATECC608 secure element enables hardware-based authentication and cryptographic key storage.

A timeline for availability of the Strato Pi Plus and Iono Pi v3 was not provided at the time of this announcement.

Strato Pi Plus product page

Iono Pi v3 product page 

Sfera Labs 

The post Sfera Labs debuts industrial Raspberry Pi edge systems appeared first on EDN.

A long-ago blow leads to water overflow: Who could know?

Thu, 03/12/2026 - 14:00

Mechanical analogies to electronics symbols are common in other engineering disciplines. We might refer to this one, then, as akin to a battery with an internal short circuit?

I’ll warn you upfront that this particular blog post has nothing specific to do with electronics (aside, I suppose, from the potential for electrocution caused by a water-soaked calamity). That said, I’ll also postulate upfront that (IMHO, at least) it has a great deal to do with engineering in general, specifically as it exemplifies the edge and corner cases that were the subject of a previous post of mine from 2.5+ years back. Read on or not, as you wish. That said, I hope you’ll proceed!

I kicked off that prior writeup with the following prose:

Whether or not (and if so, how) to account for rarely encountered implementation variables and combinations in hardware and/or software development projects is a key (albeit often minimized, if not completely overlooked) “bread and butter” aspect of the engineering skill set… I’ve always found case studies about such anomalies and errors fascinating, no matter that I’ve also found them maddening when I’m personally immersed in them!

Speaking of the personal angle…and immersion, for that matter…😂

At our peak, my wife and I have had (several times so far…blame me, not her) up to five four-legged mammal companions concurrently sharing our residence with us. That explains the sizeable (4-gallon/15-liter reservoir) Petmate Aspen Pet Lebistro Cat and Dog Water Dispenser that we bought through Amazon at the beginning of 2020:

Amazon’s packaging robustness can be hit-and-miss; when this particular order arrived at our front door, the reservoir and base were detached and loose. And the outer box contained no packing material, let alone inner boxes for either constituent piece. Unsurprisingly, therefore, the reservoir tank had a dent in one corner (the below is a more recent picture…keep reading):

I pushed it back into place as best I could:

and then filled-and-tested the tank, which still seemed to be watertight. And then, driven by a broader longstanding abhorrence for sending functionally sound albeit cosmetically compromised stuff to the landfill, I decided to keep it and press it into service, accompanied by a successful partial-refund request made to Amazon customer service.

Fast-forward six years. We’re down (for the moment, at least) to only one (canine) companion, a fact which, as you’ll soon see, likely ended up being key. And we started finding puddles of standing water in proximity to the water dispenser on the (watertight vinyl, thankfully) laundry room floor. Did we initially accuse the dog of bumping into the dispenser, causing spills? Yes, we did. Did subsequent observation convince us that our initial theory was off base? Yes, it did. And did we then feel bad for initially and unjustly blaming the dog? Yes…we did. Bad humans. Bad!

In-depth painstaking engineering analysis (cough) eventually led to the realization that the water spills were preceded by slow-but-sure filling of the bowl all the way to the lip (and then beyond, therefore the puddles), versus the inch-below-the-lip level that the dispenser traditionally stuck to. But what had changed? Figuring this out required that I first learn about how gravity water bowls function in the first place. How do they initially fill only to the inch-below-the-lip level, and how do they then automatically maintain this level as the water is consumed by canine and feline companions, until drained (if one of the humans had forgotten to refill it, that is)?

I learned the answer from, as I’m more generally finding of late, Reddit. Specifically, from a post in the cleverly named “Explain Like I’m Five” subreddit (I’m doing my best not to take offense) titled “How do self-filling/gravity fed pet water bowls not overflow and spill everywhere?”. The entire discussion thread is fascinating, again IMHO, containing exchanges such as the following:

  • ender42y: This works for a stack up to 32 ft or 9 meters tall (at standard atmospheric pressure) at which point the top of the water tank would actually start to form a vacuum.
  • bloc97: It is a bit shorter in practice as the water will start to boil at ~2 kPa (assuming 20°C).
  • MindStalker: That’s exactly why you are limited to a column that’s about 9 meters tall, anything above that boils away.
  • bloc97: Yes, as there are two processes that determines the column height (density and vapor pressure of the fluid), we just need to make sure not to confuse the two.
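
A quick back-of-the-envelope check on the column-height figures quoted in that exchange, assuming pure water at roughly 20°C and standard atmospheric pressure, lands in the same ballpark:

P_ATM = 101_325.0   # Pa, standard atmosphere
RHO_WATER = 998.0   # kg/m^3, water at ~20 degrees C
G = 9.81            # m/s^2
P_VAPOR = 2_339.0   # Pa, water vapor pressure at 20 degrees C

h_ideal = P_ATM / (RHO_WATER * G)                 # column supported by atmospheric pressure alone
h_vapor = (P_ATM - P_VAPOR) / (RHO_WATER * G)     # slightly shorter once the water can boil at the top
print(f"Ideal column: {h_ideal:.1f} m ({h_ideal * 3.281:.0f} ft)")
print(f"Allowing for vapor pressure: {h_vapor:.1f} m")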

Again: 😂. That said, have I yet admitted what a devoted follower of the TV personality Mr. Wizard I was as a wee lad (we didn’t have YouTube back then)?

That admission explains (more than) a few things, yes? Speaking of vacuums, here’s the “money quote” from that Reddit post thread, with kudos to Redditor nestcto:

Recapping the basics, the opening acts as both the exit for the water, and the entrance for air. The air is obviously needed because under normal circumstances, you can’t just have nothing in the bottle. The water must be replaced with something. That’s where the air comes in.

So making the water leave the bottle is easy. You just have to make sure the water is creating more outward pressure to leave the bottle, than the vacuum inside trying to replace it with air.

To keep the water in, you have to make sure the water can’t create more pressure to leave the bottle than the vacuum trying to suck in air. This is more difficult because water is heavy, so gravity pushes it down a lot. The more water, the more pressure. The more pressure, the easier to overcome the vacuum.

Viscosity is a factor here as well, that I won’t go into too much. Basically, the thicker something is, the harder it is to get through a small opening.

Water isn’t very thick, but it’s much thicker than air. So there’s a point where the opening is small enough that water has trouble getting through it without some pressure behind it. The force of gravity isn’t strong enough to push the water through the small opening, and the internal vacuum is too weak to suck air in since no water has left yet to create a vacuum. So there’s a standstill.

When this happens, you may notice that you can actually make the water flow outwards by agitating the bottle. Take a needle or toothpick, and swish it around the opening. You’ll notice that some water leaves the bottle. This causes a small vacuum to replace the water. Which sucks in air. The air displacing the water to rise upwards can destabilize gravitational pressure towards the opening, causing more water to leave, and more air to come in, and next thing you know, the whole thing is emptying due to a cycle of pressure. Water out. Vacuum created. Air in. Vacuum satisfied. Water out. Vacuum created…and so on.

Now, the point at which you reach that standstill depends on a LOT of factors. But it’s pretty much always a lot easier to accomplish with less water, because you have less downward pressure to fight against due to gravity.

Ok, that’s all well and good. But it still doesn’t explain why my “self-filling/gravity fed pet water bowl” “destabilized”, as nestcto referred to it. For that, keep scrolling through the thread (admittedly, I particularly resonated, for different reasons, with the last two):

  • MrBulletPoints: Yeah if you were to jam something sharp into the top of that setup and make a hole, it would allow air in and the bowl would definitely overflow
  • verronbc: Yeah… I like to call myself smart sometimes. The first bowl like this we had our dumb 70 lb dog was scared of the bubbles it made after he drank a bit out of the bowl. My simple solution, “Oh, I know I’ll just drill a hole in the top then the air will fill from the top” yeah in a moment of weakness I forgot exactly why and how these work and caused a lot of water to drain on the floor. My girlfriend still teases me about it.
  • [deleted]: That’s why you have to punch a hole in the beer can before you shotgun it. It’s the same concept.
  • stoic_amoeba: As an engineer, I’ve genuinely had this idea cross my mind because the bowl is HEAVY when you fill it all the way. Also, as engineer, I’m a bit ashamed I didn’t immediately realize how bad an idea that’d be. I haven’t done it, thank goodness, but I should know better.

Ahem. At least some of you likely realize what happened next. I went back and looked at that reservoir again, eventually realizing that, after six years, the reservoir wall had finally become slightly compromised at the dent. If I filled the reservoir, turned it upside down from its normal position (so it was resting on its top), left it in the sink, and waited a long time, I'd eventually find that a few drops of water had leaked out of it.

The same compromise was true (in reverse) for air, of course, when it was oriented correctly and in place. And it very well might have been like this for a while, counterbalanced by the frequent water intake of multiple pets. Drop the count down to one dog, though, and…puddles. We replaced it with a smaller dispenser from the same manufacturer, the first example of which also arrived from Amazon dented, believe it or not (I shipped it back for replacement this time):

and we’re happily back to an always-dry floor again.

I’ll close with a few photos of the original base, both initially intact and then disassembled:

The way these things work is that, after filling the reservoir with water, you screw on the lid:

then turn it upside down and quickly rotate it to lock it in place here:

The lid’s hole diameter is also key, by the way, as I learned one time when I put the reservoir in place without remembering to screw on the lid first (speaking of water all over the floor)…

Note the gap around the reservoir where the lid fits. I'm guessing this is where the air comes from to replace the displaced water in the reservoir, but I haven't come across another Reddit thread to remedy my ignorance on this nuance. Reader insights on this, or anything else which my case study has stimulated, are as-always welcome in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A long-ago blow leads to water overflow: Who could know? appeared first on EDN.

GaN fundamentals: 2DEG, crystal structure, and figure of merit

Thu, 03/12/2026 - 11:33

Gallium nitride (GaN) power devices are redefining the limits of switching converters by combining wide bandgap physics with lateral HEMT structures optimized for fast, low-loss operation. This article describes GaN as the natural successor to silicon MOSFETs in the 100–650 V class, showing how material figures of merit directly translate into lower on-resistance, higher switching frequency, and much higher power density at competitive cost.

Silicon power MOSFETs have driven the evolution of switch-mode power conversion since the late 1970s, replacing bipolar transistors, thanks to majority-carrier operation, ruggedness, and ease of drive. For decades, continuous structural improvements—cell pitch, trench, and superjunction—pushed RDS(on) down while keeping breakdown capability and manufacturability. However, silicon is now essentially at its theoretical limit for unipolar devices in the 100–600 V range.

The bandgap of a semiconductor is related to the strength of the chemical bonds between the atoms in the lattice. Stronger bonds make it more difficult for electrons to transition between atomic sites. This leads to several important consequences, including lower intrinsic leakage currents and the ability to operate at higher temperatures. Both GaN and silicon carbide (SiC) exhibit significantly wider bandgaps than silicon.

The theoretical specific on-resistance RDS(on) of a majority-carrier device is constrained by the material's critical electric field, permittivity, and mobility. For a given device area (normalized here to one square millimeter), the drift region sets the trade-off between breakdown voltage and conduction loss.

The approximate breakdown voltage can be written as:

VBR = ½ wdrift Ecrit

wdrift is the drift region thickness and Ecrit is the material’s critical electric field.

The electron density available in the drift region is set by a simplified, one-dimensional Poisson relation:

q ND wdrift = ε0 εr Ecrit

Where q is the electron charge, ND the doping concentration (or equivalent electron density), ε0 the vacuum permittivity, and εr the relative permittivity.

The resistance of the drift region (again normalized to a one-square-millimeter area) is:

RDS(on) = wdrift / (q μn ND)

Combining the three expressions above yields the well-known relation between specific on-resistance and breakdown voltage:

RDS(on) = 4 VBR² / (ε0 εr μn Ecrit³)

This equation shows the dominant role of the critical field: RDS(on) scales as VBR² but inversely as Ecrit³. A material that can withstand a much higher electric field and maintain good mobility will deliver orders of magnitude lower specific resistance at the same breakdown voltage.
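
To make that cubic leverage concrete, here's a minimal sketch that simply evaluates the two expressions above at VBR = 600 V. The permittivities, mobilities, and the SiC critical field are representative textbook values rather than numbers from this article (the Si and GaN critical fields reuse the figures cited later in the text), so treat the outputs as illustrative; the ratios move around considerably with the assumed critical field precisely because it enters cubed.

```python
# Theoretical drift-region thickness and specific on-resistance limit at VBR = 600 V,
# per VBR = 1/2 * wdrift * Ecrit and RDS(on) = 4*VBR^2 / (eps0*eps_r*mu_n*Ecrit^3).
# Material constants are representative values, for illustration only.
EPS0 = 8.854e-12  # F/m

MATERIALS = {
    #          eps_r  mu_n (m^2/V·s)  Ecrit (V/m)
    "Si":     (11.7,  0.14,           0.23e8),  # 0.23 MV/cm, as cited in this article
    "4H-SiC": (9.7,   0.09,           2.2e8),   # ~2.2 MV/cm, typical literature value
    "GaN":    (9.0,   0.17,           3.3e8),   # 3.3 MV/cm, as cited; ~1700 cm^2/V·s 2DEG mobility
}

VBR = 600.0  # volts

for name, (eps_r, mu, ecrit) in MATERIALS.items():
    w_drift = 2 * VBR / ecrit                           # meters
    r_sp = 4 * VBR**2 / (EPS0 * eps_r * mu * ecrit**3)  # ohm·m^2
    print(f"{name:6s}  wdrift ≈ {w_drift*1e6:5.1f} µm   RDS(on) ≈ {r_sp*1e7:8.3f} mΩ·cm²")
```

Even with those caveats, the SiC and GaN figures land orders of magnitude below silicon, which is exactly the point Figure 1 makes graphically.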

Figure 1 Theoretical on-resistance for a one-square-millimeter device versus blocking voltage capability for Si-, SiC-, and GaN-based power devices. Source: Efficient Power Conversion (EPC)

In Figure 1, silicon, 4H-SiC, and GaN theoretical limits diverge dramatically as breakdown voltage increases. At 600 V, GaN’s theoretical specific RDS(on) is roughly two orders of magnitude lower than silicon, and significantly better than SiC, highlighting why GaN is particularly attractive in the 100–650 V class.

Crystal structure and 2DEG formation

GaN's crystal structure is a key enabler for these performance gains. Crystalline GaN adopts a wurtzite hexagonal structure, while 4H-SiC also has a hexagonal lattice but with different stacking. Both materials are mechanically robust, chemically stable, and tolerant of high operating temperatures, but GaN additionally exhibits strong piezoelectric effects due to the asymmetry of the wurtzite lattice. This effect enables GaN to achieve very high channel conductivity compared with either silicon or silicon carbide.

When a thin layer of AlGaN is grown on top of GaN, lattice mismatch and spontaneous polarization create strain at the interface. This strain, combined with the intrinsic polarization of the wurtzite structure, generates a strong internal electric field. To compensate for this field, a two-dimensional electron gas (2DEG) forms at the AlGaN/GaN interface with a sheet carrier density on the order of 10¹³ cm⁻² and electron mobility significantly higher than bulk GaN (up to 1500–2000 cm²/V·s versus ~1000 cm²/V·s). This ultra-thin, highly conductive channel is at the heart of the GaN HEMT.

Figure 2 Simplified cross section of a GaN/AlGaN heterostructure shows the formation of a 2DEG created due to the strain-induced polarization at the interface between the two materials. Source: Efficient Power Conversion (EPC)

From an electrical standpoint, this 2DEG behaves like a very low-resistance sheet: the product of carrier density and mobility (ns μn) is much higher than in a doped silicon drift region, while the conduction path is extremely short and lateral. This combination is what allows GaN devices to reach very low RDS(on) for a given chip area and breakdown rating. In addition, the wide bandgap (3.39 eV vs. 1.12 eV for silicon) yields much lower intrinsic leakage and supports higher operating temperatures.
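
For a feel of what those numbers imply, here's a minimal sketch of the sheet resistance the 2DEG presents, using the representative sheet density and mobility quoted above (actual values vary with the epitaxial design):

```python
# Sheet resistance of the 2DEG, R_sheet = 1 / (q * ns * mu), using the
# representative figures quoted in the text; illustrative only.
Q = 1.602e-19      # C, electron charge
NS = 1e13 * 1e4    # sheet density: 1e13 cm^-2 converted to m^-2
MU = 1800 * 1e-4   # mobility: 1800 cm^2/V·s converted to m^2/V·s

r_sheet = 1.0 / (Q * NS * MU)  # ohms per square
print(f"2DEG sheet resistance ≈ {r_sheet:.0f} Ω/sq")  # ≈ 350 Ω/sq
```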

GaN, SiC, and silicon: Material figures of merit

Let’s compare key material parameters for Si, GaN, and 4H-SiC: bandgap, critical field, electron mobility, permittivity, and thermal conductivity. Both SiC and GaN have wider bandgap and much higher critical fields than silicon. In addition to its wide bandgap, GaN exhibits significantly higher electron mobility than both silicon and silicon carbide, enabling faster carrier transport, higher current density, and superior high-frequency performance.

Moreover, GaN’s Ecrit is about 3.3 MV/cm, compared to 0.23 MV/cm for silicon, allowing a much thinner drift region for the same breakdown voltage. The previous RDS(on)–VBR equation directly shows that increasing Ecrit reduces the specific on-resistance by orders of magnitude.

Silicon carbide has even better thermal conductivity than GaN, which is an advantage for very high-power densities and high-voltage systems (>1 kV). However, in the mid-voltage range up to a few hundred volts, GaN’s combination of lateral HEMT structure, very high Ecrit, and 2DEG conduction gives it a superior theoretical figure of merit compared to both silicon and SiC. This positions GaN as the primary technology for replacing MOSFETs in most 40–650 V applications.

From depletion-mode to enhancement-mode GaN HEMTs

The native GaN HEMT is a depletion-mode device: at zero gate bias the 2DEG under the AlGaN barrier provides a low-resistance channel between source and drain, and a negative gate voltage is required to pinch it off. Source and drain contacts reach the 2DEG through the AlGaN layer, while the gate sits on top and modulates the channel by depleting or restoring that electron gas.

This normally-on behavior is acceptable in RF power amplifiers, but it’s problematic in switching converters, where a device that conducts at VGS = 0 V can cause shoot-through during startup or fault conditions.

For power conversion, enhancement-mode operation (normally-off) is therefore essential. With an enhancement-mode HEMT, the 2DEG is suppressed at zero gate bias and re-formed only when a positive gate voltage is applied, making its behavior similar to a power MOSFET.

Several device architectures implement this transition from depletion- to enhancement-mode:

  • In recessed-gate structures, the AlGaN barrier is locally thinned beneath the gate. Reducing the barrier thickness lowers the internal polarization-induced field to the point where the 2DEG vanishes at VGS = 0 V. A positive gate voltage then recreates the channel and allows current to flow.
  • Fluorine-implanted gates introduce negative charge into the AlGaN barrier by ion implantation. The fixed negative charge depletes the 2DEG under the gate at zero bias, shifting the threshold into the positive range. Applying a positive gate voltage compensates this charge and restores conduction.
  • In p‑GaN gate HEMTs, a thin p-type GaN layer is grown on top of the AlGaN barrier. The positive charge in this p-GaN region creates a built‑in potential that overcomes the polarization field and depletes the 2DEG at zero gate bias. When a positive voltage is applied to the gate, electrons are again attracted to the interface and the 2DEG reforms, turning the device on.
  • Hybrid solutions combine a low-voltage enhancement-mode silicon MOSFET with a depletion-mode GaN HEMT in series. In the cascode configuration, the MOSFET gate becomes the external control terminal. When the MOSFET turns on, the GaN gate is effectively driven to a voltage that enables the HEMT; when the MOSFET turns off, the GaN gate is driven negative, and the composite behaves as a normally‑off device.

All these approaches pursue the same goal: eliminate conduction at VGS = 0 V using an architecture that remains compatible with practical gate‑drive levels and offers stable threshold voltage. In practice, p‑GaN gate devices have become the most widely used in commercial power conversion, while cascode hybrids are attractive at higher voltages where the on‑resistance of the silicon MOSFET adds only a small penalty to the GaN device.

Figure 3 An enhancement-mode (e-mode) device depletes the 2DEG with zero volts on the gate (a). By applying a positive voltage to the gate, the electrons are attracted to the surface, re-establishing the 2DEG (b). Source: Efficient Power Conversion (EPC)

The second and final part of this article series about GaN technology fundamentals will explain hybrid structures and their RDS(on) penalty, as well as vertical GaN devices and how a GaN HEMT transistor is built.

Maurizio Di Paolo Emilio is director of global marketing communications at Efficient Power Conversion (EPC), where he manages worldwide initiatives to showcase the company’s GaN innovations. He is a prolific technical author of books on GaN, SiC, energy harvesting and data acquisition and control systems, and has extensive experience as editor of technical publications for power electronics, wide bandgap semiconductors, and embedded systems.

Editor’s Note:

The content in this article uses references and technical data from the book GaN Power Devices for Efficient Power Conversion (Fourth Edition) authored by Alex Lidow, Michael de Rooij, John Glaser, Alejandro Pozo Arribas, Shengke Zhang, Marco Palma, David Reusch, Johan Strydom.

Related Content

The post GaN fundamentals: 2DEG, crystal structure, and figure of merit appeared first on EDN.

Custom DIY LCR SMD fixture for low-Z components

Wed, 03/11/2026 - 16:00

Many folks have bench-type LCR meters available and employ the usual general-purpose Kelvin clips or direct-connect fixtures for most measurements. With SMD components, however, these tools and methods can make quality, repeatable measurements difficult and frustrating, especially for low-Z components.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The specialized SMD fixtures which utilize horizontal “plungers” perform well except with very low-Z components. The difficulty is due to the force and sense Kelvin connections being made at the small brass bolt that threads into the bottom of each plated brass plunger.

This leaves the path from the small brass bolt, through the plunger, to the device under test (DUT) surface connection uncompensated, relying on the zero/short calibration for correction, which often leaves more measurement uncertainty than desired. That is especially noticeable with very-low-Z SMD components such as resistive current-sensing shunts, where one desires an accurate and repeatable low-resistance measurement.

Having often suffered through measurement issues with low-Z SMD components, I saw an opportunity to investigate other approaches outside the usual expensive OEM solutions (well out of budget). One idea that came to mind was the lever toggle arm technique that worked well in the SMD adapter created for the Tek 577 Curve Tracer.

A custom PCB was developed to directly connect via 4 BNC connectors to the benchtop LCR meter, similar to the way OEM LCR meter fixtures behave. The SMD DUT would be held in place against the PCB exposed surface with the lever toggle arm similar to the concept with the Tek 577 adapter.

Both sides of the PCB were originally utilized to allow the lever arm to be located on the left or right and not interfere with the LCR meter controls and display as shown in Figure 1. This also shows the Tek 577 adapter along with another PCB version which doesn’t have direct BNC connections. 

Figure 1 A custom PCB developed to directly connect to the benchtop LCR meter via 4 BNC connectors. 

Figure 2 shows the LCR meter connection on a Tonghui TH2830 bench LCR meter.

Figure 2 The custom PCB connected to a Tonghui TH2830 benchtop LCR meter.

The LCR meter and SMD DUT fixture connections for the meter's Hcur and Lcur (force) terminals contribute significant impedances (some meters can deliver over 100 mA), which can produce errors at the meter's Hpot and Lpot (sense) terminals and affect results.

In the split-Kelvin PCB technique developed here, the SMD DUT makes contact with the exposed PCB surface, but the force and sense connections are made separately at each of the DUT's end terminals because the PCB contact area is "split" between force and sense on both the high and low sides. This keeps the impedances "looking back towards the meter", along with the highly variable DUT contact impedance, within the meter's Kelvin control, significantly reducing DUT measurement uncertainty.
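
A quick numerical sketch shows why this matters for, say, a milliohm-class shunt; the resistances below are illustrative placeholders, not measurements taken with this fixture:

```python
# How an uncompensated contact path corrupts a low-Z measurement.
# Illustrative numbers only.
r_dut = 0.002     # ohms: a 2-mΩ current-sense shunt
r_uncomp = 0.001  # ohms: contact/plunger resistance left outside the Kelvin sense
                  # points and not fully removed by the zero/short calibration

apparent = r_dut + r_uncomp
print(f"true {r_dut*1e3:.2f} mΩ, apparent {apparent*1e3:.2f} mΩ, "
      f"error {100 * (apparent - r_dut) / r_dut:.0f}%")
# With the split-Kelvin contact, force and sense meet at the DUT terminal itself,
# so this variable term sits inside the meter's Kelvin loop and largely drops out.
```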

Another PCB version was also developed that doesn't host the BNC connectors; it is smaller and fits onto a small supporting case carrying the BNC connectors, a repurposed inexpensive LCR-meter Kelvin cable/clip case, as shown in Figure 3 and Figure 4.

Figure 3 Layout of another custom PCB version developed to directly connect to the benchtop LCR meter via 4 BNC connectors. 

Figure 4 The alternative fixture developed, where the PCB does not host the BNC connectors. Instead, it is smaller and fits onto a smaller supporting case with the BNC connectors.

Note in Figure 5 where a shield was added in the case between the high- and low-side BNC connectors to improve isolation.

Figure 5 Wiring inside the smaller, alternative fixture that directly connects to the benchtop LCR meter via 4 BNC connectors. 

Various PCBs were investigated over the span of a few months and it was observed that the PCB surface contact with the SMD DUT could be improved by having gold-plated contact areas and/or by increasing surface contact roughness.

Surface roughness was improved by adding copper filings mixed with solder paste and flux and reflowed onto the DUT contact, see Figure 6.

Figure 6  Surface roughness was improved by adding copper filings mixed with solder paste and flux and reflowed onto the DUT contact. 

A small piece of thin copper sheet cut to 2512 size makes a good zero/short calibration reference device. Be cautious with so-called "zero-ohm" SMD components: these were found to have significant impedance in nearly all sizes, and the custom-cut thin copper proved the better reference.

Operation with the Hioki IM3536 LCR meter is shown in Figure 7.

Figure 7 Testing "zero-ohm" SMD components on the custom fixture for low-Z components with the Hioki IM3536 LCR meter.

Anyway, these various custom fixtures have proven beneficial in daily LCR SMD measurements, and the latter version with the repurposed case (Figures 4, 5, and 6) has been especially useful, highly stable, and repeatable. Hopefully others will find these custom DIY LCR meter fixtures useful.

Michael A Wyatt is a life member with IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat and retiring (semi) with Wyatt Labs. During his career he accumulated 32 US Patents and in the past published a few EDN Articles including Best Idea of the Year in 1989.

Related Content

The post Custom DIY LCR SMD fixture for low-Z components appeared first on EDN.

5 decade antilogarithmic PWM current source

Wed, 03/11/2026 - 14:00

The pages of Design Ideas (DIs) have recently been awash in a veritable cascade of designs for variable frequency oscillators with frequency ranges tunable over multiple decades:

But despite the size of this crowd, a notable feature missing from all is provision for digital control (e.g., from an MCU GPIO pin) of the oscillation frequency. This DI will address that topic.

Wow the engineering world with your unique design: Design Ideas Submission Guide

When starting the design of any digital to analog interface, the first question to be answered is how much resolution (bits) do we need? For the applications listed above, the answer isn’t obvious. That’s because of the extremely wide range  of  the analog quantity (frequency) involved, e.g., 100,000:1 for Christopher Paul’s 5-decade 10 Hz to 1 MHz sawtooth generator.

Five decimal decades correspond to 10 ppm (1 part in 100,000), equivalent to a linear binary resolution of 16.6 bits. So even if we went with the overkill choice of 16 bits (1/65536 = 15 ppm), we'd still lose resolution at the bottom end. The first least-significant-bit (lsbit) increment up from 10 Hz would comprise a 15 ppm of 1 MHz = 15 Hz jump to 25 Hz, nearly trebling the output frequency.

Figure 1's circuit takes an approach very different from linear conversion. Working from a mere 8-bit PWM, it makes the lsbit incremental resolution constant and uniformly distributed at ~5% of output. Here's how it works.

Figure 1 Antilogarithmic 8-bit PWM gives a constant incremental resolution of ~5% per lsbit. Asterisked parts are 1% or better precision (metal film or C0G).

Antilog conversion occurs in a four-step, ~1-ms cycle defined by the combined states of the GPIO PWM bit and the D flip-flop, decoded by the 4052 analog switch as shown in Figure 2.

Figure 2 Tw = antilog RtCt timeout = 1 to 250 counts = 2 to 500 µs, where
PWM = 1 + 21.63*Ln(Imax/Iout)

The antilog conversion sequence is as follows: 

  • BA = 3: duration 12 µs. Timing capacitor Ct charged to Vdd – 1.24 V.
  • BA = 2: duration Tw = 2 µs to 500 µs. Ct exponentially discharged toward Vdd with time-constant RtCt = 43.4 µs.
  • BA = 1: duration 0 to 498 µs. Ct residual charge transferred to Csh sample and hold cap.
  • BA = 0: duration 2 µs to 500 µs. Ct residual charge continues to transfer to Csh.

At the end of each 4-step, 1024-µs cycle, Csh will converge toward a voltage relative to Vdd of between 12 µV and 1.2 V, determined by the antilog of the 2-µs to 500-µs duration of phase 2 of the conversion sequence. The 1-µV typical input offset of the LT2066 makes this adequate for (reasonably) accurate digital-to-analog conversion. Convergence of Vcsh to 8-bit precision takes a maximum of 8 cycles = 8.2 ms.
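
For anyone who wants to play with the transfer function before building anything, here's a minimal sketch of the Figure 2 relationship, inverted to give output current versus PWM count. The full-scale current Imax is just a placeholder, since its actual value depends on how Figure 1 is scaled:

```python
import math

# From the Figure 2 caption: PWM = 1 + 21.63*ln(Imax/Iout), inverted here.
# Each PWM count adds 2 µs to Tw (1 to 250 counts = 2 to 500 µs).
IMAX = 1e-3  # amps, placeholder full-scale current

def iout(pwm_counts: int) -> float:
    """Output current for a PWM count of 1..250."""
    return IMAX * math.exp(-(pwm_counts - 1) / 21.63)

for pwm in (1, 2, 50, 125, 250):
    print(f"PWM={pwm:3d}  Tw={2*pwm:3d} µs  Iout={iout(pwm):.3e} A")

# Each extra count scales the output by exp(-1/21.63) ≈ 0.955, i.e., a constant
# ~4.5% step per lsbit across the full 5-decade span.
print("per-lsbit ratio:", math.exp(-1 / 21.63))
```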

Final conversion of the resulting 5-decade current source to a 5-decade frequency output (the point of the exercise) can be done simply (if admittedly kind of crudely) with the circuit in Figure 3.

Figure 3 A minimal 5-decade sawtooth oscillator that enables final conversion of the resulting 5-decade current source to a 5-decade frequency output.

Or it can be done much more precisely with Christopher Paul’s DI by substituting Figure 1 for his original resistor-programmed current source (highlighted in yellow), as shown in Figure 4.

Figure 4 Maximal 5-decade sawtooth oscillator, using Christopher Paul’s DI.

Figure 5 Log (red) and linear (black) plot of source current versus PWM.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

 Related Content

The post 5 decade antilogarithmic PWM current source appeared first on EDN.

Fuel cell sensors: From breath to benchmark

Wed, 03/11/2026 - 11:38

Fuel cell sensors are electrochemical devices designed for precise measurement. In measurement applications, they have become the gold standard for breath alcohol concentration detection, valued for their ethanol specificity, stability, and courtroom-grade accuracy. Compact and low power, they form the backbone of law enforcement breathalyzers, workplace safety programs, and consumer devices, consistently outperforming semiconductor and infrared (IR) alternatives.

Their proven reliability in complex breath matrices has made them indispensable for safety and compliance, while ongoing innovation is extending their reach into broader analytical domains. And while fuel cells generate clean energy, fuel cell sensors generate precise measurements—a distinction that defines their unique role in modern technology.

Applications and history

Before we get into the basics of how fuel cell sensors work, it’s worth noting their application landscape. While research has explored microbial fuel cell biosensors for environmental monitoring and niche industrial uses, the overwhelming commercial reality today is breath alcohol concentration (BAC) measurement.

Fuel cell sensors have become synonymous with BAC detection because of their unmatched ethanol specificity, stability, and courtroom-grade accuracy. Although BAC formally refers to blood alcohol concentration, in practice it is estimated through breath alcohol analysis. This singular focus has defined their role in law enforcement, workplace safety, and consumer devices, making BAC not just their flagship application but essentially their identity in the marketplace.

The technology itself traces its roots to the 1960s, when early electrochemical cells were adapted to detect ethanol in breath samples. By the late 1970s and early 1980s, law enforcement agencies began adopting fuel cell-based breathalyzers, recognizing their superior specificity compared to semiconductor sensors.

Over time, improvements in miniaturization, catalyst stability, and calibration protocols transformed them from bulky instruments into compact, portable devices. This evolution cemented fuel cell sensors as the trusted backbone of alcohol detection, setting the stage for their enduring role in safety and compliance.

Figure 1 A compact breathalyzer with a fuel cell breath alcohol sensor—Alcotest 4000—simplifies portable BAC measurement. Source: Dräger

As a quick aside, while fuel cells rely on chemical reactions, IR spectroscopy uses light to identify alcohol’s unique spectral fingerprint. By directing an IR beam through a breath sample, the instrument measures the specific wavelengths absorbed by ethanol molecules.

This physics-based method is non-destructive and highly precise, enabling real-time detection of “mouth alcohol” that could otherwise distort results. Because of their sophistication, accuracy, and long-term stability, IR units are reserved as definitive, desktop-based instruments in police stations, providing the courtroom-grade evidence required for testimony.

Fuel cell breath alcohol sensors

Now is the time for a gentle dive into a bit of theory and practice. At their core, these sensors operate on an electrochemical principle: ethanol molecules in exhaled breath are oxidized at a platinum electrode, producing an electrical current directly proportional to concentration. This reaction is simple yet elegant, converting chemical energy into a measurable signal that reflects blood alcohol concentration (BAC).

In practice, this design delivers a combination of portability, stability, and specificity that has made fuel cell sensors the dominant choice for breath alcohol testing. Unlike semiconductor sensors, which can be affected by other volatile compounds, fuel cells respond almost exclusively to ethanol.

Their compact form factor allows integration into handheld devices, while their long-term consistency ensures reliable results in roadside, workplace, and consumer contexts. This balance of theory and application explains why fuel cell sensors remain the benchmark technology for BAC measurement today.

In a nutshell, a fuel cell breath alcohol sensor is essentially a pair of platinum electrodes immersed in a dilute acid electrolyte. When a trace amount of ethanol from exhaled breath reaches the electrodes, it undergoes oxidation, releasing electrons that flow as current. The magnitude of this current is directly proportional to ethanol concentration, providing a simple yet highly reliable way to quantify blood alcohol concentration.

And fundamentally, the fuel cell breath alcohol sensor consists of a porous, chemically inert layer coated on both sides with finely divided platinum black. The porous layer is impregnated with an acidic electrolyte solution, and platinum wire connections are attached to the platinum black surfaces. The assembly is mounted in a plastic case with a gas inlet for introducing a breath sample. While manufacturers add proprietary refinements to this design, the basic configuration is shown in Figure 2.

Figure 2 Drawing illustrates the basic construction of a fuel cell breath alcohol sensor. Source: Author

Hands-on with fuel cell alcohol detection

For those eager to explore fuel cell alcohol sensors, the FS00702 electrochemical ethanol content module offers a robust solution. This fuel cell–type sensor operates through oxidation and reduction reactions at the working and counter electrodes, generating charges that form a measurable current. The current's magnitude is directly proportional to alcohol concentration, in accordance with Faraday's law, enabling accurate determination of ethanol levels.

Equipped with a high-stability gas sensor and a high-performance microprocessor, the module supports both UART and analog signal outputs for seamless integration. Its precise automatic calibration and advanced detection systems minimize human interference, ensuring consistent accuracy and reliability in large-scale production environments.
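
As a rough illustration of the signal chain (and only that), here's a minimal sketch of converting an analog output voltage into an estimated breath alcohol figure. The sensitivity and baseline constants are hypothetical placeholders; a real FS00702, or any other module, must be scaled per its own datasheet and calibration data, and its UART mode reports a processed reading directly:

```python
# Hypothetical linear scaling from analog output voltage to breath alcohol
# concentration. Constants below are placeholders for illustration only.
SENSITIVITY_V_PER_MG_L = 1.5  # volts of output per mg/L of breath alcohol (assumed)
BASELINE_V = 0.05             # clean-air output voltage (assumed)

def breath_alcohol_mg_per_l(vout: float) -> float:
    """Estimate breath alcohol concentration (mg/L) from the analog output voltage."""
    return max(0.0, (vout - BASELINE_V) / SENSITIVITY_V_PER_MG_L)

print(breath_alcohol_mg_per_l(0.50))  # 0.50 V -> 0.3 mg/L under these assumed constants
```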

Figure 3 Highlighting FS00702 key specs: enabling makers to detect ethanol with precision, rapid updates, and easy microcontroller integration. Source: Henan Fosen Electronics Technology

As a side note worth mentioning, ethanol is one specific type of alcohol—the compound found in beverages and fuels—whereas “alcohol” broadly refers to a family of related molecules such as methanol, propanol, and isopropanol.

Fuel cell sensors like FS00702 are calibrated for ethanol detection since it’s the relevant analyte for intoxication measurement and fuel monitoring. While the sensor may respond to other alcohols, its accuracy is optimized for ethanol, making precise terminology important in technical contexts.

Practically speaking, sourcing high-quality fuel cell alcohol sensors for hobbyist projects is challenging, since most manufacturers prioritize finished breathalyzer units or bulk industrial modules.

Still, there are accessible alternatives to FS00702 for makers who value the accuracy and specificity of fuel cell technology. The Dart Sensors 2-Electrode fuel cell is considered a gold standard for precision, though it requires a custom amplifier circuit.

Fosensor’s FS00701 provides a smaller footprint than FS00702, ideal for portable builds. Meanwhile, FS00702 itself remains versatile, offering both raw analog output for custom conditioning and a built-in UART option for straightforward microcontroller integration.

Winsen’s ZE321 automotive alcohol module offers a compact design with a convenient UART interface, making it more user-friendly for DIY integration. The ZE321 module operates on the fuel cell electrochemical principle. When the built-in pressure sensor detects exhaled air flowing through the sampling tube at the required rate, the solenoid valve quickly opens to admit a measured volume of breath.

Within the sensor, alcohol and oxygen undergo a redox reaction, generating an electrical current proportional to ethanol concentration. The module’s circuitry measures this current and, after algorithmic processing, outputs an accurate determination of breath alcohol content.

Figure 4 The ZE321 automotive alcohol module monitors exhaled breath flow, samples a fixed volume of gas, and actively detects alcohol content through its fuel cell electrochemical reaction. The onboard circuitry processes the resulting current signal to deliver accurate breath alcohol measurements. Source: Winsen

Accuracy today, innovation ahead

In practical terms, fuel cell–based alcohol testing devices deliver the highest accuracy in measuring breath alcohol content, leaving little room for error. Even so, it’s wise to allow for a small margin of discrepancy. When evaluating any alcohol detection instrument—whether for personal safety, workplace compliance, or automotive use—the sensor type is critical. If precision matters most, fuel cell sensor technology remains the benchmark to aim for.

For makers and engineers, the challenge is clear: fuel cell sensors are not confined to alcohol testing; they are gateways to precision sensing, sustainable energy, and inventive applications across domains. Experiment boldly, share your builds, and push the boundaries of what these devices can achieve. The next breakthrough could start on your workbench.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Fuel cell sensors: From breath to benchmark appeared first on EDN.

Newer, shinier DMM RTDs—part 1

Tue, 03/10/2026 - 14:00

This two-part Design Idea (DI) follows on from a couple of previous articles relating to 100-Ω platinum resistance temperature detectors (Pt100 RTDs). The first of those (which we’ll call Ref 1) used a simple current-driven bridge to give an output of 1 mV/°C (or /K, if you prefer) that could be read directly on a DMM, while the second (Ref 2) had a ratiometric output to emulate an NTC thermistor but with greater range and accuracy.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Ref 1 was useful but too simple: it was precise at its calibration temperatures of 0 and 100°C, but had an inherent error of nearly 0.4° at 50°C because an RTD's resistance is not quite linear with temperature. Ref 2 compensated for that with good precision—and discussed how the Callendar–Van Dusen (CVD) equations are key to doing so—but was rather specialized.
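
That "nearly 0.4°" figure is easy to reproduce. Here's a minimal sketch using the standard IEC 60751 Pt100 CVD coefficients for temperatures above 0°C (published standard values, not anything unique to this design), comparing the true CVD curve against a straight line drawn through the 0°C and 100°C calibration points:

```python
# Standard IEC 60751 Pt100 Callendar–Van Dusen coefficients for T >= 0°C.
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def r_pt100(t_c: float) -> float:
    """Pt100 resistance above 0°C per the CVD equation."""
    return R0 * (1 + A * t_c + B * t_c**2)

# Two-point (0°C and 100°C) linear calibration, as in Ref 1:
slope = (r_pt100(100.0) - r_pt100(0.0)) / 100.0  # ohms per °C

def t_linear(r: float) -> float:
    """Temperature indicated by the straight-line calibration."""
    return (r - R0) / slope

err_50 = t_linear(r_pt100(50.0)) - 50.0
print(f"Indicated error at a true 50°C: {err_50:+.3f} °C")  # ≈ +0.38 °C
```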

Finalizing and extending the circuit must wait for the second part of this DI. Its first part will use the heart of Ref 2 to implement the function of Ref 1 and find out what else needs fixing.

That heart is the fairly conventional circuit shown in Figure 1.

Figure 1 A simple circuit feeds the RTD, amplifies the resulting voltage, and uses some positive feedback to compensate for the sensor’s non-linear response to temperature.

Fairly obviously, Vref and Rfeed drive current through the RTD producing a voltage that is amplified by 1 + Rgain1 / Rgain2. That voltage is only nearly proportional to absolute temperature, so Rpfbk adds a little positive feedback to (almost) linearize the output. Its value is critically dependent on Rfeed and the gain, and, as described in Ref 2, is best found by iterated simulation. (Though later, we’ll see a useful shortcut.) Figure 2 shows the resulting error curve, which scarcely changes for gains above ~3 once Rpfbk has been optimized.

Figure 2 With compensation, the circuit’s output can be very close to ideal. (Real-world components may modify this somewhat.)

Our aim is to make a box that will give a DMM-useful 1 mV/°C output, but properly compensated. Figure 1’s circuit was a good starting point; now Figure 3 shows the end point.

Figure 3 Compensated gain stage A1a gives an output of just over 1 mV/°C, with an offset. A1b generates a voltage corresponding to that offset at 0°C. Tracking of the two op-amp halves should minimize errors.

A1a works just like Figure 1, using a 1.24 V reference. Using 3k3 for Rfeed and a gain of 6.6, its output sits close to 258 mV for an RTD resistance of 100Ω (0°C) and increases by ~1.05 mV/°C, which is dropped to a precise 1 mV/°C by R6 and R7. Keeping the gain trim passive and away from A1a’s feedback loops avoids any interactions. R5, our former Rpfbk, was calculated—or rather, homed in on—in the same way as its counterpart in Ref 2, using successive approximations in the graphical sim until the error curve was flattest.

A1b provides an offset reference at that ~258 mV level, so that 0°C at the sensor will give 0 mV across the outputs. It's basically a clone of A1a to ensure good thermal matching. Calibration is easy: set the 0°C/0 mV point with R14, then trim R6 for an exact 100 mV at 100°C.

Even easier calibration

Ice-buckets and kettles are not really needed yet and are best saved for the final calibration with the actual sensor connected. For experimenting and troubleshooting, make up a gadget involving a carefully-selected 100-Ω resistor, a nominal 39 Ω with something in parallel to give 38.5 Ω, and a decent switch to short out the latter pair. Now you can easily flick between simulated 0 and 100°C inputs: easier and quicker than my original pot-based kludge.

Nicely balanced?

As noted above, A1b’s circuit is very similar to A1a’s. A trimmable resistive network could provide the reference, but this active approach ensures that any thermal effects in A1a will be balanced by those in A1b. After all, if two identical op-amps are sat side-by-side on a sub-squillimeter speck of silicon, they will behave identically, especially where temperature drifts are concerned, right?

Wrong!

Figure 3’s circuit worked perfectly, but for one thing: it wasn’t temperature-stable—not a good thing in a thermometer. Checking half-a-dozen MCP6002s mostly showed bad input-offset mismatches between the two halves. Those could be trimmed out, but unbalanced temperature drifts couldn’t—and they predominated, leading to reading errors of up to 1° for a 10° change in the circuit’s temperature. I did find one IC that was okay, but making this idea work properly called for a slightly different approach. All will be revealed in Part 2.

That feedback resistor

For the greatest accuracy at and around the 0 to 100°C calibration points, the value for Rpfbk is critical. (Those points are also used to define the slope against which the response is checked.)

For a wider range but with a different balance of errors, Rpfbk needs to be reduced. (Again, we’ll explore that further in Part 2.) Throughout, R5 or Rpfbk is shown to 4 or 5 places; it needs a little work to find the best series/parallel combinations. All other components were chosen from E12/24 values, though some need to be closely toleranced.

Now for that shortcut to determine Rpfbk. Some work with the simulator gave optimized values for Rpfbk with gain values from 4 to 100, with ad hoc curve-fitting suggesting equations giving good approximations to the target values. Here they are:

                Rpfbk = Rfeed × Gain × k
                where Rfeed = 3k3
                k = (2.19 – exp(1 / Gain)) / 2.70 [for Gains from 4 to 10]
                or k = (2.035 – exp(1 / Gain)) / 2.31 [for Gains from 8 to 100]

“Good approximations” means that the errors are always <0.05°C and mostly around 0.01°C, giving a good fit for temperatures from below -55 to above +125°C. If R3–5 are within 0.1%, errors due to their tolerances will be in the same range. All these circuits use a 3k3 feed resistor; I’ve not checked these equations with other values.
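
For convenience, here's that shortcut wrapped into a small function. It assumes the same 3k3 feed resistor and, per the text above, has only been checked against simulation over gains of 4 to 100:

```python
import math

RFEED = 3300.0  # ohms; the equations were only derived for a 3k3 feed resistor

def rpfbk(gain: float) -> float:
    """Approximate optimum Rpfbk for a given gain, per the curve-fit above."""
    if 4 <= gain <= 10:
        k = (2.19 - math.exp(1 / gain)) / 2.70
    elif 10 < gain <= 100:  # the two quoted ranges overlap between 8 and 10
        k = (2.035 - math.exp(1 / gain)) / 2.31
    else:
        raise ValueError("the fit is only claimed for gains of 4 to 100")
    return RFEED * gain * k

print(f"Gain of 6.6 (as in Figure 3): Rpfbk ≈ {rpfbk(6.6):.0f} Ω")
```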

Await Part 2

We now have a basic circuit capable of decent performance, apart from its own tempco. The second part of this DI will fix that flaw, show some interesting variants—hence the plural in the title—and even add some bells and whistles. Think of this part as the theme, with Part 2 exploring the variations.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and a technical security kit. He has at last retired. Mostly. Sort of.

Related Content

The post Newer, shinier DMM RTDs—part 1 appeared first on EDN.

TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again

Mon, 03/09/2026 - 17:55

How visibly different (if at all) inside are two generations of smart plugs, and is the more recent device’s comparative connectivity issue due to hardware, software, or a combination of the two?

Back in early December, EDN published my initial write-up in a planned series of posts covering experiences setting up and using devices from TP-Link’s two somewhat-overlapping smart home hardware, software, and service ecosystems, Kasa and Tapo. The first two products I’ve tried out (I’ve since added several more to the stable; stand by for additional details in future blog posts and teardowns) were both Kasa-branded and were also both smart plugs: the HS103, which I subsequently dissected here:

and its more diminutive successor, the EP10:

Newer is not necessarily better

My so-far sample set is small, so conclusions should be accordingly calibrated. That said, I’ve had no issues with any of the multiple HS103 devices I’ve so far activated here at the residence, whether in the initial setup steps or during subsequent usage. The same can’t be said, however, for the EP10. None of the devices I tried in either of the first two four-packs I purchased would successfully setup-connect to my Wi-Fi network. But both devices in the third two-pack worked fine…at least until I subsequently disassembled one of them. Meet today’s teardown candidate, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

This particular two-pack was sourced from Amazon’s Resale (formerly Warehouse) sub-site, therefore rationalizing the non-TP-Link sticker stuck to the top of the box:

And since, as I’d mentioned previously, I got the idea to do a comparative teardown between the HS103 and EP10 after sending back for refund the original two four-packs of the latter, I can’t say whether their hardware versions matched this device’s v1.6 ID. v1.0 and v1.8 EP10 designs have also been shipped by the company (all three with multiple firmware releases):

Inside…

and underneath a sliver of literature, along with a bit of protective foam:

is our patient:

whose sibling, I’ve already noted, is in active use:

Chassis compaction and invasion

Some as-usual overview shots to start; the EP10 has dimensions of 2.36 x 1.50 x 1.21 in (60 x 38 x 33 mm) and weighs 0.13 lb (59 g) versus its slightly heftier HS103 predecessor at 2.62 x 1.57 x 1.5 in (66.5 x 40 x 38 mm) and 0.25 lb (113 g):

The LED-augmented on/off, pairing and reset switch is on the left side this time:

Theoretically, at least, the visible presence of a screw head implies a potentially simpler disassembly process as compared to the HS103 of the past. We shall see…

Once again, there’s a seam-inclusive topside, suggestive of the pathway inside:

And, last but not least, the bottom-side stamped specification suite, including the always-insightful FCC ID (2AXJ4EP10):

Speaking of pathways inside, let’s take the first step in the journey, shall we?

I wish I could say the two halves of the case then separated straightaway…but that’d be a lie:

Still, the mission was eventually accomplished, this time with an added bonus: no blood loss!

This YouTuber’s video (which, although it claims to be of an HS103, is actually of an EP10; note the switch location, along with glimpses of the bottom-side markings) bolsters my opinion as to the device’s lingering disassembly difficulty. Alas, I didn’t come across it until afterwards:

Comparatively boring front half first:

including a closeup of the left-side mechanical switch’s translucent insides:

Hardware commonality and variation

Now for the (rear) half I suspect you all mostly care about:

The relay on the right side is, at least in my v1.6 hardware version of the design, a Hongfa HF32FV-16, the exact same component I found a month back in my HS103 teardown:

However, the one in the video I just showed you, complete with a convenient “v1.8” hardware version sticker atop it, is blue in color, therefore presumably from a different manufacturer. As is the one shown in the FCC certification internal photos, which is sticker-less, but I’m assuming it references the initial v1.0 hardware design. And now for the other end, containing the digital and RF (control and wireless communications) sections, of which I’m most interested, both in an absolute sense and functionally relative to the HS103 predecessor:

Once again, there’s the on/off, pairing, and reset switch, this time right next to the LED, and with both now surrounded by the previously encountered LED-only light leak-preventing foam. The embedded antenna runs along the PCB’s right edge. And the “brains” of the operation at the end of the antenna are seemingly also the same as in the HS103: Realtek’s RTL8710, which, as I noted before, supports a complete TCP/IP “stack” and integrates a 166 MHz Arm Cortex M3 processor core, 512 Kbytes of RAM, and 1 Mbyte of flash memory. The only differences, perhaps reflective of a silicon revision, are in the IC’s bottom two marking lines. The IC in the HS103 says:

08F01H3
G038A2

while the Realtek RTL8710 in the EP10 design is marked as follows for the 2nd and 3rd lines:

08EL0C1
G031A2

The rest of the story

Alas, and as with the HS103 precursor, I was unsuccessful in my attempt to free the EP10’s PCB from the rear-half case within which it was ensconced. I’ll alternatively attempt to pacify your curiosity by first pointing out that a scattering-of-passive PCB backside image is included in the FCC certification internal photo set. And I’ll also point you toward another video, this one also showing both PCB sides but also more broadly of interest to me (and you as well, I suspect):

I found it within a Reddit post I stumbled across while doing my initial research. The OP (original poster, for those of you not yet familiar with frequently used Reddit verbiage) had an EP10 whose relay had developed perpetually clicking behavior. Turns out one of the “can” capacitors on the board had gone bad; replacing it restored normal functionality (not to mention ending the din). Note that the relay in the version of the hardware shown in this video (which I think also says v1.8, although the video-frame images aren’t clear) is also blue in color.

(Not-) working theories

This internal information is all well and good, I hope you agree, but it still doesn’t answer my fundamental question: why was I successful in using only a subset of the EP10s I tried setting up? I’ll first reiterate something I said in my initial December 2025 coverage:

I wondered if these particular smart plugs, which, like their seemingly more reliable HS103 precursors, are 2.4 GHz Wi-Fi-only, were somehow getting confused by one or more of the several relatively unique quirks of my Google Nest Wifi wireless network:

  1. The 2.4 GHz and 5 GHz Wi-Fi SSIDs broadcast by any node are the same name, and
  2. Being a mesh configuration, all nodes (both stronger-signal nearby and weaker, more distant, to which clients sometimes connect instead) also have the exact same SSID.

If I was right, the issue might have been caused by an EP10 software shortcoming, which a newer version of the firmware could conceivably resolve. But this leads to a chicken-and-egg situation. Downloading and installing the latest firmware to the device requires that I first connect the EP10 to TP-Link’s “cloud” firmware repository via my smartphone intermediary. But absent a sufficiently functional initial firmware version, I can’t get the device online in the first place. To wit, note that the TP-Link devices’ lack of Bluetooth support precludes using this alternative wireless communications interface to get them updated; it’s Wi-Fi or nothing.

A fundamental hardware limitation is also a possibility, of course. Via both documented and pictorial evidence, I’m aware (as, now, are you as well) of at least three different hardware versions of the EP10. For that matter, TP-Link’s website currently lists six different hardware versions of the HS103 “in the wild”, ranging from v1.0 to v5.8. All five of the HS103s currently active in my home are v5 units, the Kasa app conveniently tells me via the Device Info screen in each device’s advanced settings. Again, the sample sizes are small and therefore statistically suspect: did I just get lucky with the HS103s, and unlucky with the first two batches of EP10s?

With that, I’ll wrap up and refer you to the comments section below for any answers you might be willing to publicly posit for my closing questions, and/or any other thoughts you might have! Stay tuned, as I alluded to earlier both in this post and a prior one in the series, for additional teardowns to come of products from both TP-Link’s Kasa and Tapo smart plug families, along with other, potentially even more interesting, smart home ecosystem devices.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again appeared first on EDN.

Analog IC longevity is an underappreciated reality

Mon, 03/09/2026 - 11:06

I recently saw an announcement from a major IC vendor, posted in September 2025, letting users know that “STMicroelectronics sets 20-year availability for popular automotive microcontrollers.” The news is that ST was committed to maintaining the cited parts for 20 years instead of their present 15-year assurance.

“Good for them” was my first thought, as that’s the right thing to do for both their OEM and actual vehicle customers. After all, with the average age of cars on the road in the United States approaching 15 years, with little sign of that trend slowing or even leveling off, that makes sense.

There are two presumed reasons for the longer lifetime. First, cars are built better; the “rust-bucket” and “fall apart” tendencies of many of those pre-1980/90 cars have greatly diminished due to better design, materials, paints, tests, and processes. Second, the cost of a new car is so high that even costly repairs make sense for many.

Ironically, those less reliable, mostly mechanical cars did have one major virtue: they were repairable then and can generally be repaired/restored even today. Many of their old parts are available via specialty sources either as “new old stock” (NOS) or slightly used. And those that can’t be sourced can be machined or 3D printed if the owner has time and resources.

The issue is not limited solely to cars; unavailable mechanical assemblies are a very different case than electronic ones. In 2022, a team at Verisurf was contracted by the U.S. Air Force to reverse engineer and recreate a 300-piece “throttle quadrant” from the E-3 Airborne Early Warning and Control System (AWACS), by disassembling an existing unit piece-by-piece (Figure 1). See “Reverse Engineering the Boeing E-3 Sentry’s Secondary Flight Controls”.

Figure 1 This throttle quadrant from an E-3 AWACS radar aircraft was recreated via precise piece-by-piece measurement and fabrication of each of its 300 pieces. Source: Verisurf

They used a combination of tools, including basic calipers, advanced metrology systems, CAD/CAM software, close-up photographs, and more to capture and then recreate this control unit to tolerances of better than 0.005 inches.

For the computers-on-wheels electronics of today’s cars, it’s a very different reality. Will you be able to get an engine control module, or one of the other hundred or so MCU-based modules, even 15 years from now? I’m betting the answer is “no” or “very unlikely,” but we’ll have to wait and see how that story unfolds.

The issue of unavailable parts is not limited solely to automobiles, although that is the largest and most visible application. Unlike most consumer products, there are many areas where useful lives of 20, 30, and more years are expected. Among these are industrial applications, railways, mil/aero, critical infrastructure, and even some home systems such as HVACs.

The challenge of replacement parts and their relatively low volume is not being ignored, as the ST announcement shows. The U.S. Defense Microelectronics Agency (DMEA) has instituted an Advanced Technology Supplier Program V (ATSP V) with 13 companies that, among other objectives, includes approaches to developing and creating components in ultra-low volumes for repair and replacement.

What about “analog”?

With all these legitimate concerns about long-term component availability, one interesting fact stands out: unlike digital ICs and processors, the analog world has a different mindset. Analog-circuit designers tend to stick with a component that they have used successfully, even if it’s a few years old and could easily be replaced by a nominally better part.

There are several reasons for this tactic. Once an analog part is in the signal chain and meeting specs, there’s a reluctance to take a chance on a new part and design which may have unknown issues and idiosyncrasies. Factors such as parasitics, layout, and power-supply sensitivity (to cite a few) will likely affect design validation, in contrast to the field experience with the existing design.

There are classic analog parts that have been available for decades, and while not recommended for new designs, they are still available if needed for repair, replacement, or even a newer design. Even better, if they are not available, there is often a drop-in replacement with superior performance; this is especially the case for basic 8-pin op amps.

I can think of three “ancient” analog components as examples:

  • The AD574 “complete” 12-bit A/D converter from Analog Devices, introduced in the 1978–1980 timeframe, became the industry-standard ADC for microprocessor interfacing (Figure 2). It was notable for integrating a buried Zener reference, a clock, and 3-state output buffers for direct 8/16-bit bus interfacing. While its die and process have been upgraded and it’s now available in other packages, you can still get it in the original 28-pin housing.

Figure 2 The 12-bit ADC was the first complete unit with “tight” specifications and is still offered 45 years after its initial release. Source: Analog Devices

  • The INA133 instrumentation amplifier from Burr-Brown was introduced around 1998 (Burr-Brown was acquired by Texas Instruments in 2000), and it’s still offered in a variety of packages and grades by TI (Figure 3). Like AD574, it’s not recommended for new designs; you can see its top-tier specifications on page 40 of the 2000 Burr-Brown Product Selection Guide.

Figure 3 Burr-Brown’s INA133 instrumentation amplifier provided excellent performance with modest power requirements and has been continuously available since its introduction in 1998. Source: Texas Instruments

  • Finally, we can’t overlook the 555 timer-oscillator-multivibrator, a clear contender as one of the most classic components of all time and the longest-lived along with the 741 op amp (Figure 4). Devised by Hans Camenzind and marketed as an 8-pin DIP by Signetics in 1971, it’s still available in many versions, including duals and quads as well as CMOS variations. Despite its age, it’s often used to solve annoying timing and oscillator problems at low cost, and there are many “cookbooks” showing innovative ways in which it can be used.

Figure 4 It’s very likely that no IC has spawned more creative and clever design ideas and handbooks and solved as many circuit problems as the 555 timer-oscillator-multivibrator. Source: Wikipedia

There are others, of course, such as the 60-year-old 2N3905 or 2N2222 transistors—it doesn’t get more basic than that.

While many analog components have a long and viable life with their original or descendent vendors, there is even a solution for the many cases where that source does not want to manufacture or support that IC forever. Companies such as Rochester Electronics work out a formal arrangement and license to take over the rights, tooling, support, and test procedures for the parts. Users who need the part don’t need to consider grey-market or even counterfeit products; instead, they get ICs which are 100% legitimate but via a different supplier.

ST’s announcement is welcome, of course. I wish that more vendors would make that sort of commitment, difficult as it may be, or at least commit to licensing unwanted products to non-competing vendors. For now, if you want long-term continuity, stick with analog parts as much as possible.

Have you ever had to deal with repairing a product having electronic components that were no longer available, or even doing regular production on a long-lived product where you needed more than just a few? Did you find parts, or did you have to do a full redesign? How painful was that process?

Related Content

The post Analog IC longevity is an underappreciated reality appeared first on EDN.

Risk assessment in the workplace

Fri, 03/06/2026 - 15:00

Risks come in more than one form. There are risks that arise from science and technology, and there are risks that arise from human motivations, which are not always of an obvious sort. This is about the latter.

I had a client company that was owned by a husband and wife for whom I had once solved a power supply thermal runaway problem. I had measured temperature rise versus time and temperature fall versus time, and of course, the two were not exactly the same. Their difference was quite pronounced when I first looked at the issue, but they were almost identical to each other after I had solved the problem. If you’re curious about that, please see the How2Power article here.

A couple of years went by, and I got a call from that same company about a different power supply that also seemed to have a thermal runaway problem. By then, sadly, the husband had passed away, and only the wife remained to run the business.

During that first engagement, the wife had displayed a hair-trigger temper. Any moment of uncertainty as events unfolded would set off a raging torrent from her, which her husband would then work hard to defuse. I would hear lines like “It’s okay. It’s oh-kay! Please relax. Things are going well,” after which she would go silent. Now, though, she no longer had anyone to give her that reassurance when it was needed.

An employee who had been promoted to Chief Engineer was my new point of contact. I explained to him that I would examine the thermal rise and thermal fall characteristics of this new power supply to see whether the same situation applied as in the first case.

“There’s no need for that. I’ve already made those measurements.” He handed me a sheet of paper with columns of numbers, purportedly the data I had planned to acquire. That night, I examined those numbers and discovered that if you plotted the thermal rise and inverted a plot of the thermal fall, the two curves precisely lined up and were EXACTLY the same!! There was absolutely zero difference. They were totally spot on, no ifs, ands, buts, hows, whys, or wherefores, exactly the same, which meant that the rising and falling curves given to me were not the results of actual testing. They were false.
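For anyone who wants to run the same sanity check on their own data, here’s a minimal sketch of the idea in Python. The numbers are simulated, not the client’s data: with genuinely different rise and fall time constants plus a little measurement noise, the heating curve and the time-reversed cooling curve do not line up, which is exactly the imperfection a fabricated mirror-image data set fails to show.

```python
# Illustration only: overlay a thermal-rise trace against its thermal-fall
# trace, flipped about the temperature span so the two can be compared.
# Real hardware has different rise/fall time constants plus measurement noise,
# so the residual is clearly nonzero; a residual of exactly zero is a red flag.
import math
import random

TAU_RISE, TAU_FALL = 480.0, 720.0   # seconds; deliberately different
T_AMB, T_FINAL = 25.0, 85.0         # degrees C
SAMPLES, DT = 60, 30.0              # one reading every 30 s

random.seed(1)
rise = [T_AMB + (T_FINAL - T_AMB) * (1 - math.exp(-n * DT / TAU_RISE))
        + random.gauss(0, 0.2) for n in range(SAMPLES)]
fall = [T_AMB + (T_FINAL - T_AMB) * math.exp(-n * DT / TAU_FALL)
        + random.gauss(0, 0.2) for n in range(SAMPLES)]

# Reflect the fall curve so that it would overlay the rise curve only if
# the two time constants (and the noise) were identical.
inverted_fall = [T_AMB + T_FINAL - t for t in fall]
max_residual = max(abs(r - f) for r, f in zip(rise, inverted_fall))
print(f"Maximum rise-vs-inverted-fall residual: {max_residual:.1f} deg C")
```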

Confronted with a Chief Engineer whom I then knew to be dishonest and confronted with the woman whom I knew to be extremely volatile and prone to bursts of rage, I assessed the risk of dealing with it all to be unacceptable.

I made up some excuse (I don’t remember what it was) and declined to offer my services.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Risk assessment in the workplace appeared first on EDN.

Last-level cache has become a critical SoC design element

Fri, 03/06/2026 - 11:50

As AI workloads extend across nearly every technology sector, systems must move more data, use memory more efficiently, and respond more predictably than traditional design methodologies allow. These pressures are exposing limitations in conventional system-on-chip (SoC) architectures as compute becomes increasingly heterogeneous and traffic patterns become more complex.

Modern SoCs integrate CPUs, GPUs, NPUs, and specialized accelerators that must operate concurrently, placing unprecedented strain on memory hierarchies and interconnects. Keeping processing units fully utilized requires high-bandwidth, low-latency access to data, making the memory hierarchy as critical to overall system effectiveness as raw compute performance.

On-chip interconnects move data quickly and predictably, but once requests reach external memory, latency increases, and timing becomes less consistent. As more data accesses go off chip, the gap between compute throughput and data availability widens. In these conditions, processing engines stall while waiting for memory transactions to complete, creating data starvation.

The role of last-level cache

To mitigate this imbalance, SoC designers are increasingly turning to last-level cache (LLC). Positioned between external memory and internal subsystems, LLC stores frequently accessed data close to compute resources, allowing requests to be served with significantly lower latency.

Unlike static buffers, an LLC dynamically fetches and evicts cache lines based on runtime behavior without direct CPU intervention. When deployed effectively, this architectural layer delivers measurable benefits, including substantial reductions in external memory traffic and power consumption.
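To make that fetch-and-evict behavior concrete, here’s a minimal sketch of a set-associative cache with LRU replacement, exercised by a synthetic address trace. It’s an illustration only, not any vendor’s implementation; the capacity, line size, associativity, and trace statistics are arbitrary assumptions.

```python
# Minimal set-associative cache model with LRU eviction (illustration only).
# A hardware LLC performs the equivalent lookup, fill, and eviction in
# dedicated logic, with no CPU involvement.
import random
from collections import OrderedDict

class SimpleLLC:
    def __init__(self, size_bytes, line_bytes=64, ways=8):
        self.line_bytes, self.ways = line_bytes, ways
        self.num_sets = size_bytes // (line_bytes * ways)
        self.sets = [OrderedDict() for _ in range(self.num_sets)]  # LRU order per set
        self.hits = self.misses = 0

    def access(self, addr):
        line = addr // self.line_bytes
        set_idx, tag = line % self.num_sets, line // self.num_sets
        ways = self.sets[set_idx]
        if tag in ways:
            ways.move_to_end(tag)          # hit: refresh LRU position
            self.hits += 1
        else:
            self.misses += 1               # miss: line fetched from external memory...
            if len(ways) >= self.ways:
                ways.popitem(last=False)   # ...evicting the least-recently-used line
            ways[tag] = True

# Synthetic trace: 80% of accesses hit a "hot" working set, the rest are random.
random.seed(0)
llc = SimpleLLC(size_bytes=4 * 1024 * 1024)
hot_set = [random.randrange(0, 2 * 1024 * 1024) for _ in range(2000)]
for _ in range(50_000):
    addr = random.choice(hot_set) if random.random() < 0.8 else random.randrange(0, 1 << 30)
    llc.access(addr)
print(f"Hit rate: {llc.hits / (llc.hits + llc.misses):.1%}")
```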

Simply including an LLC does not guarantee improved performance. Configuring the cache correctly is a complex task that must account for workload characteristics, compute-unit behavior, and real-time constraints. Poorly chosen parameters can waste area without meaningful gains, while under-provisioned configurations may fail to alleviate memory bottlenecks.

Architects must carefully determine cache capacity, the number of cache instances, and internal banking structures to support sufficient parallelism. Partitioning strategies must also be defined to ensure that individual IP blocks receive the bandwidth and predictability they require. While some settings can be adjusted later through software, foundational decisions on cache size, banking, and associativity must be finalized early in the development cycle.

The role of the last-level cache in successful SoC designs. Source: Arteris

Factors influencing cache behavior

Banking configuration illustrates this trade-off clearly. Increasing the number of cache banks improves internal parallelism and throughput, but it also increases silicon area. Workloads with largely sequential access patterns may see limited benefit from aggressive banking.

In contrast, highly parallel workloads, especially those driven by AI accelerators or GPUs, require substantial internal concurrency to maintain utilization. Because these characteristics vary by application, banking decisions must be informed by realistic workload analysis during the architectural phase.

Cache capacity is just as important. A cache that is too small struggles to achieve acceptable hit rates, pushing excessive traffic to external memory. Conversely, oversizing the cache often yields diminishing returns relative to the additional area consumed. The optimal balance depends on actual runtime behavior rather than theoretical assumptions.
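One quick way to see those diminishing returns is a back-of-envelope average memory access time (AMAT) calculation, sketched below. The hit rates and latencies are hypothetical placeholders chosen only for illustration; in a real flow they would come from trace-driven simulation of the actual workloads.

```python
# Back-of-envelope AMAT sweep showing diminishing returns from added capacity.
# All numbers below are illustrative assumptions, not measured data.
LLC_HIT_NS = 20.0        # assumed LLC hit latency
DRAM_PENALTY_NS = 120.0  # assumed extra penalty for going out to DRAM

hypothetical_hit_rates = {   # LLC capacity (MB) -> assumed hit rate
    4: 0.55, 8: 0.68, 16: 0.78, 32: 0.84, 64: 0.87,
}

for size_mb, hit_rate in hypothetical_hit_rates.items():
    amat = LLC_HIT_NS + (1.0 - hit_rate) * DRAM_PENALTY_NS
    print(f"{size_mb:3d} MB LLC: hit rate {hit_rate:.0%} -> AMAT {amat:5.1f} ns")
# Each doubling of capacity buys progressively less AMAT improvement,
# while the silicon area roughly doubles at every step.
```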

In practice, acceptable hit rates vary widely. Some systems can tolerate moderate miss rates if latency and power reductions outweigh the cost, while real-time applications demand consistently high hit rates to maintain deterministic behavior.

This variability underscores why no single LLC configuration is universally optimal. Mobile devices may require only a few megabytes of cache to balance power efficiency and responsiveness. At the same time, servers and HPC platforms often deploy tens or hundreds of megabytes to reduce DRAM pressure. Despite these differences, successful designs rely on a common principle in which cache parameters are derived from the workloads the system will actually execute.

Managing shared caches

Diversity in system demands further complicates how an LLC must be structured. Automotive chips built around concurrent vision processing and strict timing requirements operate under very different constraints than data-center platforms optimized for accelerator-heavy inference at scale. Even within a single chip, CPUs, accelerators, and I/O subsystems generate distinct access patterns with different latency sensitivities.

The LLC must accommodate all of them without allowing one workload to interfere with another’s real-time guarantees. This makes early understanding of system-level access behavior essential, since cache configuration otherwise becomes speculative at best.

Partitioning provides a powerful mechanism for preserving determinism in such environments. By allocating portions of cache capacity to specific clients, architects can prevent high-bandwidth workloads from starving latency-sensitive subsystems. This capability is particularly critical in environments that must meet strict timing guarantees. Partition sizes must be tuned carefully, as oversizing wastes area while undersizing risks violating latency requirements.
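As a simplified illustration of that idea, the sketch below statically assigns cache ways to a handful of hypothetical clients, guaranteeing each a minimum allocation before handing the remainder to the most bandwidth-hungry requester. Real partitioning controls are vendor-specific and considerably richer than this.

```python
# Toy static way-partitioning policy for a shared LLC (illustration only;
# client names, minimums, and bandwidth ranks are hypothetical).
TOTAL_WAYS = 16

clients = [
    # (name, minimum ways for its latency/determinism target, bandwidth rank)
    ("vision_pipeline", 4, 2),
    ("cpu_cluster",     3, 1),
    ("npu",             2, 3),
    ("io_subsystem",    1, 0),
]

# Guarantee every client its minimum so bulk traffic cannot starve it...
allocation = {name: min_ways for name, min_ways, _ in clients}
spare = TOTAL_WAYS - sum(allocation.values())

# ...then give the leftover ways to the highest-bandwidth requester.
busiest = max(clients, key=lambda c: c[2])[0]
allocation[busiest] += spare

for name, ways in allocation.items():
    print(f"{name:>15}: {ways:2d} of {TOTAL_WAYS} ways")
```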

Configuring a last-level cache is ultimately a multidimensional challenge shaped by workload demands, compute topology, latency requirements, and silicon constraints. Achieving the right balance between performance, determinism, power, and area depends on understanding how an SoC behaves under real operating conditions.

To address this, SoC teams increasingly rely on system-level simulation using realistic data flow profiles generated by multiple on-chip request sources. This approach allows teams to evaluate cache behavior before key architectural decisions are finalized. It helps identify bottlenecks, validate cache sizing, and determine when isolation mechanisms such as partitioning are required to preserve real-time guarantees.

Arteris developed its CodaCache IP, which operates as a configurable last-level cache between on-chip initiators and different types of external memory, such as DDR DRAM, HBM, and even NVM for execute-in-place (XIP) use cases. With CodaCache, architects can equip their SoC fabric with the optimal configuration to address intelligent, scalable, and automated data management in a wide range of applications.

Andre Bonnardot is product marketing manager at Arteris.

Related Content

The post Last-level cache has become a critical SoC design element appeared first on EDN.

Apple’s spring 2026 soirée: The rest of the story

Fri, 03/06/2026 - 01:44

With smartphone and tablet news already discussed, what else did Apple unveil this week? Read on for all the goodies and their details.

As I teased at the end of my prior piece, computers and displays were also on the plate for Apple’s “big week of news” announcements suite. With today’s (as I write this on Wednesday in the late afternoon) New York, London, and Shanghai “Experience” in-person events now concluded:

(No, alas, I wasn’t invited)

I’m guessing that Apple’s wrapped up its rollouts for now, therefore compelling me to revisit my keyboard for concluding part 2. That said, I realized in retrospect that there was one additional earlier hardware announcement that, had I remembered at the time (and in time), I would have also included in part 1, since it also covered mobile devices. So, let’s start there.

AirTag 2

In late April 2021, Apple introduced its first-generation AirTag trackers, leveraging Bluetooth LE connectivity to mate them with owner-paired smartphones and tablets and, when a tagged item goes missing, with the broader Find My crowdsourced network ecosystem to assist in identifying their whereabouts and monitoring their movements. Integrated ultrawideband (UWB) support, when also comprehended by the paired mobile device, affords even more precise location discernment (i.e., not just somewhere in the living room, but having fallen between the sofa cushions). And built-in NFC support lets anyone who finds a tag (and whatever it’s attached to) notify the person it belongs to. Here’s my first-gen teardown.

Nearly five years later, and quoting Wikipedia:

An updated model with the U2 chip, upgraded Bluetooth, and a louder speaker was released in January 2026 [editor note: Monday the 26th, to be precise]. It has enhanced range for precision detection with iPhones equipped with a U2 chip such as the iPhone 15/Pro or later (excluding iPhone 16e), and also allows an Apple Watch with a U2 chip such as the Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later (excluding Apple Watch SE), to precisely locate items.

Now fast-forwarding a month-plus to this week’s announcements…

The M5 Pro and Max SoCs

2.5 years back, within my coverage of Intel’s then leading-edge and first-time chiplet-implemented Meteor Lake CPU architecture:

I noted that the company was, to at least some degree, following in the footsteps of AMD and Apple, both having already productized chiplet-based designs. In AMD’s case, I was on solid footing with my stance, as the company had already been embedding and interconnecting discrete processors, graphics, and other logic circuits for several years. In Apple’s case, conversely, my definition of a chiplet implementation was a bit more loosey-goosey, at least at the time:

Above is a de-lidded photo of Apple’s M1 SoC. At left is the single-die implementation of the entirety of the logic circuitry, plus cache. And on the right are two DRAM memory chips. Admittedly, the “Ultra” variant of the eventual M1 product family, at far right:

upped the ante a bit more, “stitching together two distinct M1 Max die via a silicon interposer”. But I’ve long wondered when Apple would go “full monty” on disaggregation, mixing-and-matching various slivers of logic silicon attached to and interconnected via a shared packaging substrate, to keep each die’s dimensions to a reasonable manufacturing-yield size and to afford fuller implementation flexibility. To wit, the points I made back in September 2023 remain valid:

  • Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
  • That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
  • Desirable variability in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance), and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhances the appeal of a multi- vs monolithic-die IC implementation.

That time is now, branded as the “Fusion Architecture” and ironically foreshadowed by a then-subtle Apple online store tweak a month ago. Quoting from the press release subhead:

M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture that connects two dies with advanced IP blocks into a single SoC, delivering significant performance increases that push the limits of what’s possible…

In an interesting twist from the past, this time the two product proliferations seemingly share a common processor die, although the variety and number of guaranteed-functional cores varies both between the two devices and within a given device’s binning variants. Conversely, the graphics core counts diverge more substantially between the two devices. To some degree this is reflective of the high-end “Max” device’s professional content creator target demographic, although I’d wager that it more broadly affords more robust on-device deep learning inference capabilities in conjunction with the chips’ presumed-still-existent neural processing cores. And what of an “Ultra” variant of the M5…is it on the way? Maybe.

Tomato, tomahto

Speaking of cores, by the way…sigh. Look back at my M5 SoC (and initial devices based on it) coverage from last October, and you’ll see that, just as with prior generations of both A- and M-based Apple-developed silicon, it contains a mix of both performance (speed-optimized) and efficiency (power consumption-tuned) cores. Here’s the specific press release quote again:

M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.

All well and good; the Arm-developed architecture analogy is big.LITTLE. Revisiting that page on Arm’s website just now, however, I curiously noticed that whereas it historically called out two different types of cores, now there are apparently three. Check out the subhead:

Arm big.LITTLE technology is a heterogeneous processing architecture that uses up to three types of processors. LITTLE processors are designed for maximum power efficiency, while big processors are designed to provide efficient, sustained compute performance.

Keep in mind that Apple is an Arm architecture licensee, so it develops its own (still instruction set-compatible, of course) cores. That said, beginning with the M5 Pro/Max processing chiplet, Apple has also developed a third core, this one an intermediate half-step between the performance and efficiency endpoints. You might think that Apple would call this new one the “balanced” core, say. But alas, you’d be wrong. Here’s long-time Apple observer Jason Snell, quoted in a post from another Apple prognosticator, “graybeard,” John Gruber:

With every new generation of Apple’s Mac-series processors, I’ve gotten the impression from Apple execs that they’ve been a little frustrated with the perception that their “lesser” efficiency cores were weak sauce. I’ve lost count of the number of briefings and conversations I’ve had where they’ve had to go out of their way to point out that, actually, the lesser cores on an M-series chip are quite fast on their own, in addition to being very good at saving power! Clearly they’ve had enough of that, so they’re changing how those cores are marketed to emphasize their performance, rather than their efficiency.

What did Apple decide to do instead, including a retrofit of published M5 documentation?

  • The prior-named “Performance” core is now instead called, believe it or not, “Super.”
  • The “Efficiency” core retains its original name, for a brief moment of sanity.
  • And the new in-between “balanced” core? It’s the recycled “Performance” moniker.

The following summary table originated with another recent John Gruber post; I’ve simplified the SoC options, reordered the CPU core columns, and added a column for GPU core counts:

          CPU (Super)   CPU (Performance)   CPU (Efficiency)   GPU
M5        3-4           N/A                 6                  8-10
M5 Pro    5-6           10-12               N/A                16-20
M5 Max    6             12                  N/A                32-40

That’s just…super. Sigh.

(More) M5 MacBook Pros

(nifty video animation, eh?)

“Super” SoCs inside aside, the new 14” and 16” MacBook Pros are effectively identical to their M4-based forebears (note that the sole M5 version initially announced last fall was the 14” model). The only other items of particular note both involve memory. Baseline and upgraded DRAM capacity option prices remain the same as last time, despite current industry memory supply constraints; an upper-end 64 GByte option for the M5 Pro has even been added. And regarding flash memory, Apple has obsoleted last November’s entry-level 512 GByte SSD option for the baseline 14” M5 MacBook Pro, making the new capacity starting point for that product (1 TByte) more expensive than before. That said, it’s now $100 lower than the 1 TByte variant price at intro just a few months ago, and capacity-upgrade prices have also decreased.

The M5 MacBook Air(s)

Here’s another example of not being able to tell, based solely on external appearances, which generation of devices you’re looking at. Coming, as with its M3- and M4-based forebears, in both 13” and 15” versions, the M5 MacBook Air also upgrades to Apple’s N1 network connectivity chip. But, speaking once again of (flash, specifically) memory, and akin to the product line option slimming for the 14” M5 MacBook Pro mentioned in the prior section, the lowest-available capacity for the new devices is 512 GBytes, versus 256 GBytes in the previous generation. I’m guessing that the reasoning is two-fold this time: as with the 14” M5 MacBook Pro’s option-culling, the company’s “hiding” its higher flash memory costs by only offering more profitable capacity choices to customers. Plus, by doing so, Apple can more clearly differentiate the MacBook Air from its other products. Speaking of which…

The MacBook Neo

I’ll kick off this section with a few history lessons. Back in 2015, Apple introduced the “new MacBook” (also commonly referred to as the 12” MacBook), with a Retina-resolution display and based on Intel m-series (and later, i-series) CPUs. It slotted between the then-non-Retina MacBook Air and the high-end MacBook Pro in Apple’s product portfolio from a pricing standpoint, even though its processing performance undershot that of the notably less expensive MacBook Air. Plus, it was hampered by the unreliable “butterfly” keyboard. It was discontinued after only three hardware iterations and four years of production.

In addition to its unfavorable price comparison to the MacBook Air, the “new MacBook” was also still competing to a degree against then-popular Windows-based “netbooks”, which were even lower priced. Back in late 2008, then-CEO Steve Jobs had (in)famously quipped regarding netbooks, “We don’t know how to make a $500 computer that’s not a piece of junk.” Hold that thought.

My last history lesson is, conversely, a Steve Jobs success story. Back in mid-1999, two years (and change) after Jobs’ return to Apple and less than a year after launching the consumer-tailored iMac desktop, Apple unveiled the iBook laptop:

which came in multiple eye-catching, intentionally non-“business” color options:

Quoting Wikipedia:

The line targeted entry-level, consumer and education markets, with lower specifications and prices than the PowerBook, Apple’s higher-end line of laptop computers. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.

Look again at the image of the iBook’s color options. Now look at the photo at the beginning of this section. See where I’m going?

The newly unveiled MacBook Neo comes in two price tiers: $599 (with a further $100 discount for education customers; take that, Chromebooks) and $699. The higher-end variant gets you twice the SSD capacity—512 GBytes versus 256 GBytes—along with a Touch ID fingerprint reader built into the keyboard. That’s it. 8 GBytes of DRAM, with no upgrade option. No Thunderbolt, only two USB-C ports, one of them supporting only USB 2 speeds. The first-time use of an A-series processor, the (Apple Intelligence-capable) A18 Pro (albeit with one fewer graphics core enabled than the initial version in the iPhone 16 Pro series); that said, it seems to benchmark (at least) roughly on par with the M1 that until recently was still being sold by Walmart in the MacBook Air. And a networking subsystem rumored to come from MediaTek, versus developed internally.

In closing, at least for this section: what’s with the name? Some folks had forecasted that it’d just be called the “MacBook”, but as I’ve already noted, that particular name is now “damaged goods”. Others thought that an “iBook” resurrection was in the cards, but Apple stopped referring to devices via “i” monikers a while ago. That said, “Neo” was definitely not on my bingo card. Maybe someone in Cupertino is a fan of The Matrix, but thought that “MacBook Mr. Anderson” would be too ponderous?

Displays

Having already passed through 2,000 words, I’m going to keep this section short. Apple announced two new Studio Display models, its first updates to this particular product category in many years. They’re both 27” in size, with 5K Retina resolutions, although their refresh rates, dynamic ranges, and other image quality measures vary. The “inexpensive” one starts at $1,599, with its pricier sibling beginning at $3,299; both are available in standard or (upgrade) nano-texture glass options, and mounting and other accessories are also available. And interestingly, at least to me, they don’t work with legacy Intel-based Macs, even the scant few models (one of which I’m currently typing on) that are still supported by MacOS 26. For more details, check out the press release.

And what about…

The M5 Mac mini, whose possibility I alluded to yesterday? Didn’t happen, even though the current M4-based models are popular with the agentic AI enthusiast community (and others). That said, in revisiting my prognostication yesterday afternoon, I remembered that Apple had also skipped the M3 Mac mini generation, and that the time-consuming form factor redesign from the M2 to the M4 might at least partly explain that gap.

And what of the upgrade to the “vanilla” iPad that lots of folks were forecasting would happen this week? Another nope. The primary rationale here was that it was the only remaining member of Apple’s current product line whose CPU (the A16) doesn’t support Apple Intelligence. But there was no evidence of the telltale indicator of a new product’s arrival: depleted retail inventories of the current model. My guess: Apple will be happily talking about AI again at this year’s WWDC, now that Google’s on board as the company’s development partner, and that’d be a perfect time to announce the “iPad 12”…or maybe “iPad Neo”? I jest (I hope).

Time to put down my cyber-pen and turn it over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Apple’s spring 2026 soirée: The rest of the story appeared first on EDN.
