Demystifying 3D ICs: A practical framework for heterogeneous integration

For decades, the semiconductor industry has relied on the relentless pursuit of Moore’s Law—the doubling of transistors on an IC every two years—to deliver ever-increasing performance and functionality. This traditional approach, primarily focused on scaling individual transistors and integrating more components onto a single, monolithic 2D die, has driven innovation across countless industries.
However, as we approach the physical limits of silicon, and the economic realities of advanced process nodes become increasingly prohibitive, the conventional path of monolithic scaling is facing significant roadblocks. Companies are encountering diminishing returns in terms of performance gains, escalating design and manufacturing costs, and challenges in integrating diverse functionalities onto a single chip without compromising yield or power efficiency.
In response to these growing pressures, a fundamental shift is occurring in chip design: the move toward 3D ICs and heterogeneous integration. This paradigm offers a compelling alternative, allowing companies to overcome the limitations of traditional 2D scaling by integrating multiple specialized chiplets—each potentially manufactured on different process technologies and optimized for specific tasks—into a single, advanced package.
Beyond raw performance, the shift to 3D IC offers benefits in design flexibility, manufacturing economics, and form factor by mixing dies manufactured on different process nodes. This modularity enables the use of cutting-edge processes only where absolutely necessary for performance, while leveraging more mature, cost-effective nodes for other functions. This approach also facilitates the creation of smaller, more integrated systems, crucial for devices where space is at a premium.
The unique challenges of advanced packaging
The shift to 3D IC advanced packaging isn’t without its complexities. Heterogeneous integration introduces a new set of design challenges that traditional monolithic approaches simply didn’t encounter. Existing design tools and methodologies are insufficient for the scale and complexity of heterogeneous integration.
With 3D IC design now featuring hundreds of thousands to millions of connections, it’s impractical to use manual methods like spreadsheets to manage the intricate connectivity and interactions between 3D layers.
3D IC designers also face the daunting task of managing a myriad of diverse IP and data formats. Source data for connectivity is supplied in a multitude of formats, including CSV files, LEF/DEF, GDS, Verilog RTL, and plain text files.
Integrating multi-vendor chiplets heightens the need for standardized, machine-readable design models to ensure interoperability across different EDA tool workflows. Furthermore, 3D IC designs typically include multiple dies from different foundries and processes, increasing the risk of failures and making those failures harder to identify and fix.
Because data is often dynamic, with updates arriving throughout the design process, incorporating new versions of design IP threatens to overwrite existing data, especially when IC and package designers work concurrently. Designers must therefore be able to accept input from various stakeholders—often designing their content concurrently—to create a design that is both electrically and physically correct.
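As a concrete illustration of that data-management challenge, here's a minimal sketch (plain Python, with hypothetical field names and CSV columns; it reflects no specific EDA tool's schema) of normalizing connectivity from one source format into a common record type and merging an updated IP drop without clobbering prior data:

```python
# Minimal sketch of a unified connectivity model. Field names, the CSV layout,
# and the merge policy are hypothetical, for illustration only.
import csv
from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    signal: str    # logical net name
    src_die: str   # driving die/chiplet instance
    src_pad: str   # pad or bump identifier
    dst_die: str
    dst_pad: str

def load_csv_connectivity(path: str) -> list[Connection]:
    """Ingest one source format; LEF/DEF or Verilog readers would emit the same records."""
    with open(path, newline="") as f:
        return [Connection(row["signal"], row["src_die"], row["src_pad"],
                           row["dst_die"], row["dst_pad"])
                for row in csv.DictReader(f)]

def merge(base: dict[str, Connection], update: list[Connection]) -> dict[str, Connection]:
    """Apply a new IP drop without destroying prior data: last writer wins, per signal."""
    merged = dict(base)  # keep the original intact for diffing and rollback
    merged.update({c.signal: c for c in update})
    return merged
```

The essential point is the single canonical record type: every format-specific reader converges on it, so concurrent updates become mergeable data rather than spreadsheet chaos.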
Ensuring the integrity and functionality of these complex systems demands comprehensive system-level verification, not just individual component checks. To truly harness the immense power of heterogeneous integration and confidently navigate these multifaceted challenges, a robust, systematic, and proven framework is not just beneficial—it’s foundational. Without a clear roadmap, design teams risk costly iterations, delayed time-to-market, and sub-optimal product performance.
System technology co-optimization: The key to efficient 3D IC design
System technology co-optimization (STCO) is exactly that foundational framework: an advanced, holistic methodology that elevates optimization beyond the considerations of a single die. Instead of narrowly tuning devices at the wafer or chip level—a practice known as device technology co-optimization (DTCO)—STCO allows for the optimization of power, performance, area, cost, and reliability across various components as a unified whole, including silicon, packages, interposers, PCBs, and even mechanical components.
Thus, STCO provides the system-centric framework needed for organizations to stay ahead of the curve in 3D IC design, maximizing value, minimizing risk, and unlocking new levels of competitive differentiation.
STCO breaks down silos that historically separated silicon, package, and board design, and it leverages system-level analysis to guide critical decisions—such as chiplet partitioning, placement, interconnect planning, and assembly verification—early in the design flow. This integrated approach not only reveals downstream issues much sooner but also enables “shift-left” validation and optimization, preventing costly respins and delays.
The strategic benefits of STCO are profound for organizations embracing 3D IC design. Companies can realize shorter design cycles with fewer iterations and handoffs, thanks to continuous verification and ongoing feedback between domains.
Cross-functional teams—from system architects to packaging, DFT, and manufacturing engineers—can observe interdependencies and work together to resolve them proactively. This leads to faster time-to-market, improved first-pass yield, and the ability to confidently deliver innovative, heterogeneous products that meet aggressive performance requirements.
Mastering heterogeneous integration: Your expert guide
This is precisely where the Heterogeneous Integration eBook series becomes a handy guide. This eBook series doesn’t just describe the challenges, it provides a comprehensive, actionable methodology to overcome them.
This robust 10-step methodology for heterogeneous integration, formulated by the author of this article, guides designers through the entire process: from the initial creation of the 3D digital twin and system-level planning to detailed design optimization, rigorous verification, and final sign-off. Following this methodology gives designers a streamlined, predictable path to robust advanced package development.
Designers gain expert insights into building a complete digital model, optimizing physical layouts, ensuring robust verification, and preparing designs for successful manufacturing. The series is structured into four eBooks, each focusing on a critical stage of the heterogeneous integration journey—from initial 3D Digital Twin Creation and Assembly Floorplanning, through Scenario Completion, and finally to the crucial Signoff phase—empowering design teams with the knowledge and best practices to confidently lead the next wave of chip innovation.
If you’re ready to move beyond outdated methodologies and truly unlock the power of 3D IC and heterogeneous integration, now is the time to act. The Heterogeneous Integration eBook Series offers not just theory, but a proven framework to help conquer the formidable challenges of advanced packaging.
Don’t let complexity stand in the way—arm yourself with strategies for system-level optimization, cross-domain collaboration, and predictable first-pass success.
Keith Felton is marketing manager for Xpedition IC packaging solutions at Siemens EDA. Working extensively in IC package design since the late 1980s, Keith drove the launch of the industry’s first dedicated system-in-package design solution in the early 2000s and led the team that launched Siemens OSAT Alliance program.
Special Section: Chiplets Design
- What the special section on chiplets design has to offer
- Chiplet innovation isn’t waiting for perfect standards
- Scoping out the chiplet-based design flow
A scale that tells inconsistent-weight tales

When a bathroom scale gives you multiple different weight-measurement results from consecutive usage attempts, is it cheating if you pick the lowest outcome of the lot?
Two years ago (with publication following a few months later), I took apart my wife’s fancy bathroom scale, which measured not only weight but also body mass index and fat percentage:

but whose LCD had gone AWOL and had subsequently been replaced by a simpler successor. Speaking of simple, this time we’ll look at the insides of my first digital bathroom scale, which replaced a traditional mechanical forebear. It’s Innotech’s model ID-767, the black-colored variant to be exact, which I’d bought on sale for $14.99 from Amazon in spring 2018.
Simpler vs. better
Stock images to start:

No, I didn’t keep mine next to the bed:

Hey loser, don’t you want to be a weight “losser” too?

About those “error-free readings within 0.2 lb” and “accurately weighs up to 400 lb” claims…

There was much to like about the Innotech model ID-767. It was svelte and light, with long battery life. It responded quickly when I stepped on it. And I liked its looks, too. Accuracy, on the other hand, was not its strong suit. I very well might have had a bad unit. But if I stepped on it, read the display, then stepped off and repeated the procedure, my second result would be consistently inconsistent, varying from the first by several pounds (albeit always down). And I never knew which reading to believe. The saying “you get what you pay for” perhaps applies?
And then it decided to take a spontaneous swan dive off the counter (where I’d placed it while cleaning the bathroom one day) to the tile floor below, resulting in my not liking its looks as much as before:

You’ll have to trust me when I tell you that its measurement inconsistency predated the dent!
So, I decided to retire it; more accurately, replace it (meh):

and turn it into a teardown candidate.
Incriminating reflections
Here are some overview shots to start. I have no idea who that is reflected in the first one…and speaking of weight, I’d also appreciate no snide comments about that poor person’s bulbous soft waistline, please:

The short URL printed on this sticker, accompanied as usual by a 0.75″ (19.1 mm) diameter U.S. penny for size-comparison purposes, is presumably intended to redirect here, but it no longer works, at least when I tried it:

This switch, when repeatedly pressed, toggles between the “3 weight units” featured in one of the earlier-seen stock photos: pounds, kilograms and rarely-seen stones:

A power source of three widely available AAA batteries (my kitchen scale, conversely, takes CR2032 coin cells, as I was reminded the other night when I replaced one of the pair) is a nice touch:

Time to dive inside. Underneath each of the rubber “feet” is, per the “4 weighing sensors” callout in one of the earlier stock images, a strain gauge load cell. I discussed these in detail back in July 2024, so I’ll spare you the repetitive prose; check out my earlier teardown for all the details.

It’s delightfully wiggly (and yes, admittedly, I’m easily amused):
But underneath…nope, no screw heads:

So, I redirected my attention to the scale’s sides, a decision which ended up leading to success:

Voilà:

Boring part first; here’s the inside of the lower half of the scale:
Next, the good stuff:
The first things you probably noticed were the four load cells in the corners (or maybe you saw the display-plus-PCB, in which case, please stand by; your patience is appreciated). Here they are in clockwise order, starting with the one in the upper left (upper right when the scale is in its normal usage orientation):
Here’s the first one again, being removed:
and now flipped upside down (the strain gauge structure is presumably underneath the glue):
Now for the stuff in the center (see, your patience was quickly rewarded!), the PCB, with this side showing nothing notable save for the weight-unit toggle switch:
and the next-door LCD:
Remove a few screws, and they’re free!
Now flip both 180°:
Dominating the landscape on this side of the PCB is…a blob, unfortunately obscuring the identity of the control chip. Generally speaking, considering the price tag, and therefore the bill-of-materials cost constraints, this design is impressively sparse:
The backside of the display backlight strives to redirect the aggregate glow toward the front:
where it’s further diffused by another peel-away-able layer:
Here’s the LCD itself:
As you may have already noticed, a red/black two-wire pair within the broader wiring harness powers the backlight. What about power (not to mention control) between the PCB and the LCD? That’s handled by an elastomeric strip with multiple embedded conductors, pressing against the PCB’s counterparts, an approach which we’ve seen plenty of times before:
Weighing in
For grins, in closing, I decided to put it back together and see if it still worked. Success!
Booting:

And ready and waiting to deliver additional impermanent results:

That’s all I’ve got for you today! As always, please share your thoughts in the comments.
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Dissecting a feature-enhanced digital bathroom scale
- Shipping Scale Converted from Bathroom Scale
- What good is 17.24 bits of flicker-free res?
GPS-free systems to spur highly advanced sensors, fusion

We’ve come to expect the U.S.-based global positioning system (GPS) to be available and ubiquitous for the countless military, commercial, and consumer applications dependent on it. Its diverse uses represent a huge leap from its original military-centric objectives: determining an object’s precise location (positioning), charting its path to a destination (navigation), and managing its movement along that path (guidance)—usually summarized as PNG.
Applications that were not even conceived of, let alone doable, are now enabled by tiny GPS ICs and systems that provide amazingly accurate and precise results—you can make your own list here.
If you want some insight into the people who made GPS happen despite severe technical and bureaucratic obstacles, check out Pinpoint: How GPS Is Changing Technology, Culture, and Our Minds by Greg Milner. Though somewhat dated now in its discussion of social implications, this fascinating book from 2016 tells the story of GPS from its conceptual origins as a bomb guidance system to its presence in almost everything we do.
Despite the sense that GPS is everywhere, the reality is that it never was. Underwater locations, tunnels, indoor sites, and similar RF-blocked settings simply can’t receive enough of the relatively weak satellite signals to provide a viable result.
Now, we’re seeing many more situations where GPS signals are also being “denied” due to deliberate interference or spoofed via false signals by players with various motives. Some of the consequences are modest (lost dogs can’t be found), but others have more serious implications.
One possible solution is to increase the power of the transmitted signals, but that’s technically difficult, is years away even if it does happen, and still wouldn’t help in many of these cases.
Alternatives to GPS
There’s a significant amount of research and product development toward devising ways to provide PNG using non-GPS, non-RF techniques driven by sensors for which jamming or signal access is not an issue. All of them require a considerable amount of computation to make sense of the sensed signals and transform data into results; none of them provide the performance of a GPS-based system—at least not yet. Much of the R&D work is being done by startups and innovators, in addition to traditional sensor vendors.
Among the non-GPS possibilities are:
- Inertial sensing
This is not new, of course, and has been used for decades, beginning with gyroscopes and accelerometers. Both sensors are now reduced to small, low-power MEMS devices that are orders of magnitude smaller, lighter, and lower-power than their electromechanical predecessors of just a few decades ago, and even compared to the laser and fiber-optic versions that leverage the Sagnac effect and interferometry. Still, their accuracy is not as good as that of a high-end GPS system, but it’s improving.
For example, ANELLO Photonics has developed a silicon photonics optical gyroscope—dubbed SiPhOG—that uses an on-chip waveguide manufacturing process, integrated with a patented silicon photonic integrated circuit (Figure 1). Together, they claim these offer fiber-optic gyro performance with a standard silicon manufacturing process.

Figure 1 This silicon photonics optical gyroscope uses an on-chip waveguide manufacturing process that is integrated with a patented silicon photonic IC. Source: ANELLO Photonics
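As a back-of-envelope illustration of the dead-reckoning principle underlying all inertial navigation (a toy model with made-up numbers, not any vendor's algorithm), the sketch below integrates turn rate and body-frame acceleration into a position estimate. Because each step integrates the previous one, any sensor bias compounds over time, which is why unaided inertial accuracy still trails GPS:

```python
# Toy 2D dead reckoning: integrate gyro heading rate and body-frame acceleration
# to track position with no external (GPS) fix. Real IMU fusion adds bias
# estimation and filtering (e.g., an extended Kalman filter).
import math

def dead_reckon(samples, dt=0.01):
    """samples: iterable of (gyro_z_rad_per_s, accel_forward_m_per_s2) readings."""
    heading = vx = vy = x = y = 0.0
    for gyro_z, accel_fwd in samples:
        heading += gyro_z * dt                    # integrate turn rate -> heading
        vx += accel_fwd * math.cos(heading) * dt  # integrate acceleration -> velocity
        vy += accel_fwd * math.sin(heading) * dt
        x += vx * dt                              # integrate velocity -> position
        y += vy * dt
    return x, y, heading

# 10 s accelerating straight ahead, then 10 s coasting through a gentle left turn:
path = [(0.0, 1.0)] * 1000 + [(0.05, 0.0)] * 1000
print(dead_reckon(path))
```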
- Magnetic sensors
The Earth’s magnetic field is pervasive, ubiquitous, and unjammable. It’s also uneven, with highly localized variations due to differences in the Earth’s outer-crust and under-crust layers as well as deeper causes (literally) from flows of conducting material within the Earth (Figure 2).

Figure 2 This geomagnetic map of part of the Northern hemisphere is a starting point for more detailed, higher-resolution images and variations, and changes that must be captured for effective magnetic navigation. Source: Geomag
Using supersensitive quantum-based magnetic sensors (optically pumped, cesium-based, split-beam scalar magnetometers with an absolute accuracy between one and three nanoteslas), it’s possible to read that field with high precision. The Earth’s core field ranges from 25 to 65 microtesla (0.25 to 0.65 gauss) at the surface, while the magnetic anomaly field of interest typically varies by just hundreds of nanoteslas.
The readings are then matched to pre-existing maps of Earth’s field. This scheme has the disadvantage of not being very accurate compared to GPS, partially because the Earth’s magnetic field is not static and matching maps need constant updating.
Despite these challenges, companies such as SandboxAQ have developed a navigation technology (AQNav) that leverages proprietary large quantitative models (LQMs) and powerful quantum sensors to make use of the Earth’s crustal magnetic field. By combining high-sensitivity magnetometers with AI algorithms to identify unique magnetic patterns and locate position in real time, it’s possible to determine position in that field. The sensing is entirely passive, so users remain undetected.
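The core map-matching idea can be sketched in a few lines (illustrative only; systems like AQNav work with 2D maps, calibrated quantum magnetometers, and far more sophisticated filtering). A measured anomaly profile is slid along a stored map track, and the offset with the lowest mismatch is the position estimate:

```python
# Toy magnetic map matching over a 1D track, in nanotesla.
import numpy as np

rng = np.random.default_rng(0)
map_track = rng.normal(0, 300, 2000)  # stored crustal-anomaly map, ~hundreds of nT
true_pos = 700
measured = map_track[true_pos:true_pos + 64] + rng.normal(0, 3, 64)  # ~3 nT sensor noise

def match(track, profile):
    """Return the offset where the measured profile best fits the stored map."""
    n = len(profile)
    errors = [np.mean((track[i:i + n] - profile) ** 2) for i in range(len(track) - n)]
    return int(np.argmin(errors))

print(match(map_track, measured))  # recovers 700
```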
- Visual matching
This uses a simple concept: matching what a camera sees to verified landmarks on a map. Visual terrain-following has been used for decades in cruise missiles, which follow a precise terrain-image pattern. Orders-of-magnitude improvements in imaging quality, and in the associated algorithms needed to process and match the observed image to the map, now make this technology even more precise.
One vendor pursuing this approach is Vermeer Corp. Their system uses between one and four electro-optical/infrared camera feeds simultaneously to map real-time video to a locally stored 2.5D or 3D map database to generate an accurate location signal.
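The matching step itself can be sketched with normalized cross-correlation (a deliberately simplified stand-in; production systems add perspective correction, multi-camera fusion, and the 2.5D/3D map geometry mentioned above):

```python
# Toy visual map matching: find where a camera patch best matches a stored
# reference image via normalized cross-correlation.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shaped arrays, in [-1, 1]."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def locate(reference, patch, stride=4):
    """Coarse sliding-window search; returns ((x, y), score) of the best match."""
    ph, pw = patch.shape
    best, best_xy = -2.0, (0, 0)
    for y in range(0, reference.shape[0] - ph, stride):
        for x in range(0, reference.shape[1] - pw, stride):
            score = ncc(reference[y:y + ph, x:x + pw], patch)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best

rng = np.random.default_rng(1)
reference = rng.random((120, 160))       # stand-in for a stored map image
patch = reference[40:72, 80:112].copy()  # stand-in for the live camera view
print(locate(reference, patch))          # -> ((80, 40), ~1.0)
```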
- Celestial navigation
This classic approach to navigation now uses modern, automated versions of the transit, celestial charts and precise clocks, aided by computerized calculations. This is a case of “back to the future” but in a new form and implementation.
- E-LORAN
LOng-RAnge Navigation was a hyperbolic radio navigation system developed by the United States during World War II. The third iteration, LORAN-C, entered service in the late 1950s, but the stations and system were eventually decommissioned (in the United States, in 2010) given the availability and performance of GPS.
It uses the differences in arrival timing of signals received from multiple high-power transmitters in the 100-kHz band (yes, that’s kilohertz) to develop positioning information.
Enhanced LORAN (E-LORAN) is a standard that builds on the now-obsolete LORAN system by putting more information into the modulation of the carrier and adding a data channel. Like LORAN, E-LORAN offers benefits such as the near-impossibility of jamming and spoofing, but it also requires many high-power transmitters, many of which need to be in inhospitable or remote locations that are difficult to support (Figure 3).

Figure 3 Like its predecessor LORAN, the enhanced LORAN system will require an extensive physical infrastructure located around the world. Source: UrsaNav
While E-LORAN proponents are eternally hopeful, the project has had difficulty getting traction and support due to technical challenges (primarily at the transmitter side), very high up-front infrastructure costs, and best-case accuracy of about 50 to 100 meters (although there are proposed ways to improve that number).
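The hyperbolic positioning principle behind LORAN and E-LORAN is easy to sketch: each measured time difference between a station pair constrains the receiver to a hyperbola, and the fix is the point most consistent with all the differences. Station coordinates and geometry below are hypothetical, and a coarse grid search stands in for the real solvers:

```python
# Toy LORAN-style hyperbolic fix from time differences of arrival (TDOA).
import itertools
import math

C = 299_792_458.0                                    # propagation speed, m/s
stations = [(0.0, 0.0), (400e3, 0.0), (0.0, 400e3)]  # master + two secondaries
truth = (120e3, 250e3)                               # receiver's actual position

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Observed time differences (each secondary minus the master), in seconds:
tdoas = [(dist(truth, s) - dist(truth, stations[0])) / C for s in stations[1:]]

best_err, best_pt = float("inf"), None
for x, y in itertools.product(range(0, 400, 2), repeat=2):  # 2-km search grid
    p = (x * 1e3, y * 1e3)
    err = sum(((dist(p, s) - dist(p, stations[0])) / C - t) ** 2
              for s, t in zip(stations[1:], tdoas))
    if err < best_err:
        best_err, best_pt = err, p
print(best_pt)  # -> (120000.0, 250000.0)
```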
The realities of dealing with a GPS-unavailable world
Many of these alternatives are being enabled by advances in quantum-based sensors. Some may even require supercooled arrangements with all the obvious downsides of that requirement. Each of them offers the virtue of not being jammable or denied.
At the same time, no single technology offers anything close to the accuracy and simplicity that GPS gives the user. A viable alternative, even with reduced accuracy, will require advances in sensors and gigabytes of support data such as maps. Any GPS alternative will also require tight fusion of disparate sensor technologies and outputs, huge datasets, and extensive use of AI and machine learning to create useful results.
It will be fascinating to see which of these, if any, takes a dominant role in non-GPS settings, or whether a balanced fusion of several will prevail instead. Perhaps some unexpected physical phenomenon will come from behind, as has happened so often in the past. As they say, “predictions are very hard to make, especially about the future.”
Related Content
- When your sensors mislead you
- Sensors Without Wires, But Not “Wireless”
- Navigating without GPS requires advanced sensors, intensive analog
- Sophisticated Sensors, Extreme Conditioning, Advanced Algorithms Yield Amazing Geolocation Results
Vcc delay
It was with a humble spirit and a good dose of mea culpa that a semiconductor company, from whom we had purchased some very large digital large-scale integration (LSI) chips, admitted to a problem (later corrected, thank goodness): their chips would malfunction during power-up if the +5-V rail voltage rose too slowly as the system was being turned on.
The vendor’s recommendation was to apply a 0-V (off) to +5-V (on) rail voltage with a steeper rise time (<45 ms) than our power supply could deliver. We decided that we needed a switching arrangement that would operate as shown in Figure 1.

Figure 1 Providing a steep +5-V rail voltage rise time.
One problem with making something like this was that the input voltage could indeed rise very slowly, through ½ volt to 1 volt to 2 volts and so forth: voltage levels well below the specification limits of any voltage-monitoring IC we could find.
The resulting operations were erratic and unpredictable at arbitrarily low input voltages. This did not help the LSI situation even one little bit. (Yes, I am aware of the pun.)
The remedy was the circuit shown in Figure 2.

Figure 2 Rail voltage switch, four loads.
The result obtained was as follows:

Figure 3 Rail voltage delay and rise time speedup.
This worked predictably down to arbitrarily low power supply voltages because there would be no response whatsoever as long as the TLV431 didn’t see a voltage high enough to get itself conducting.
When the power supply voltage did get high enough to turn on the TLV431, at the time we’re calling “t1”, the power MOSFETs would turn on, and there would be a brief downward transient in the power supply voltage, from which recovery was very quick. The rail voltage thus presented to the LSI chips had a sufficiently fast rise time of its own to make those chips happy.
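For readers who want to reproduce the trip-point arithmetic: the TLV431 begins conducting when its reference pin reaches roughly 1.24 V, so a divider from the rail sets the supply voltage at which the switch fires. The resistor values below are hypothetical placeholders, not the actual Figure 2 values:

```python
# Back-of-envelope TLV431 trip-point check (sketch; divider values are
# hypothetical, not taken from Figure 2).
V_REF = 1.24  # TLV431 nominal reference voltage, volts

def trip_voltage(r_top: float, r_bot: float) -> float:
    """Supply voltage at which the divider lifts the TLV431 ref pin to V_REF."""
    return V_REF * (r_top + r_bot) / r_bot

# Example: 27k over 12k trips near 4.0 V, well up the 5-V rail's slow ramp,
# so the MOSFETs stay off through the ill-defined low-voltage region.
print(f"{trip_voltage(27e3, 12e3):.2f} V")  # -> 4.03 V
```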
The end result made a bunch of human beings happy, too.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
GaN fundamentals: Hybrid structures, HEMT, and substrate choices

Part 1 of this article series on gallium nitride (GaN) fundamentals described crystal structures and the formation of the two-dimensional electron gas (2DEG), along with material figures of merit and the transition from depletion-mode to enhancement-mode GaN HEMTs.
Part 2 will outline hybrid structures and the RDS(on) penalty, as well as provide further details on GaN HEMTs and substrate choices for GaN. It will also make the case for the path to monolithic integration while showing how ohmic contacts, metallization, and packaging advantages are facilitating this design roadmap.

Figure 1 Schematic of low-voltage enhancement-mode silicon MOSFET is shown in series with a depletion-mode GaN HEMT: Cascode circuit (a) and enable/direct-drive circuit (b). Source: Efficient Power Conversion (EPC)
An alternative to monolithic enhancement-mode GaN transistors is the hybrid cascode configuration, pairing a low-voltage enhancement-mode silicon MOSFET with a high-voltage depletion-mode GaN HEMT in series. Figure 1 above illustrates two variants.
The cascode configuration, in particular, is highlighted as a pragmatic intermediate solution: a low-voltage enhancement-mode Si MOSFET is connected in series with a high-voltage d-mode GaN HEMT. The MOSFET gate is the external control terminal; when it turns on, the GaN gate-source is pulled close to zero and the HEMT conducts. When the MOSFET turns off, the GaN gate sees a negative bias through the MOSFET, turning off the high electron mobility transistor (HEMT) and providing normally-off behavior at the system level.
A natural question is how much extra RDS(on) the silicon MOSFET adds to the GaN device. Figure 2 shows a useful plot of the percentage contribution of the MOSFET to total RDS(on) versus the rated voltage of the cascode system. At high voltage, the GaN device dominates, and the MOSFET contribution becomes small.

Figure 2 Percentage RDS(on) contribution from the low-voltage MOSFET in a cascode configuration is shown as a function of the rated breakdown voltage of the composite device. Source: Efficient Power Conversion (EPC)
From this chart, a 600-V cascode device adds only around 3% extra RDS(on) due to the low-voltage MOSFET, because the GaN HEMT’s drift resistance dominates at such high voltage. At lower voltages, the GaN device resistance drops rapidly with VBR, so the MOSFET contribution becomes increasingly significant. For this reason, cascode solutions are practical and attractive for higher voltages (above roughly 200 V), whereas for 100–150 V class devices, monolithic e-mode GaN is generally preferable.
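The Figure 2 trend follows from simple series-resistance arithmetic. The sketch below uses hypothetical constants (chosen only so the 600-V case lands near the roughly 3% figure quoted above) to show why a roughly fixed low-voltage MOSFET resistance shrinks in relative terms as the GaN drift resistance grows with breakdown voltage:

```python
# Illustrative-only model of the cascode's MOSFET share of total on-resistance.
R_FET = 1.0  # low-voltage Si MOSFET on-resistance, arbitrary units

def mosfet_share(v_br: float, k: float = 1e-4, exponent: float = 2.0) -> float:
    """Fraction of total on-resistance contributed by the series Si MOSFET."""
    r_gan = k * v_br ** exponent  # idealized drift-region resistance scaling
    return R_FET / (R_FET + r_gan)

for v_br in (100, 200, 400, 600):
    print(v_br, f"{mosfet_share(v_br):.1%}")
# -> 100 V: 50.0%, 200 V: 20.0%, 400 V: 5.9%, 600 V: 2.7%
```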
The direct-drive (enable) variant exposes the depletion-mode GaN gate directly to the external driver (typically 0 V on, -12 to -14 V off). The silicon MOSFET serves as a safety “enable” switch, connected to the gate driver’s undervoltage lockout (UVLO). During normal operation, the silicon device remains on and experiences no switching; it only blocks the GaN gate if supply fails. This configuration offers precise control of GaN dynamics but requires bipolar drive capability.
Reverse conduction in HEMT transistors
Reverse conduction behavior is a clear advantage of enhancement-mode GaN HEMTs. When current is forced from source to drain while the device is nominally off, the source potential rises relative to the gate.
This process continues until the threshold condition for the formation of 2DEG is reached beneath the gate region. The channel now reorganizes and conducts in the opposite direction. Unlike the body diode of a silicon MOSFET, which depends on minority-carrier injection and storage, this is a majority-carrier mechanism. So, there is no stored minority charge and consequently no reverse-recovery penalty.
A positive gate voltage establishes the 2DEG channel during forward conduction, enabling current to move from the drain to the source. When reverse conduction occurs, as it does during a synchronous rectifier’s dead time, current moves from the source to the drain when the drain is at least the threshold voltage lower than the gate.
Conduction is then determined by channel resistance, and the device functions similarly to a low-drop diode. In contrast to silicon MOSFETs, which suffer reverse-recovery losses because of charge storage effects, current almost immediately stops once the reverse bias is eliminated.
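An order-of-magnitude comparison makes the trade-off concrete. The operating point below is hypothetical; note that the GaN HEMT's higher source-drain drop actually costs more during dead time, but eliminating reverse-recovery charge more than compensates at megahertz switching frequencies:

```python
# Illustrative dead-time and reverse-recovery loss comparison (made-up values).
I_LOAD = 20.0    # A, load current during dead time
F_SW = 1e6       # Hz, switching frequency
T_DEAD = 20e-9   # s per edge, two edges per switching cycle
V_SD_GAN = 2.0   # V, roughly Vth plus channel drop with the gate held at 0 V
V_SD_SI = 0.8    # V, silicon body-diode forward drop
QRR = 50e-9      # C, illustrative body-diode reverse-recovery charge
V_BUS = 48.0     # V, bus voltage the recovery charge is swept across

p_dead_gan = V_SD_GAN * I_LOAD * (2 * T_DEAD) * F_SW  # -> 1.6 W
p_dead_si = V_SD_SI * I_LOAD * (2 * T_DEAD) * F_SW    # -> 0.64 W
p_qrr_si = QRR * V_BUS * F_SW                         # -> 2.4 W, zero for GaN

print(f"GaN: {p_dead_gan:.2f} W dead-time loss, no Qrr")
print(f"Si:  {p_dead_si:.2f} W dead-time loss + {p_qrr_si:.2f} W Qrr loss")
```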
Vertical GaN and substrate choices
Instead of using lateral 2DEG transport, vertical GaN transistors employ a conduction path perpendicular to the wafer surface. In a typical structure, p-GaN regions linked to the source extend from the surface toward the drain, and the drain contact is positioned at the bottom of a thick n-GaN drift region. When a negative gate voltage is applied, the n-GaN between the p-regions beneath the gate is depleted, preventing current flow.
The depleted region collapses and electrons move vertically from source to drain when the gate is positively biased. This architecture has the potential to compete with high-voltage SiC devices because it can support breakdown voltages above 1000 V while maintaining quick switching. The sub-650 V market is dominated by lateral GaN, mainly because silicon substrates are more affordable and scalable.
The cost of standard 200-mm silicon wafers is only a few tens of dollars per wafer, which enables direct reuse of established CMOS fabs and high-volume manufacturing, including the potential for monolithic integration of sensing circuits and drivers. Bulk GaN substrates for vertical devices, on the other hand, are still restricted to small diameters (usually ≤150 mm) and cost several hundred to over a thousand dollars per wafer, or tens of dollars per cm². This severely limits cost competitiveness at mid voltages.
From a performance perspective, lateral GaN HEMTs benefit from the creation of a high-density 2DEG, which offers exceptionally high electron mobility and low channel resistance. This translates into excellent light-load efficiency and high-frequency operation, which are essential for applications like DC-DC converters, server power supplies, telecom, and consumer fast chargers.
Vertical architectures, currently dominated by SiC MOSFETs, continue to be the preferred solution for voltages above ~900 V because they provide superior robustness at high electric fields and decouple blocking voltage from lateral device dimensions. While SiC and future vertical GaN aim for high-voltage applications, lateral GaN emphasizes cost-performance optimization over voltage scaling in this regime, solidifying its leadership in the mid-voltage range.
Building a GaN HEMT transistor
Fabrication of a GaN HEMT begins with epitaxial growth of the GaN/AlGaN heterostructure on a foreign substrate. Unlike silicon devices, where the active layer matches the substrate, GaN HEMTs require heteroepitaxy, growing a wurtzite crystal on a substrate with mismatched lattice constant and thermal expansion.
Four substrate materials dominate: bulk GaN, sapphire (Al₂O₃), silicon carbide (SiC), and silicon (Si). Each offers trade-offs in lattice mismatch, thermal expansion coefficient, thermal conductivity, and cost. Silicon (111)-orientation substrates have emerged as the commercial workhorse due to their low cost (tens of dollars per 200-mm wafer) and compatibility with existing CMOS fabrication infrastructure, despite a 17% lattice mismatch (a_GaN = 3.189 Å vs. a_Si(111) = 3.84 Å) and a thermal expansion difference of 3 × 10⁻⁶ K⁻¹.
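The quoted 17% mismatch follows directly from the two lattice constants; as a quick arithmetic check:

```python
# Lattice-mismatch arithmetic from the constants quoted above.
a_gan = 3.189   # angstroms, GaN a-axis
a_si111 = 3.84  # angstroms, Si (111)-plane atomic spacing
print(f"mismatch = {(a_si111 - a_gan) / a_si111:.1%}")  # -> 17.0%
```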
Heteroepitaxy grows one crystal on a dissimilar substrate. Metal-organic chemical vapor deposition (MOCVD) deposits the GaN/AlGaN layers. The process starts with an AlN seed layer on the substrate to initiate nucleation. An AlGaN buffer layer creates the transition to pure GaN crystal structure. A thick GaN layer forms the semi-insulating base. Finally, a thin AlGaN barrier layer induces strain that forms the 2DEG conduction channel.
Figure 3 illustrates the complete epitaxial stack from substrate to 2DEG interface. For enhancement-mode devices, a p-GaN cap layer grows atop the AlGaN barrier, introducing positive charge to deplete the 2DEG at zero gate bias (Figure 4). This stack enables lateral electron transport parallel to the surface, distinguishing GaN HEMTs from vertical silicon MOSFETs.

Figure 3 The illustration highlights basic steps involved in creating a GaN heteroepitaxial structure: Starting silicon substrate (a), aluminum nitride (AlN) seed layer grown (b), various Al GaN layers grown to transition the lattice from AlN to GaN (c), GaN layer grown (d), and AlGaN barrier layer grown (e). Source: Efficient Power Conversion (EPC)

Figure 4 An additional GaN layer, doped with p-type impurities, can be added to the heteroepitaxy process when producing an enhancement-mode device. Source: Efficient Power Conversion (EPC)
Ohmic contacts and metallization
Source and drain electrodes must form low-resistance ohmic contacts to the 2DEG, penetrating the AlGaN barrier. Multiple metal layers and high-temperature annealing create reliable shunts. The gate electrode sits atop the AlGaN (or p-GaN), modulating the channel via electric field.
Back-end processing adds multilevel copper interconnects with tungsten vias, scaling gate width across thousands of parallel cells. Final passivation (SiNₓ) protects the surface and shapes electric fields to prevent premature breakdown.
Chip-scale packages (BGA and LGA) minimize parasitics, supporting megahertz switching with minimal ringing. Recent advances in QFN (quad flat no-lead) packages have brought packaging alternatives with minimal compromises in parasitic inductance, resistance, and thermal conductivity.
In either chip-scale or QFN packages, lateral conduction enables bottom-side cooling and ultra-low-inductance packaging. Ball grid array (BGA) formats use SnAgCu micro-bumps (150 µm pitch) for 100–650 V devices (1.5 × 1.0 mm² footprint). LGA variants (3.9 × 2.6 mm²) handle 100-V half-bridges at 10 A continuous. Package loop inductance drops below 0.2 nH, supporting dI/dt >2000 A/µs without significant ringing—impossible in wire-bonded discrete packages.
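The value of sub-nanohenry packaging is easy to quantify, since the inductive voltage kick scales linearly with loop inductance at a given dI/dt. The chip-scale figure and slew rate come from the text above; the wire-bonded figure is a representative assumption:

```python
# V = L * dI/dt: overshoot contribution of package loop inductance.
L_CSP = 0.2e-9     # H, chip-scale package loop inductance (from the text)
L_WIREBOND = 2e-9  # H, representative wire-bonded discrete package (assumption)
DI_DT = 2000e6     # A/s, i.e., the >2000 A/us slew rate cited above

for name, inductance in (("chip-scale", L_CSP), ("wire-bonded", L_WIREBOND)):
    print(f"{name}: {inductance * DI_DT:.1f} V inductive kick")
# -> chip-scale: 0.4 V, wire-bonded: 4.0 V
```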
The path to monolithic integration
The lateral architecture of GaN HEMTs—where current flows parallel to the surface—eliminates the need for deep vertical vias or trenches, enabling unprecedented levels of monolithic integration. Unlike vertical silicon or SiC devices, multiple power and signal-level transistors, along with passive components, occupy the same epitaxial plane, with interconnects formed in overlying metal layers. This allows fabrication of complete power stages on a single die smaller than a grain of rice.

Figure 5 A typical process creates solder bars on an enhancement-mode GaN HEMT (not to scale). Source: Efficient Power Conversion (EPC)
Monolithic GaN stages eliminate interconnect parasitics that plague discrete implementations:
- No bond wires: Package inductance <0.2 nH vs. 1–5 nH with discrete multi-chip QFN
- Zero common source and gate loop inductance
- Pin count reduction: 99% fewer external connections vs. discrete half-bridge + drivers
Compared to silicon DrMOS (driver + MOSFET), GaN integration yields:
- 10× lower QG → MHz switching without excessive gate losses
- Zero QRR → no reverse recovery in synchronous rectification
- 25× smaller die area → lower cost at equivalent performance
Maurizio Di Paolo Emilio is director of global marketing communications at Efficient Power Conversion (EPC), where he manages worldwide initiatives to showcase the company’s GaN innovations. He is a prolific technical author of books on GaN, SiC, energy harvesting and data acquisition and control systems, and has extensive experience as editor of technical publications for power electronics, wide bandgap semiconductors, and embedded systems.
Editor’s Note:
The content in this article uses references and technical data from the book GaN Power Devices for Efficient Power Conversion (Fourth Edition) authored by Alex Lidow, Michael de Rooij, John Glaser, Alejandro Pozo Arribas, Shengke Zhang, Marco Palma, David Reusch, Johan Strydom.
Related Content
- SiC vs. GaN: Who wins
- The advantages of Vertical GaN Technology
- A brief history of gallium nitride (GaN) semiconductors
- A new IDM era kickstarts in the gallium nitride (GaN) world
- New GaN Technology Makes Driving GaN-Based HEMTs Easier
Single-stage design removes 48-V bus in servers

A DC/DC power delivery board from Navitas Semiconductor enables direct conversion from 800 V to 6 V in a single stage. Showcased at NVIDIA GTC 2026, the design eliminates the conventional 48-V intermediate bus converter stage within compute server trays, simplifying power delivery for NVIDIA AI infrastructure.

Using GaNFast power ICs, the board reaches 96.5% peak efficiency at full load with 1-MHz switching and a power density of 2.1 kW/in³. The primary side integrates sixteen 650-V GaNFast FETs in DFN 8×8 packages with dual-side cooling in a stacked full-bridge topology, while center-tapped outputs use 25-V silicon MOSFETs. High-frequency switching enables smaller passives and planar magnetics, increasing power density.
The Navitas power delivery board is about 20% thinner than a mobile phone. Its ultra-low profile allows close placement to the GPU board, minimizing loop inductance to improve transient response and power distribution efficiency.
For more information, contact a Navitas representative or email info@navitassemi.com. A timeline for availability was not provided at the time of this announcement.
UWB SoCs extend ranging and radar performance

The ST64UWB family of ultra-wideband SoCs from ST provides increased range and processing capability for automotive applications. Backward compatible with IEEE 802.15.4z, the chips also support the emerging IEEE 802.15.4ab UWB standard, enabling device localization and tracking at distances of several hundred meters. Target use cases include hands-free digital keys and high-accuracy vehicle localization.

Enhancements such as multi-millisecond ranging (MMS) and narrow-band assistance (NBA) provide greater operating range and improve link robustness, particularly for devices carried in bags or rear pockets. These features also facilitate close-range direction finding for more accurate interpretation of user position and movement. In addition, IEEE 802.15.4ab strengthens radar mode for more reliable in-vehicle child presence detection.
The ST64UWB-A100 and ST64UWB-A500 are built on an 18-nm FD-SOI process, increasing link budget by nearly 3 dB versus bulk technologies and boosting range by up to ~50% beyond IEEE 802.15.4ab. Both devices integrate an Arm Cortex-M85 core, while the ST64UWB-A500 adds AI acceleration and DSP capabilities for edge AI-based radar applications. A third device, the ST64UWB-C100, expands the lineup to cover industrial and consumer applications.
The devices are now sampling to leading Tier 1 suppliers and OEMs.
224G ICs optimize signal integrity in linear optics

Semtech’s 224-Gbps/lane TIAs and drivers power 800G–3.2T transceivers and optical engines for AI/ML clusters, hyperscale data centers, and cloud infrastructure. Compliant with CEI‑224G‑Linear and LPO‑MSA, they support half-retimed (LRO), linear pluggable (LPO), next‑gen (XPO), near‑packaged (NPO), and co‑packaged (CPO) optics.

The 224G TIA family—GN1834L, GN1834DL, and GN1838DL—offers quad- and octal-channel architectures with flexible layouts. On-chip equalization, high linearity, and low noise boost signal integrity for LPO and next-generation linear optics.
The 224G Mach-Zehnder Modulator (MZM) drivers—quad GN1877 and octal GN1887—support SiPho, InP MZM, and TFLN optical transmitters with tunable gain and output swing. A CEI‑224G‑Linear host-side equalizer covers a wide range of host interfaces, from compact NPO/CPO to varied LRO/LPO/XPO trace lengths.
Both the TIA and driver series integrate real-time link monitoring and telemetry, enabling proactive diagnostics to reduce link flapping and improve network reliability.
The GN1834L, GN1834DL, and GN1887 are available now; GN1838DL and GN1877 are expected in April 2026.
For more information, visit Semtech’s optical page.
Double-side cooled MOSFETs reduce server heat

AOS has introduced two MOSFETs—the 25‑V AONC40202 and 80‑V AONC68816—in 3.3×3.3‑mm source-down DFN packages with double-side cooling. This packaging supports high power density in DC/DC intermediate bus converters and meets the strict thermal demands of AI servers and data centers.

The MOSFETs use an optimized top-clip design on the exposed drain, enabling double-sided thermal transfer to remove heat efficiently. Compared with single-sided devices, this approach reduces thermal stress and heat buildup. The large top clip achieves a low maximum thermal resistance of 0.9 °C/W, enhancing thermal performance in demanding applications.
The AONC40202 and AONC68816 MOSFETs support continuous drain currents of 405 A and 119 A, respectively, at 25 °C, with pulsed currents up to 644 A and 476 A. The devices have maximum on-resistances of 0.7 mΩ for the 25-V part and 4.7 mΩ for the 80-V part, while maintaining junction temperatures up to 175 °C. Bottom-side thermal resistance is 1.1 °C/W for both devices.
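As a quick sanity check on those thermal numbers (the load current below is an assumed operating point, not a datasheet test condition), conduction loss and the resulting junction temperature rise through the top-side path work out to:

```python
# Conduction loss and junction rise for the 25-V part; current is hypothetical.
I_LOAD = 60.0     # A, assumed operating current
R_DS_ON = 0.7e-3  # ohms, 25-V part maximum on-resistance (from the text)
R_TH_TOP = 0.9    # deg C/W, top-side thermal resistance (from the text)

p_cond = I_LOAD ** 2 * R_DS_ON  # -> 2.52 W
delta_t = p_cond * R_TH_TOP     # -> ~2.3 deg C rise via the top clip alone
print(f"P = {p_cond:.2f} W, dT(top) = {delta_t:.1f} C")
```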
Available now with a lead time of 14–16 weeks, the AONC40202 and AONC68816 cost $1.85 and $1.95 each in lots of 1000 units.
Buck ICs improve AI data center power

Infineon’s XDPE1E multiphase PWM buck controllers and TDA49720/12/06 PMBus POL buck regulators streamline voltage regulation in AI data centers, helping customers boost compute performance per rack. With digital control and telemetry-enabled point-of-load regulation, these devices reduce design cycles and accelerate platform bring-up.

Designed for multiprocessor AI platforms and advanced VR inductor topologies, the XDPE1E3G6A and XDPE1E496A digital 3- and 4-loop buck controllers feature configurable phase allocation and fully programmable phase firing order. They support multiple protocols, including PMBus, AVSBus, SVID, and SVI3, ensuring compatibility across processor ecosystems. Digital control features and integrated tools help manage dynamic AI loads, reduce bench time, and improve system robustness.
The TDA49720/12/06 integrated POL buck regulators deliver 6-A, 12-A, and 20-A outputs in 3×3 mm and 3×3.5 mm packages. PMBus telemetry enables reliability monitoring and system optimization, while a proprietary valley current mode constant-on-time control ensures fast transient response, cycle-by-cycle current limiting, and all-MLCC output capacitance compatibility.
More information can be found on Infineon’s digital multiphase controller page and POL voltage regulator page. A timeline for availability was not provided at the time of this announcement.
Cellular hotspots: Multi-option evaluation thoughts

A cellular data service upgrade prompts new (to this engineer, at least) hardware acquisitions: three models’ worth, four total devices. Smart or superfluous? Read on and decide for yourselves.
When our power went down on December 17, our broadband WAN connection and LAN still remained up for several hours, thanks to our sizeable UPS battery set fueling essential network gear, along with the NUT-controlled auto-shutdown of the multiple power-hungry HDD-based NASs also UPS-tethered. But eventually, the batteries were depleted, Comcast-supplied Ethernet and Wi-Fi both dropped, and we needed to turn to other Internet-access options.
My wife has unlimited data on her Verizon 5G cellular phone account, along with hotspot support (the latter capped at 200 GB max per month, but which my legacy unlimited AT&T 4G LTE cellular phone plan completely lacks). And her service plan is also shared among multiple devices, including several iPads. So that was one option.
AT&T longevity (and stinginess)
I’ve also long (since November 2009, I realized in perusing my email archive while writing this) had a dedicated AT&T data plan, with the associated SIM nowadays normally (at least until recently, that is) plugged into my archaic Microsoft Surface Pro X hybrid tablet/computer:

This plan, originally $29.99/month, increased by $5/month beginning in February 2016. More recently, another change arrived. My original DataConnect plan was 4G LTE-based and unlimited from a data usage standpoint. But in March 2023, AT&T converted me to a 5G successor plan, with the second month of service free and $20/month off the normal $55/month price beyond that point (both perks per my legacy customer status). That said, it was no longer unlimited; the base rate included only 50 GBytes of data use per month. Sufficient in a pinch, although not for ongoing daily usage; we average well beyond a half TByte of aggregate data payload per month on Comcast.
When the network went down, I therefore also grabbed and booted up the Surface Pro X, figuring that I’d spread out the household data usage across the multiple cellular services we were already paying for. To my surprise and dismay, however, the usual cellular data connection option in Windows 11’s network settings was missing. And when I dove into Device Manager, I learned why; “This device cannot start”, whatever that meant:

I tried uninstalling the relevant driver, then rebooting so that Windows would auto-reinstall it. I also tried searching for an updated version of the driver. No dice; nothing I tried worked. I was pissed, turning to Reddit to vent and seek other suggestions. What I’d already learned there was that the Windows 11 25H2 update had dropped support for legacy Arm processors, including the SQ1 (a Microsoft-branded Qualcomm Snapdragon 8cx SC8180X) and, I assumed along with it, the chipset’s integrated X24 LTE modem. And, because I’d installed Windows 11 25H2 in mid-October and it was already mid-December, I was beyond the 10-day rollback deadline.
More recently though, and on a hunch, I plugged back in the SIM, rechecked the computer’s “Network & Internet” screen and noticed that the cellular data option had magically returned, which a revisit of Device Manager confirmed:

I have no clue what caused it to resurrect, let alone what had led to its (temporary, it turns out) demise in the first place. And, by the way, after further pondering, I now suspect that the now-shorter list of supported Arm processors and chipsets in Windows 11 25H2 only affects fresh installations, not upgrades of existing activated builds. It’s all for naught, however; I’ve already moved on. For any of you who wondered what I’d been doing with the SIM before I temporarily “plugged it back in” to the computer, as I intentionally teased a paragraph earlier, read on for the solution to the mystery.
I’ve dabbled with mobile cellular hotspots before, owned by others. And truth be told, I didn’t have to buy one this time. Last January, I’d purchased on sale from Amazon two NETGEAR LM1200 cellular broadband modems, one for a teardown-to-come and the other for precisely the scenario—premises power-loss connectivity backup—that I experienced in mid-December. They aren’t usable as-is, requiring a tether to a router. But I have plenty of those in inventory. And had we stuck around the home more than one night, I probably would have pressed the modem-plus-router combo into service, fueled by a portable power unit.

But another limitation, bandwidth, was the same one that had already soured me on the Surface Pro X’s integrated modem (along with the ones in my Intel-based Surface Pros, for that matter). The LM1200 “only” supports 4G LTE, which is likely why I bought them (on closeout, I suspect) for only $19.99 each a year-plus back, versus the original $49.99 MSRP. As you’ll soon see, I used a similar “buy a generation-or-few old” stratagem with the mobile hotspots! 4G LTE support was sufficient when that’s all my AT&T service supported (and the unlimited per-month allocation was a nice bonus). But once AT&T upgraded me to 5G…well, you know what they say about shiny new objects… Truth be told, I actually bought three mobile hotspots, for reasons I’ll discuss in the following sections.
The NETGEAR Nighthawk M6 MR6110

I’ll start with the highest-end device, NETGEAR’s MR6110 (PDF), the entry-level member of the company’s Nighthawk M6 family. Versus its higher-end Nighthawk M6 siblings (this Mobile Internet Resource Center writeup provides a comprehensive comparison), not to mention its Nighthawk M7-family successors, it:
- Is carrier-locked to AT&T, and doesn’t support a sufficient diversity of frequency bands (presumably due to firmware versus silicon limitations) to deliver robust support for other cellular carriers, anyway
- Is sub-6 GHz only from a spectrum standpoint, not additionally comprehending mmWave support (which, interestingly, NETGEAR dropped entirely in its Nighthawk M7 generation devices) and
- Supports only Wi-Fi 6, not more advanced protocols
Then again, it only cost me $84.99 plus tax gently used from a legitimate eBay seller (just as I’ve mentioned before with cellular phones, you need to be careful when buying preowned goods to ensure that you haven’t acquired a device whose IMEI has already been banned by the associated cellular carrier). I also sprung for a $24.99 two-year extended warranty. And in case you’re wondering what’s behind the gray square “doors” at both ends of the front panel in the above stock photo, they’re TS-9 connectors that mate up with NETGEAR’s model 6000451 omnidirectional MIMO antenna, a gently used example of which I bought for $24 off eBay:

I live in a rural region outside of (and above) Golden, Colorado, with trailing-edge cellular technology deployed and spotty coverage for all carriers. To wit, using the NETGEAR MR6110’s internal antenna, I was only able to tune in LTE service…what’s the point, since I’ve already got the NETGEAR LM1200 modem-plus-router combo? But connect the external antenna, tether my laptop to the MR6110 over USB-C, and:

Huzzah! Consider me sold!
The Franklin A50 (model RG2102)
Next up…or down, depending on your perspective…is another AT&T-partner piece of hardware, Franklin’s A50. No integrated Ethernet, although you can still wired-tether to a single device over USB-C, and to an Ethernet-based router via a USB-C-to-Ethernet adapter plus a Cat5e cable. And “only” support for 20 concurrent devices, versus the NETGEAR MR6110’s 32. But user reviews rave about its battery life. It touts diverse 5G band support, and is claimed carrier-unlockable via services such as Cellcorner and Unlocklocks. That’d be convenient in case, for example, I ever wanted to switch my service to Google Fi, a T-Mobile MVNO (mobile virtual network operator). And it only set me back $34 (plus tax) used on eBay. How could I refuse?
The Franklin T9 (model RT717)
This last, lowest-end one—two of them, actually—I bought solely for experimentation purposes, both hacking and teardown. No integrated Ethernet, again. No 5G support this time, either; it only comprehends LTE. And as you can tell from the photo, this time it’s out-of-box locked to T-Mobile. But believe it or not, it’s (unofficially, again) user-unlockable for use with other carriers, not to mention user-hackable to both tweak its default settings and expand its overall feature set. Check out the following example links (in Google search results priority order) for more information:
- Rooting and Unlocking the T-Mobile T9 (Franklin Wireless R717)
- Stefan Todorovic’s Franklin Unlocking Tool
- SIM Unlock a Franklin T9 Hotspot
- T-Mobile Franklin T9 Hacking (complete with teardown photos)
- kernelcon – tmobile test drive hotspot hackery
- Franklin T9 aka R717 Hotspot Thread
And did I mention that each complete kit, in brand new condition this time, cost me only $13.98 plus tax (with free shipping!) on eBay? Once again, how could I resist?
More to come
As you’ve hopefully already noticed from the two photos I shared earlier, I’m already happily exploring the NETGEAR MR6110, with the other two devices to follow in short order. I’ve also already invested in carrying cases for all three, plus inexpensive spare batteries for both the MR6110 and Franklin A50 (each Franklin T9 kit came with one, so I’m set there), since all three hotspots’ portable power cells are easily user-accessible for swap-out purposes. Stay tuned for more coverage in the coming months. And for now, I as-always welcome your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Preemptive utilities shutdown oversight: Too much, too little, or just right?
- Modern UPSs: Their creative control schemes and power sources
- Beefing up backup
Scoping out the chiplet-based design flow

Today, the design of most monolithic SoCs follows a familiar pattern. Requirements definition leads to an architectural design. Then, the design team selects and qualifies the necessary IP blocks, assembles them into the architecture, and floorplans the die. Functional verification and early power and timing estimation can begin at this point.
The team can now begin RTL synthesis, rough placement, and at least preliminary routing. As these tasks finish, most SoC design teams will bring in physical-design specialists to complete the work until signoff.
But what about a multi-die design based on chiplets? At first glance, the sequence of tasks seems nearly identical to the one for a monolithic SoC. Just substitute chiplets for IP blocks and interposer design for physical chip design, right?
Well, no. Issues and corresponding tasks in chiplet-based design diverge significantly from the flow of most monolithic chip designs. Unless you intend to build a great deal of specialized multi-die expertise in-house, these issues make it vitally important to engage, from the beginning of the project, with a design partner experienced in both chiplet and interposer design, one with deep relationships across the global, multi-die supply chain.
The chiplet path
The two paths diverge early in the design project. In concept, selecting chiplets sounds much like IP selection. However, the IP market is mature: there are sources for almost any common IP function, and specialist IP firms are willing to undertake nearly anything. And usually, IP is highly configurable, either by setting parameters for an RTL generator or by working with the provider.
Only when the SoC requirements demand a unique function or unusual operating constraints, such as market-leading performance or extreme low power, would the SoC team consider designing its own IP internally.
In contrast, the chiplet market, while growing, is still immature. Some combinations of functions may not be available. And chiplets—which are finished dies, after all—cannot be as flexible as an RTL generator tool. You may find an I/O hub chiplet with the right kinds of inputs and outputs, but you may not find one with the correct configuration, the right power, or the proper pad placement for your design.
For these reasons, chiplet-based designs often require the design of one or more chiplets, and chiplets can have very different constraints from stand-alone ICs—they aren’t just little SoCs. Chiplets usually have very high I/O densities, high-speed drivers or serial transceivers tuned to the very short interconnect runs on interposers, and precise pad placement requirements dictated by an interposer layout.
Also, because the finished module will have to be tested when test equipment has limited access to the dies, chiplets often emphasize built-in self-test (BiST) more than a conventional chip. Having a design partner familiar with these issues from the outset can save time and energy.
Memory has issues, too
One type of die in chiplet-based design deserves special mention: memory. In this era of AI everywhere, many chiplet-based architectures will include high-bandwidth memory (HBM). This is undoubtedly true for datacenter processors, but increasingly just as true for edge AI applications such as vision processing or robotics.
Unfortunately, HBM interface design, placement on the interposer, routing, and thermal analysis are all challenges that differ significantly from the issues with logic chiplets. Requirements vary from generation to generation of the HBM standard, and even from vendor to vendor. Given the intense competition for HBM capacity, securing a stable supply of HBM dies or die stacks is essential before locking down the interposer design.
A design partner with deep HBM experience and strong supply-chain connections can ensure your design delivers the memory bandwidth you need with HBM dies you can acquire without having to respin an interposer design.
Interposer design
That brings us to the interposer. Conceptually, interposer design is not unlike IP placement and routing on an SoC. But here, we are talking about placing physical dies on a piece of silicon—usually—and routing between physical pads that can’t be moved. In practice, the constraints and analysis tools differ from those for chip design.
Also, decisions made at this stage can impact earlier and later stages in the design flow. The limited bandwidth between chiplets may influence how the architecture is partitioned across the dies. Even spatial issues, such as how close processor chiplets may be placed to HBM stacks and how far away they may be, can influence architectural partitioning and chiplet designs.
Interposer design also includes tasks that are unfamiliar to most chip design teams. These include signal and power integrity analysis, 3D electromagnetic field modeling, and thermal and mechanical analysis of the 3D structure. Furthermore, design-for-test becomes an issue. A test strategy for the completed module must reasonably achieve the required coverage and be consistent with the assembly power budget. The test strategy will also influence the choice of OSAT vendors for the assembly.
Finally, the package must be designed, not chosen off the shelf. This will require yet another set of tools and analyses. Packaging decisions will echo up and down the supply chain: interposer design, availability of materials, geographic location of capable OSAT facilities, and more will be influenced by packaging choices.
It takes a platform
The range of tasks and specialized skills necessary to bring a chiplet-based design to a global market is significantly broader than the set required for a modest SoC design. The fact that many tasks interact up and down the design flow further complicates the project. If too many specialist parties are involved, communications and change management can become a nightmare.
The best solution is not a go-it-alone determination, nor is it a scramble to pull together a horde of best-in-class specialist consultants. Nor is it necessary to turn the whole challenge over to a powerful foundry partner with limited global flexibility.
We have found that the optimum solution is a consolidation platform. This organization combines rich IP access, chiplet design experience, interposer expertise, strong relationships with HBM suppliers, multiple interposer foundries, and chip-on-wafer-capable OSATs worldwide. You need a partner with a platform to address the global challenge of chiplet-based products.

The consolidated platform offers a global ecosystem of IP and design expertise with foundry and OSAT service partners. Source: Faraday Technology Corp.
Kenneth Lu, marketing manager at Faraday Technology, has over 20 years of experience in the semiconductor industry, spanning product engineering, IP design, and marketing for various application ICs. He currently focuses on business development in advanced packaging, processes, and related innovations.
Special Section: Chiplets Design
- What the special section on chiplets design has to offer
- Chiplet innovation isn’t waiting for perfect standards
The post Scoping out the chiplet-based design flow appeared first on EDN.
Improve 555 frequency linearity

After more than fifty years of continuous production in bipolar and fully half that in CMOS, there’s really neither room nor reason to question the value and versatility of the venerable 555 analog timer. But if it has any significant limitation, it probably lies in the category of raw speed. Still, the LMC555 datasheet tells (albeit in a rather obscure footnote) of an impressive 3MHz capability. The details (including the 3MHz test circuit) appear in Figure 6-2 on page 6 of this 2024 datasheet.
Wow the engineering world with your unique design: Design Ideas Submission Guide
3 MHz for a decades-old, low-power, geriatric analog part isn’t too shabby. It suggests the delightfully simple topology of Figure 1 for a precision 5-decade 1-MHz, current-controlled oscillator, where:
F = 1/(Vth·Ct/Ic) = 1/(3.33 V·Ct/Ic) = Ic × 1000 MHz/A (for the Ct = 300 pF used here)
Figure 1 A super simple, 5-decade LMC555 current-controlled oscillator.
Figure 1’s LMC555 is doing duty as a current-controlled oscillator with only two external components. It boasts a frequency range spanning 5 decades from 10Hz to (approximately) 1MHz. Cool!
But wait. What’s this “approximately” thing? How problematic is it, and mainly, how can we fix it if it is a problem? Here’s how.
The usual data sheet expression for LMC555 frequency of oscillation (FOO) is:
FOO = 1/(ln(2)·R·C) = 1/(ln(2)·(Ra + 2Rb)·C)
But in Figure 6-2, the 3-MHz test circuit, they show Ra = 470, Rb = 200, and C = 200 pF. Those numbers, when plugged into the data sheet arithmetic, yield an RC time constant of 121 ns and therefore predict that the oscillator frequency should hit, not just 3 MHz, but a figure nearly three times faster.
FOO = 1/(ln(2)·(470 + 2 × 200)·200 pF) = 1/120.8 ns = 8.28 MHz
Hold the phone! If 3 MHz is as fast as they can really go, what happened to the missing 5 MHz?
What’s happening is simply that, besides the explicit 121 ns external RC time constant, there’s an implicit time delay (Td) internal to the device of:
Td = 1/3 MHz – 1/8.28 MHz = 333 ns – 121 ns = 212 ns
These 212 ns of internal delay, while short enough to keep the datasheet cookbook arithmetic accurate for low to moderate frequency, need attention if we want to push things anywhere near pedal-to-the-metal multi-MHz limits. A formula for usefully accurate high-frequency FOO prediction thus becomes more like:
FOO = 1/(Vth·Ct/Ic + Td) = 1/(3.33 V·Ct/Ic + 212 ns)
When plotted out, this equation generates the droopy red curve in Figure 2, with a roughly 20% error at 1 mA: the output should be 1 MHz but is really only ~800 kHz. Okay. That is pretty pitiful.

Figure 2 Nonlinear red curve versus ideal black shows ~20% error from LMC555 internal delay. The y-axis is the output frequency. The x-axis is the control current.
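To put numbers on that droop, here’s a quick sketch (my check, not the author’s code), assuming Vth = 3.33 V, Td = 212 ns, and the Ct = 300 pF value given with Figure 4:

```python
# Droop of the LMC555 current-controlled oscillator from internal delay Td.
VTH = 3.33      # threshold voltage, volts (2/3 of a 5 V supply)
CT = 300e-12    # timing capacitor, farads
TD = 212e-9     # internal delay, seconds

def foo(ic, td=TD):
    """Oscillation frequency (Hz) for control current ic (amps)."""
    return 1.0 / (VTH * CT / ic + td)

for ic in (1e-6, 10e-6, 100e-6, 1e-3):
    ideal, actual = foo(ic, td=0.0), foo(ic)
    print(f"Ic = {ic*1e3:6.3f} mA: ideal {ideal/1e3:8.1f} kHz, "
          f"actual {actual/1e3:8.1f} kHz, error {100*(1 - actual/ideal):4.1f}%")
```

The error is negligible at low control currents and only bites as the ideal period approaches Td, which is why the red curve droops only at the top of the range.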
Luckily, a fix is both available and absurdly easy. It consists of merely a single resistor Rlin added between the Dch (discharge) and Thr (threshold) pins. It works to linearize the current vs frequency function by biasing the Thr pin upward by a voltage = IcRlin. This abbreviates the duration of the sawtooth timing ramp by:
T = Ic·Rlin/(Ic/Ct) = Rlin·Ct = Td
Thus, cancelling the 555 internal delays.
Therefore, if Rlin is chosen so that RlinCt = Td as shown in Figure 3, nonlinearity compensation will be (at least theoretically) complete over the full range of control current as shown in Figure 4. Note:
FOO = 1/(Vth·Ct/Ic + 212 ns – Td) = 1/(3.33 V·Ct/Ic + 212 ns – 212 ns) = 1/(3.33 V·Ct/Ic) = Ic × 1000 MHz/A

Figure 3 Nonlinearity compensation for 555 internal delays when RlinCt = Td = 212ns.

Figure 4 Frequency of oscillation nonlinearity is foregone and forgotten if Rlin = Td/Ct = 212 ns/300 pF = 706 ohms.
Theoretically.
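A quick numeric check of that theory, under the same assumed values (my sketch, not the author’s code):

```python
# Numeric check of the cancellation. The Thr offset Ic*Rlin shortens the
# timing ramp by Rlin*Ct seconds regardless of Ic, so choosing
# Rlin = Td/Ct cancels the internal delay at every control current.
VTH, CT, TD = 3.33, 300e-12, 212e-9   # assumed per the text and Figure 4
RLIN = TD / CT                        # ~706.7 ohms, matching Figure 4

for ic in (1e-6, 1e-4, 1e-3):
    period = VTH * CT / ic - RLIN * CT + TD   # shortened ramp plus delay
    ideal = ic / (VTH * CT)
    print(f"Ic = {ic:.0e} A -> {1/period/1e3:10.1f} kHz "
          f"(ideal {ideal/1e3:10.1f} kHz)")
```

With Rlin·Ct equal to Td, the compensated and ideal frequencies match at every current, which is exactly what Figure 4 shows.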
So the question arises: Can anything practical be made of this theory? More on this soon.
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- Tune 555 frequency over 4 decades
- 555 VCO revisited
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
- Gated 555 astable hits the ground running
The post Improve 555 frequency linearity appeared first on EDN.
Chiplet innovation isn’t waiting for perfect standards

Across markets such as AI, high-performance computing (HPC), and automotive, the demand for computational power continues to accelerate. This demand spans everything from compact edge devices to massive data center servers. Traditionally, that capacity was delivered by monolithic systems-on-chip (SoCs) implemented on a single silicon die. While manufacturing trade-offs can ease some pressures, a large die still limits optimization, forcing designers to balance power and performance across the entire chip rather than fine-tuning each function individually.
The problem is structural. Monolithic SoCs have reached physical and economic limits. As shown in Figure 1, reticle size is fixed, yields decline as die size grows, and the cost of large devices is prohibitively high.

Figure 1 Multi-die architectures are emerging as monolithic scaling reaches its limits. Source: Arteris Inc.
Multi-die systems offer a practical path forward. By breaking a large SoC into smaller chips, teams gain better yields, leverage proven components, and combine diverse process technologies in a single package. Additionally, chiplets can be reused across product lines, improving scalability and reducing cost.
The semiconductor industry has long envisioned chiplets as modular and interoperable, backed by fully proven standards. Companies are not waiting for that vision to materialize fully. They are already moving ahead with chiplet adoption while standards remain in flux.
Why chiplets, and why now?
Until recently, the world’s largest semiconductor companies were the predominant users of chiplet technology. These companies could control every aspect of the design, integration, and packaging processes.
Mid-size and startup companies also long for this future to be realized. However, lacking the resources of industry giants, they must adapt and take incremental steps today, even as the whole framework evolves.
Disaggregating a monolithic design into chiplets offers multiple advantages. Each die can be manufactured at the most appropriate technology node: for example, memory at 28 nm, a high-performance processor at 7 nm, and a cutting-edge CPU at 2 nm. Mounting these dies on a common silicon substrate and combining them into a single package creates a multi-die system that outperforms a monolithic design.
Standards: Ideal vs. actual
One of the issues is that the standards needed to make chiplets broadly interchangeable are not yet fully baked. They still need to be implemented, validated, and tested across different pieces of silicon before designers can count on them.
Even when two companies follow the exact same specification, small details such as sideband signals or initialization steps can differ enough to cause unexpected failures. Until compatibility is proven at scale, design teams need to remain pragmatic in their approach to developing multi-die systems.
The ideal case is often described as chiplets that fit together like Lego bricks, highlighting the requirement that they are straightforward to combine and verified so that they work reliably together. Achieving that vision will ultimately depend on widely adopted industry standards that enable dies from different sources to function as one system.
Initiatives such as AMBA CHI Chip-to-Chip (C2C), Bunch of Wires (BoW), and Universal Chiplet Interconnect Express (UCIe) are helping to define the physical and protocol layers for die-to-die (D2D) links. Yet many challenges remain in areas such as system-level verification, latency optimization, power efficiency, security, and ensuring that chiplets from different vendors perform cohesively, as shown in Figure 2.

Figure 2 Multi-die SoC adoption is expanding across multiple markets. Source: Arteris Inc.
Companies can turn to multi-die systems
Progress can’t be delayed until standards are finalized, so design teams are advancing with innovation. Some of the ways system architects are tackling multi-die design are as follows:
- Design for modularity: Partition compute, memory, and IO into reusable blocks. Utilize silicon-proven network-on-chip (NoC) interconnect IP that supports multiple die-to-die (D2D) protocols and topologies.
- Build with interoperability in mind: Utilize tools and IP that are co-validated with major electronic design automation (EDA), physical layer (PHY), and foundry partners to align chiplet workflows and ensure IP, tool, and foundry compatibility.
- Automate integration: Hand-stitching chiplets together is a time-consuming and error-prone nightmare. Employ tools that automate HW/SW interface definition and assembly, which is essential for fast iteration and derivative design creation.
- Use coherency only where it matters: Certain functions, such as CPU and GPU clusters, may require coherent chiplets and D2D interfaces that necessitate the use of a coherent NoC. By comparison, functions like AI/ML accelerators may be satisfied by non-coherent chiplets and D2D interfaces. These are simpler and more power-efficient and can be addressed with a non-coherent NoC.
- Reuse what works: Adopt chiplet templates that can scale across product families and incorporate proven monolithic dies alongside new multi-die IP in derivative designs.
- Accept that the ecosystem is co-evolving: Standards are years away from full maturity. And companies are just beginning to explore building modular, standard-aware designs, laying the groundwork for the ecosystem’s future.
Build now, don’t wait
Multi-die system development teams should adopt modular design principles, utilize proven IP blocks with flexible D2D support, implement automated integration tools, and embrace ecosystem-aware development flows. Designers should also collaborate with like-minded innovators, partners, and customers to deliver tomorrow’s complex systems today.
Chiplets design solutions show how multi-die architectures can be built and deployed now. They enable companies to address today’s performance and scalability needs while laying the groundwork for seamless interoperability in the future.
Andy Nightingale, VP of Product Management and Marketing at Arteris, has over 39 years of experience in the high-tech industry, including 23 years in various engineering and product management roles at Arm.
Special Section: Chiplets Design
The post Chiplet innovation isn’t waiting for perfect standards appeared first on EDN.
Newer, shinier DMM RTDs—part 2

In the first part of this Design Idea, we saw how a cheap op-amp can give remarkably precise readings from an RTD. We also found that it was a false economy, owing to incurable thermal drift. This concluding part fixes that problem by using a more costly OP177 precision op-amp, which is still much cheaper than an RTD sensor alone.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Now that we don’t need to worry about drift, we can balance the output of the input stage against a passive network: R8 to R11 in Figure 1. That figure also shows the (ideal, theoretical) error curve once the positive-feedback resistor R5 has been trimmed for minimum errors around 0 to 100°C, which is the same as part 1’s Figure 2.

Figure 1 Using a precision op-amp lets us set the output reference level with a passive network because thermal drifts and mismatches are no longer a problem. LED1 acts both as a rail-splitter and a power indicator, while LED2 gives a simple low-battery warning. The calculated error curve assumes perfect components and shows the limits to precision for this circuit.
While this performs identically to part 1’s circuit, there are some practical differences. The OP177 needs at least a 6 V rail to work (but can handle ±15 V), so a 9 V battery (e.g., MN1604) makes a good power source. The rail must be split, which is where LED1 comes in. R1 passes current through the voltage reference D1 and the components bridging it. That current fed through LED1 both lights it and offsets the common rail by a couple of volts from the negative one, which is plenty considering the small voltage swings involved. Bright, pure green devices dropped about 2.3 V; normal ones gave less but were rather dim. R1 was chosen to guarantee the correct operation of D1 down to a battery voltage below 5.6 V, which was where my op-amps actually failed.
Flat battery: you will be warned
The other addition (R12–15, Q1/2, LED2) should ensure that you never run the battery that far down! It’s a simple low-battery indicator that lights LED2 when the voltage across R1 falls below a critical level. As built, that tripped when the battery fell to ~6.5 V. A suitable micropower comparator—didn’t have one handy—would have been neater and not temperature-sensitive, though D2 helps with that. This circuit block isn’t shown on subsequent schematics, but could easily be added.
Next question: since we now have split rails, why not bias A1 with an offset so that its output refers to the common rail, giving 0 V at 0°C? It’s more elegant, because “negative” temperatures now give a negative output, it saves a resistor (cheapskate), and is shown in Figure 2. But that passive network, though discarded for now, will come in handy later.

Figure 2 Adding a biasing network to A1 allows its output to be at 0 mV (common) when sensing 0°C, and to swing negative for lower temperatures.
The necessary biasing is provided by R8–10. Because R4 is now bridged by ~23k, A1’s gain is increased slightly. R5’s new value gives the same error performance as before.
Increasing the measurement span
So far, we’ve only looked at a comparatively narrow temperature band. Taking that to extremes is instructive. Figure 3 plots the errors from -200 up to +600°C. (The Callendar–Van Dusen equations apply up to 661°C—the melting point of aluminum.) At the end of part 1, we saw that R5 determines the errors. Now, we can see how varying it can give a different balance of errors over a wider span. Figure 3’s plots are normalized for zero error at 0 and 100°C, which are still valid as calibration points and are used to calculate the ideal slope. No components apart from R5 are affected, though the settings of R6 and R8 will change.

Figure 3 Plotting the errors for various values of R5—the positive feedback resistor—shows that we can optimize performance for minimal errors around 0–100°C (magenta) or accept a wider error band over a greater temperature range. The red curve is within 0.1°C from -130 to +420°C and 1°C from <-200 to >+640°C.
Indicating in Fahrenheit
With a few changes, Figure 2’s circuit can give a direct 1 mV/°F reading, should you need that. The gain must be increased by a nominal 9/5 and an extra offset of 32° provided. Figure 4 shows what’s needed.

Figure 4 Changing three components gives an output of 1 mV/°Fahrenheit.
This works well, but needs care in calibration. Using the 100/138.5Ω (0/100°C) RTD sim from part 1 would mean some iteration: set 32°F, set 212°F, and repeat . . . Paralleling the 100Ω resistor with 1k3354 drops it to 93.0334Ω, the resistance of a (theoretically perfect) 100Ω-PtRTD at 0°F or -17.7778°C. (Yes, more decimals than you’ll need; it never hurts.) The sim then switches cleanly between 0 and +179.0°F—not 180°, because of the RTD’s response curve, which also introduces a minute error at the low point. Hopefully, the actual RTD will be precise enough to minimize the time spent with crushed ice and condensing steam needed for the final trim.
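For the skeptical, the arithmetic checks out. Here’s a minimal verification sketch using the standard IEC 60751 Callendar–Van Dusen coefficients (my check, not the author’s calculation):

```python
# Verify the 0 deg F parallel-resistor trick for a 100-ohm platinum RTD.
R0, A, B, C = 100.0, 3.9083e-3, -5.775e-7, -4.183e-12   # IEC 60751

def pt100_ohms(t):
    """Pt100 resistance at t deg C; the C term applies only below 0 deg C."""
    r = R0 * (1 + A*t + B*t*t)
    if t < 0:
        r += R0 * C * (t - 100) * t**3
    return r

t_0f = (0 - 32) * 5 / 9                   # 0 deg F = -17.7778 deg C
r_0f = pt100_ohms(t_0f)                   # -> 93.0334 ohms
r_par = r_0f * 100 / (100 - r_0f)         # resistor to parallel with 100 ohms
print(f"{r_0f:.4f} ohms at 0 deg F; parallel resistor = {r_par:.1f} ohms")
```

This reproduces both the 93.0334 Ω figure and the ~1335.4 Ω (“1k3354”) parallel resistor quoted above.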
Kelvins
Figure 1’s circuit gives an output (from A1) of ~260 mV at 0°C. If the RTD’s curve were linear and A1’s gain ideal, that would be 273.15 mV, the final readout still being at 1 mV/K. Applying a small offset to A1 fixes things so that we can read absolute temperatures directly. Figure 5 details the necessary changes, with the offset coming indirectly from the voltage across LED1. Again, R5 is shown trimmed for maximum accuracy in the 273.15–373.15K region—and those values are what your meter must show in millivolts when using the 100/138.5Ω sim gadget.

Figure 5 A small negative offset allows the basic circuit to give an output directly proportional to absolute temperature.
Something simple, with split supplies
A final version of the circuit can eliminate the DMM. OP177s will run on supplies from 6 to >30 V, so should you need to add an RTD to kit having suitably split rails, Figure 6—the simplest variant, and little more than part 1’s Figure 1 made practical—may be ideal.

Figure 6 Stripping out all the frills and fancies leaves a basic circuit that is ideal for running off split rails and compatible with most ADCs.
R3 is now increased to 68k for a gain of ~69, giving ~10.9 mV/°C, R5 being adjusted accordingly. The 0°C datum is now at ~2.43 V, so even with ±5 V rails, readings can span from -120 to +150°C (±0.1°C error) before A1 saturates. A higher positive rail would allow far higher temperature readings; the lower limit—always above zero volts—merely depends on the allowable error at that point.
This assumes that the output will be read directly by an ADC, probably using a 5 V reference, and that the host system can adjust the zero and the span, which is why no calibration trimmers are shown: that host needs only simple arithmetic rather than the math of a full CVD calculation. For different spans, try the equations for calculating R5 as shown at the end of part 1.
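As an illustration of that simple arithmetic, here’s a minimal host-side conversion sketch; the ~2.43 V datum and ~10.9 mV/°C slope are the nominal values quoted above, and a real host would substitute its own two-point zero/span calibration constants:

```python
# Host-side linear conversion for Figure 6's output (nominal values assumed).
V_ZERO = 2.43     # volts at the 0 deg C datum
SLOPE = 0.0109    # volts per deg C

def volts_to_celsius(v_adc):
    return (v_adc - V_ZERO) / SLOPE

print(volts_to_celsius(3.52))   # -> ~100.0 deg C
```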
The final build
These circuits were all breadboarded and checked, but I ended up building something slightly different for actual use. This variant starts with the Kelvin approach and then adds the passive reference network discarded from Figure 1, allowing instant switching between the two temperature scales.

Figure 7 Referring the output from the amplifying stage to either a positive reference or to common allows the indication to be switched between Celsius and Kelvin.
For readings in Celsius, the reference network is fed from Vref; for Kelvins, the top of the network is grounded, which both zeroes the reference and keeps the extra resistance seen by R10 constant. This may or may not be useful because on most DMMs the Kelvin range will lose a decimal point compared with the Celsius, so that reading in °C and adding 273.15 gives better accuracy, but it was a fun thing to try.
No DMMs were harmed in the making of this DI.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Newer, shinier DMM RTDs—part 1
- DIY RTD for a DMM
- Dropping a PRTD into a thermistor slot—impossible?
- Improved PRTD circuit is product of EDN DI teamwork
- Fake contacts, bounced to order
- Calculation of temperature from PRTD resistance
The post Newer, shinier DMM RTDs—part 2 appeared first on EDN.
What the special section on chiplets design has to offer

Why chiplets and why now? A special section at EDN provides a detailed treatment of this revolutionary silicon technology that’s transforming the semiconductor industry at a time when AI is forcing every serious silicon team to modularize, mix-and-match, and move faster.
This special section will chart key building blocks of chiplet technology—3D ICs, advanced packaging, compute subsystems, heterogeneous integration, interconnects, memory wall, and more—while separating hype from reality.
Find out how system-on-chip (SoC) designs differ from multi-die systems and how standards are evolving in the multi-die chiplets world. Next, a senior executive from a chiplet startup shares how it’s advancing AI systems with HBM4- and SPHBM4-based DRAM solutions.
A technical piece takes a closer look at the chiplet-based design flow and its sequence of tasks, which on the surface appears nearly identical to that of a monolithic SoC design. In reality, though, chiplet designs diverge significantly from most SoC designs.
Another article will outline eight best practices for multi-die designs, given that these designs introduce new engineering complexities in areas such as packaging, verification, and thermal dynamics. For instance, it will show how designers can treat packaging as part of the design and engineer the interconnect like a subsystem.
Another article presents 3D ICs as a practical framework for heterogeneous integration. After listing the unique challenges of advanced packaging, it offers tips for efficient 3D IC design and an expert guide to heterogeneous integration.
Then there is a blog taking a sneak peek at co-packaged optics (CPO) challenges and how advances in photonics are aiming to overcome them, including signal integrity, thermal management, optical alignment, and cost. CPO offers a vital alternative to semiconductor packaging built around copper interconnects.
Stay tuned for this chiplets design summit, one article at a time.
Related Content
- Chiplet design basics for engineers
- Chiplet basics: Separating hype from reality
- One-stop advanced packaging solutions for chiplets
- Chiplets Are The New Baseline for AI Inference Chips
- 5 Chiplets Design Challenges Hampering Wider Take-off
The post What the special section on chiplets design has to offer appeared first on EDN.
A battery charger that loudly hums: Dump it or just make it dumb?

An archaic DieHard device has seemingly died hard; is hacking it to resurrect a portion of its original function a worthwhile endeavor?
A decade-plus ago, shortly after moving (part-time, at the time) to Colorado, I came across a smoking (no pun intended…keep reading) deal at Sears: a 12V vehicle battery charger supporting both standard SLA (sealed lead acid) and AGM (absorbent glass mat) cells, along with 2A, 10A and 50A (!!!) charging current options, for $32.99. I bought two, one for me and the other for my then-girlfriend (and now-wife), since we had separate residences at the time.

I’ve held onto both—in spite of the fact that I also now own several newer microprocessor-controlled (versus this transformer-based model) chargers, not only significantly more compact but offering enhanced features such as desulfation support—primarily due to the 50A jump-start capability that only the old-school DieHard charger seemingly delivers.
Geriatric degradation
When I fired one of them up a few months back after not using either of them for a while, though, I noticed that it was making a loud humming sound—incrementally louder at the 2A, then 10A, and finally 50A settings, as I’d recollected from the past—but much louder at each output option than I’d remembered. To confirm, I pulled the other charger out of its box, which also hummed but at the noticeably lower din that I’d recalled. Plus, the first charger didn’t seem to be doing anything charging-wise, whereas the second still seemingly worked fine.
Here’s the first (loud humming) charger, which I re-hooked up just yesterday to my 2001 Volkswagen Eurovan Camper (which uses a standard SLA, not AGM, battery), at the 2A setting:

10A setting:

and 50A setting:

The gauge readings don’t seem to make sense in any of these cases. As background, I top off the charge (normally using one of my more modern chargers) on the battery in the in-storage van once a month at the beginning of the month. I took those photos a bit more than halfway through the month, after a small amount of leakage discharge had inevitably occurred (less than at the end of the month, but still not nothing). So, the full-charge indication seemingly doesn’t reflect reality. Compared to them, the 2A- and 10A-setting displays when using the second (lower humming) charger are more in line with my expectations:


as is the second charger’s 50A-setting display, which I’ve shot as a video because this time, unlike previously, the LED is rapid-blinking as expected:
A 0V output isn’t always bad news
Just prior to taking those photos yesterday, I’d actually begun my investigation by hooking both chargers up to my multimeter to see what they were outputting. Here’s the first (loud humming) charger at its 2A, 10A, and 50A settings, first configured for use with an AGM battery:



and then set for a standard SLA battery:



The output levels were, I initially (albeit incorrectly) ascertained, in the ballpark of what one would expect for a 12V battery charging target, although perhaps a bit low. Now look at what happened when I hooked the second (lower humming) charger up, again at its 2A, 10A and 50A settings, first configured for use with an AGM battery and then a standard SLA battery:

I’ve saved you from looking at six consecutive images of the multimeter displaying the exact same thing: 0V. This initial outcome actually had me wondering whether the second (lower humming) charger was the one that had “gone south”, until I did a bit of online research and learned that this behavior is to be expected. Unless the charger detects that it’s connected to a correct-polarity battery that isn’t already drained (hold that thought), it will disable its output, among other reasons, to prevent sparking in the presence of hydrogen and other off-gassing.
Some amount of transformer hum is to be expected, of course, as many folks reading this already realize; the root-cause phenomenon is known as magnetostriction and results in a generated tone at twice the mains AC frequency (i.e., at 120 Hz in the U.S., for example):
Additional hum sources, quoting Wikipedia, are “stray magnetic fields causing the enclosure and accessories to vibrate.” And it’s also normal for the hum volume to increase somewhat under higher load. Abnormally loud hum and other noise, however, is the result of other, degradation-induced factors, such as progressive disintegration of the transformer’s core adhesive, resulting in separation of the laminated layers, or a rattle caused by loose component mounting bolts.
(Sorta-) twin sons of different mothers
At this point, I’ll point out something else interesting (at least to me) that my research uncovered: there were (at least) two different internal designs that reached production for this particular DieHard charger. It’s the model 71222; as you can see from this closeup of the outer box, mine’s specifically a model 28.71222 (here’s a link to the user manual):

But in searching around, I also came across references to another version, the model 200.71222, including another user manual link (this time even including a parts list and wiring diagram!). The two variants seem functionally identical from a high-level description standpoint and look similar from the outside, too, aside from a multicolor front panel motif in the model 200.71222:

versus my more monochrome model 28.71222. But the insides are a different matter…
At this point, I’ll point out another “information” (I’m using the term somewhat loosely) source that I came across during my research: this video:
Bonus points to Jason Hemphill, the video creator, for knowing (for example) the difference between the transformer’s primary and secondary sides, as well as for (sorta) explaining the purpose of two diodes connected to the transformer’s center tap secondary. But when, in pointing out what he called the “little smart board”, he voiced the following elucidation:
These wires over here…they’re just control…they don’t do anything…
I admittedly started shaking my head. And when, with the charger still powered up, he then yanked the “little smart board’s” fourth (black) wire out of what it was plugged into at its other end (item 7, the 35A circuit breaker, if you’ve already cross-referenced the parts list and wiring diagram in the user manual I pointed out to you earlier), I about fell out of my chair. And then I realized that although his charger was also a DieHard model 71222, it didn’t look like mine on the inside; I hadn’t yet noticed the front-panel motif variance between the two.
Looking “under the hood”
At this point, I’ll transition to the teardown portion of my write-up, before returning and concluding. Beginning with the obligatory outer box shots:





Can’t forget this all-important one…

I next opened it up:

and then pulled out the contents (I later found the paper user manual in my filing cabinet):




Convenient carry handle:

Only after connecting the charger to the battery and selecting the desired settings should you, and I quote, “Plug the charger into a live AC power outlet”. Further, “Unplug the AC cord before disconnecting the battery clips”….as well as prior to unplugging internal cabling, yes?

Back off my soapbox…

and back to the backside to uncoil the power cord:

Don’t worry, I won’t ascend the soapbox again. That said…

And now to dive inside. You may have noticed the four screw heads on the sides, two per. Guess what comes next?


That got me partway there:

Oh yeah, there’s another screw head on the underside:




At this point, I was still clinging to the delusion that this charger might be working (I hadn’t yet found Jason Hemphill’s video), so I didn’t disassemble it further. Still, I hope the photos of the internals of my model 28.71222 will be educational for you, not only standalone but also in comparison to Hemphill’s presumed model 200.71222. Here, first off, is the rear-located internal PCB, both much larger than the one in Hemphill’s charger and with an integrated circuit breaker (more accurately stated: fuse pair):
Check out the sizeable SCRs (silicon-controlled rectifiers) and discrete transistors bolted to metal heat-transfer plates on either side of the PCB!
An inner view of the front panel, with the charging current switch at lower left, the gauge at upper right and the AGM-vs-standard SLA switch below it:
And, last but definitely not least, the predominant contributor to the unit’s ~11 lb weight, the transformer. Here’s the primary winding:
And the secondary:
and finally, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, perspectives of the top (primary at left, secondary at right):
and one side (ditto):
Old-school pros, cons and conclusions
Note that the output voltage Jason Hemphill was getting out of his charger prior to his “hack” is a close approximation of what I’m seeing with mine at its 50A setting. He indicated in his video that he exclusively uses his “fixed” unit at 50A, so I’m assuming his entire video was also shot with it configured that way (I couldn’t find a sufficiently clear video frame of the front panel to confirm). And by the way, in scrolling through the comments and his responses, I realized I owed him more credit than the little I’d initially allocated (with minor grammar tweaks by yours truly):
You are exactly right. It is not fixed. And you’re also right; it’s likely to be a 10-cent transistor. But most people will not be able to fix the transistor issue. They won’t spend the time to find it, order it and replace it. The solution I’m offering is to turn it into an old school charger. It takes out the safety technology. This solution is an option for people who are old school and are used to working with things that way. As I stated in the video, this isn’t an option to hook up to a battery and walk away. So, if you’re someone who can’t hook up a battery right or doesn’t understand the idea of overcharging and needs idiot-proof technology to do that for you, this isn’t your option: go buy a new one. But if you’re old school, this will do the job.
Further perusing the 100+ comments (resulting from 115,000+ views to date!) of Jason Hemphill’s video was not only educational but also entertaining. I learned, for example, that the DieHard model 200.71222 is internally identical to the Schumacher Electric (the original developer, I’m assuming) SE5212A charger. No idea who originally developed my DieHard model 28.71222, however. And even if I did, I’m not going to try to resurrect this one, no matter that plenty of other folks seemingly prefer ones of a fully manual fashion.
Sure, by bypassing the “little smart board,” the now-manual charger might attempt to resurrect a fully drained battery, but my more modern chargers already do the same thing. They, plus the still-working sibling to my malfunctioning model 28.71222, will also automatically shut off at the end of the charging cycle, versus overcharging and potentially ruining the battery (not to mention causing other potential broader problems). And they’ll also save me from calamity should I distractingly hook up the charger to the battery in a reverse polarity state.
Thoughts on the topics discussed and internal circuitry revealed in today’s piece? Let me know in the comments! If you’re interested in inheriting this charger and converting it to a manual version yourself (note that I take no financial or other responsibility for any subsequent calamities), send me an email! And by the way, if you’re interested in finding out more about how car battery testers work, head here!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- SLA batteries: More system form factors and lithium-based successors
- Technology Simplicity: A Relocation Transition Provides The Opportunity
- Dead Lead-acid Batteries: Desulfation-resurrection opportunities?
The post A battery charger that loudly hums: Dump it or just make it dumb? appeared first on EDN.
From gap to signal: Non-contact capacitive displacement sensors

Capacitive displacement sensors turn tiny gaps into actionable signals. By measuring changes in capacitance as a target moves, these devices deliver precise, non-contact readings of position and motion. Their touch-free nature makes them ideal for fragile surfaces, high-speed machinery, and environments where mechanical wear is unacceptable.
From tuning dials to nanometers: The capacitive lineage
Historically, the lineage from the vintage “gang condenser” to modern capacitive displacement sensors is surprisingly direct. In early radio receivers, variable air capacitors translated a knob twist into resonance tuning by modulating plate overlap. Modern sensors exploit this same fundamental relationship—geometry and permittivity—but invert the objective.
Rather than adjusting capacitance to achieve resonance, they elevate infinitesimal ∆C into the measurement itself, quantifying motion with great fidelity. What was once a utility for frequency selection has become the primary metric of precision measurement, a century-old tuning trick reborn as precision instrumentation.

Figure 1 Rotating plates in a gang condenser modulated capacitance to tune resonance in early radio receivers. Source: Author
Capacitive sensing in everyday tools
A familiar example of this principle at work is the digital caliper. Most mainstream models utilize capacitive linear encoding: as the sliding jaw moves across a scale patterned with fixed conductive tracks, the shifting electrode geometry produces periodic variations in capacitance. The caliper’s onboard electronics digitize these differential phase shifts, translating them into precise position readouts with resolutions typically reaching 0.01 mm.
This method effectively mitigates errors from minor gaps in the slider’s fit. In essence, the tool leverages the same fundamental physics as the gang condenser—the interplay of electrode overlap and dielectric spacing—but adapts that variable capacitance into a robust, high-resolution incremental measurement system.

Figure 2 A teardown reveals the underlying sensing mechanism of a digital caliper. Source: Author
Capacitive displacement: The secret to frictionless precision
When mechanical gears grow too bulky and optical sensors prove too fragile, capacitive displacement sensing steps in. By detecting subtle shifts in an electric field—changes invisible to the eye—these sensors achieve fine accuracy. From high-end CNC machines to scientific instruments, they raise the bar for measurement—precision delivered without the drag of friction.
Capacitive displacement sensors are high-precision, non-contact instruments that measure position or distance by detecting changes in electrical capacitance. The system functions as a parallel plate capacitor, where the sensor probe serves as one conductive plate and the target object acts as the other.
As the gap (dielectric space) between the probe and the target fluctuates, the capacitance shifts in inverse proportion to the distance. By monitoring these minute variations, the sensor provides exceptionally accurate, sub-nanometer resolution measurements without ever making physical contact with the target.
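In ideal parallel-plate terms, that inverse relationship is C = ε0εrA/d, so the gap can be recovered directly from a capacitance reading. Here’s a minimal sketch with illustrative numbers (the electrode area and capacitance values are assumptions, not vendor figures):

```python
# Recover the probe-to-target gap from measured capacitance, modeling the
# sensor as an ideal parallel-plate capacitor: C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def gap_from_capacitance(c, area, eps_r=1.0):
    """Gap in meters for capacitance c (F) and electrode area (m^2)."""
    return EPS0 * eps_r * area / c

area = 50e-6                # an assumed 50 mm^2 sensing electrode
c_meas = 4.427e-12          # ~4.4 pF reading
print(f"gap = {gap_from_capacitance(c_meas, area)*1e6:.1f} um")   # ~100 um
```

Real controllers linearize and temperature-compensate this relationship, but the underlying geometry is just this.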
In real-world practice, a capacitive displacement sensor system is not just a single probe but a complete measurement chain that typically includes a sensor head, a controller, a power supply, and cabling. The sensor head (probe) is the capacitive element that interacts with the target surface, while the controller provides excitation, interprets the capacitance changes, and outputs a usable displacement signal.
A power supply—either integrated into the controller or external—ensures stable operation, and shielded cables and connectors maintain signal integrity. For example, systems like the Lion Precision CPL series or Micro-Epsilon capaNCDT sensors use this modular setup: a probe head for sensing, a controller for signal processing, and a power supply to stabilize the system. Some controllers are designed for a single probe input, while others can accommodate multiple probes, enabling multi-point measurements when required.

Figure 3 This capacitive displacement sensor delivers single-channel, noncontact measurement for precision position and displacement applications. Source: Lion Precision
Guard ring and active guarding: Ensuring measurement integrity
On paper, the principle of capacitive displacement measurement relies on the operation of an ideal parallel-plate capacitor. When the distance between the sensor and the measurement object changes, the total capacitance varies accordingly.
If an alternating current of constant frequency and amplitude flows through the sensor capacitor, the resulting alternating voltage becomes directly proportional to the distance to the target (or ground electrode). This variation in distance is detected and processed by the controller, which then outputs a value representing the measured displacement through its designated channels.
However, since the sensor (sensing element) acts as one conductive plate and the target object as the other, accurate measurement requires that the electric field remain confined to the space between them. If the field extends to nearby objects or surfaces, any movement of those items may be misinterpreted as a displacement of the target.
To prevent such interference, a guard ring with active guarding is commonly employed, a technique that ensures the sensing field is restricted to the intended measurement zone, thereby maintaining measurement integrity. In practice, the guard ring—a conductive shield around the sensing element—is energized with an alternating voltage.

Figure 4 Guard ring energizes with AC voltage, confines field, and ensures accurate sensing. Source: Author
Putting it all together, the capacitive displacement measurement process begins with the sensor generating a controlled electric field between the probe and target; capacitance changes are detected as the gap distance varies; the signal is processed by converting the capacitance variation into a proportional voltage output; and the distance is finally calculated from the direct correlation between voltage and displacement.
The capacitive displacement sensor circuit integrates several essential elements, including a high-frequency oscillator, capacitance-to-voltage converter, signal conditioning amplifier, guard drive circuitry for noise reduction, a temperature compensation network, and an output linearization circuit.
To ensure accuracy, the guard ring surrounding the sensing element is actively driven at the same potential and phase as the sensor signal, suppressing stray capacitance and preserving uniformity of the electric field.
Wrap-up: Forking up the gaps for refinement
Capacitive displacement sensors are prized first and foremost for precision positioning—keeping machine tools, assemblies, and instruments aligned to exact tolerances. Yet their talent does not stop there. The same principle that tracks motion can also measure thickness, detect vibration, or monitor material expansion.
And while they excel with conductive targets, clever designs enable them to sense non-conductive materials as well, broadening their reach across manufacturing, research, and quality-control applications.
Similarly, capacitive displacement sensors share much with eddy-current sensors; both excel at non-contact measurement and precise positioning. The key difference lies in their physics: one reads electric field shifts, the other tracks magnetic field interactions.
Moving forward, as usual when handling a complex topic, some key pieces may slip through the narrow gaps. Those will be forked up, revisited, and refined. One such area worth expanding later is the role of knob-on-display (KoD), a practical human-machine interface (HMI) element that bridges the gap between tactile mechanical control and dynamic visual feedback.
Interestingly, KoD is often overlooked in broader displacement sensing discussions, despite its sophisticated use of capacitive grids to track angular position. By re-contextualizing the rotary dial as a specialized coordinate-shifting sensor, we move beyond simple HMI aesthetics into the realm of high-reliability, closed-loop feedback systems.
Your insights or questions on KoD, or on any other aspect, are welcome to help sharpen the refinement process.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Capacitive Touch Sensor
- An introduction to capacitive sensing
- Capacitive sensing techniques and considerations–The basics
- Capacitive sensors can replace mechanical switches for touch control
- Effective Design Techniques for Capacitive Sensing in Consumer Applications
The post From gap to signal: Non-contact capacitive displacement sensors appeared first on EDN.
Balun transformers: Linking balanced to unbalanced

Balun transformers remain indispensable in RF and high-frequency design, serving as the quiet interface between balanced transmission lines and unbalanced circuits. By enabling impedance matching, minimizing signal distortion, and suppressing common-mode noise, they provide the foundation for reliable connectivity in applications ranging from antennas to amplifiers to broadband communication systems.
As wireless technologies push toward higher frequencies and tighter integration, understanding the principles and practical nuances of balun transformers is key to optimizing performance and ensuring design resilience.
The term “balun” itself comes from balanced to unbalanced. While many implementations use transformer coupling, not all baluns are transformer-based—some rely on transmission line techniques. Using “balun transformer” specifies the transformer-type design, distinguishing it from coaxial sleeve or other non-transformer baluns.
Historic note: The iconic TV balun adapter
Before digital tuners and streaming boxes took over, this compact 300 Ω to 75 Ω matching transformer was a fixture in analog television setups. Designed to reconcile the impedance and mode mismatch between twin-lead ribbon antennas and coaxial inputs, it featured screw terminals for the antenna wire and a standard coaxial plug for the TV’s antenna input socket.
Connected at the final stage of the antenna lead and plugged directly into the tuner, it quietly performed its dual role—impedance transformation and balanced-to-unbalanced conversion. This ensured that rooftop signals reached living rooms with minimal distortion. In the analog broadcast era, this unassuming adapter was the last link in the RF chain, faithfully bridging generations of antenna technology.
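For a transformer-type balun, that 300 Ω to 75 Ω transformation follows directly from the turns ratio, since the impedance ratio goes as its square. A two-line check:

```python
# Impedance ratio of a transformer balun equals the turns ratio squared:
# Zin/Zout = (N1/N2)^2, so N1/N2 = sqrt(Zin/Zout).
import math

def turns_ratio(z_in, z_out):
    return math.sqrt(z_in / z_out)

print(turns_ratio(300, 75))   # -> 2.0: a 2:1 turns, 4:1 impedance balun
```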

Figure 1 Screwing the 300 Ω ribbon cable into the balun terminals and plugging its coaxial end into the TV’s antenna input socket completes the balanced-to-unbalanced transition. Source: Author
Video balun transformers: Bridging coax and twisted pair
Video balun transformers—more commonly referred to simply as video baluns in industry parlance—extend the utility of balun technology beyond RF and audio domains into the realm of video signal transmission. These devices convert unbalanced coaxial signals (such as composite video) into balanced signals suitable for twisted-pair cabling, and vice versa.
This conversion not only reduces susceptibility to electromagnetic interference (EMI) but also enables cost-effective long-distance video distribution using standard Cat5/Cat6 cabling. Passive video baluns rely on transformer coupling to maintain signal integrity without external power, while active baluns incorporate amplification and equalization to support higher resolutions or longer cable runs.
In surveillance and broadcast applications, video baluns have become indispensable for bridging legacy coaxial infrastructure with modern structured cabling, ensuring clean signal delivery and simplified installation.

Figure 2 Video baluns connect coaxial BNC interfaces to twisted-pair cabling and deliver HD CCTV signals over long distances with reduced interference. Source: Author
As a quick aside, it’s worth noting that the K and MP ratings of a video balun both denote its supported resolution class. The MP rating specifies the maximum camera resolution in megapixels, while the K rating expresses the same capability in terms of horizontal pixel count.
In practice, both ratings reflect the balun’s bandwidth and signal-handling capacity for HD CCTV. For example, a 4K balun supports roughly 8 megapixels of resolution, since 3840 × 2160 pixels equals about 8.3MP (8.3 million pixels).
Baluns in practice: Theory meets application
Balun transformers are invaluable not only for converting between balanced and unbalanced signals but also for performing impedance transformations with minimal loss. Unlike LC circuits, many balun designs can operate effectively across very wide frequency ranges.
In RF applications, baluns are commonly used to interface antennas with transmitters and receivers, ensuring that as much power as practically possible is delivered. This section blends accessible theory—without heavy mathematics—with a few practical pointers and real-world implementations.
Among the fundamental designs, the balun transformer is the most widely recognized. Using magnetic coupling, it converts between balanced and unbalanced signals while providing excellent isolation and impedance matching. Transmission-line baluns achieve balance through carefully arranged lengths of coaxial or twisted-pair lines, making them well-suited for wideband RF applications.
Hybrid baluns combine transformers and transmission-line techniques, offering flexibility across frequency ranges. Together, these basic types form the foundation for more advanced designs, and understanding their principles helps engineers and experimenters select the right balun for applications ranging from antenna systems to CCTV.
In practice, the terms “balun transformer” and “transformer balun” both refer to the same device: a balun realized through transformer coupling. The difference is mostly in emphasis. Balun transformer highlights the function first—balanced-to-unbalanced conversion—while noting that it’s implemented as a transformer.
Transformer balun highlights the construction first, pointing out that it’s a transformer adapted to serve as a balun. Both usages are common, but in technical writing “balun transformer” is often preferred because it stresses the primary role of the device.
A further distinction often made is between voltage baluns and current baluns. A voltage balun enforces equal voltages on the balanced output terminals, which can work well in many cases but may allow unequal currents if the load is not perfectly symmetrical. In contrast, a current balun enforces equal and opposite currents in the balanced lines, often providing better suppression of common-mode currents on antenna feedlines.
Both approaches have their place: voltage baluns are straightforward and widely used, while current baluns are often preferred in RF antenna systems where minimizing feedline radiation and maintaining balance are critical.
Also essential to audio systems, baluns form the core of passive direct injection (DI) boxes. A passive DI employs a transformer—acting as a voltage balun—to convert an unbalanced, high-impedance instrument signal into a balanced, low-impedance output. This conversion is vital for interfacing high-Z sources such as electric guitars with low-Z mixing console inputs over long cable runs.
By enforcing equal and opposite voltages on the balanced lines, the transformer achieves high common-mode rejection, suppressing noise and ensuring transparent signal transfer. This application demonstrates how the balancing principles fundamental to RF and CCTV extend seamlessly into professional audio, underscoring the cross-domain versatility of balun technology.

Figure 3 A passive DI box handles extreme signal levels without introducing any distortion. Source: Radial Engineering
Seemingly, instead of diving straight into balun transformer–based RF or video projects, makers may find it easier—and just as rewarding—to begin with a closely related audio build: the passive DI box. Ready-to-use direct box transformers are widely available, and their simplicity makes them an ideal starting point for a fun and accessible DIY project.
Notable part numbers include JT-DB-EPC and A187A10C, both excellent examples of components that make this project approachable for beginners. The Hammond 1140-DB-A is another great catch, offering a versatile option for those eager to experiment with high-quality audio designs.

Figure 4 The 1140-DB-A direct box transformer delivers a balanced microphone output from an unbalanced line-level signal, enabling long cable runs with minimal high-frequency loss. Source: Hammond
From first steps to deeper layers
As is often the case, we have only just gotten our feet wet—there is still a vast ocean of balun transformer theory, design variations, and application nuances left to explore. From specialized wideband implementations to creative DIY builds, each path opens new insights into how these deceptively simple devices shape signal integrity across RF, audio, and video domains.
This overview is meant as a starting point, a foundation for deeper dives into the many layers of balun transformer technology that await.
Your turn: If this sparked your curiosity, take the next step—experiment with a simple antenna balun build, revisit your audio gear with fresh eyes, or explore advanced designs in RF literature. Share your experiences, questions, or even your own schematics, because the best way to deepen understanding is to connect theory with practice.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Sevick’s Transmission Line Transformers, Baluns
- Delicate balancing acts ensure balun performance
- Understand baluns for highly integrated RF modules
- Harmonic balance simulation speeds RF mixer design
- Using Baluns and RF Components for Impedance Matching
The post Balun transformers: Linking balanced to unbalanced appeared first on EDN.
Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating

We’re past the point where voice can be treated as just another feature.
For more than a decade, the smart home has operated under a flawed assumption: that voice is optional. It’s not. As homes grow more complex and connected, voice is the only interface that aligns with how people actually live.
Traditional interfaces don’t scale: touchscreens fail when your hands are full, apps demand too much attention, and remotes are always missing when you need them. Voice is the only input that works across rooms, contexts, and users, provided it works reliably.
And yet, we’re still tethered to physical buttons and remote controls, because we don’t fully trust voice interfaces. They miss commands, struggle in noisy environments, and break the moment connectivity becomes unstable. That’s not a UI flaw. It’s an architectural one.
To replace the light switch, voice needs to be always available, always accurate, and always in context. That means rethinking where intelligence lives and how decisions are made.
Hybrid Voice AI architecture is not an incremental upgrade; it’s an engineering breakthrough that transforms the smart home from a scattered set of reactive gadgets into a cohesive, proactive system. By separating real-time, on-device reflexes from deep, cloud-based reasoning, this architecture is designed to make voice a trusted, primary interface, every time, in every room.
Making voice work in the real world
The flaw in current voice technology isn’t a lack of data; it’s a lack of clarity.
Real homes are acoustically chaotic. They’re full of overlapping conversations, background music, household noise, and hard surfaces that introduce echo and reverb. Users speak from different rooms, distances, and angles. Commands are often ambiguous or incomplete. These aren’t edge cases. They’re the default operating conditions.
Current cloud-only models are powerful but slow, while legacy on-device models are fast but dim-witted. Neither alone can deliver the “Star Trek” experience users crave. To achieve the non-negotiable standard of 100% reliability, we need a system that mimics the human brain’s ability to process reflexes locally and complex thoughts deeply.
In that context, today’s voice interfaces consistently fall short, not because of a lack of data or model size, but because of fundamental architecture-level decisions about where processing happens, how quickly systems respond, and how they handle failure.
A symbiotic two-tier architecture
The innovation lies in splitting the intelligence. By decoupling immediate execution from deep reasoning, we create a system that is both instant and intelligent.
- The Reflex Layer – Edge AI (Supports Instant Response):
  - Definition: Think of this as the smart home’s autonomic nervous system.
  - Innovation: A high-performance, always-on small language model (SLM) embedded directly on the device’s silicon.
  - Function: Handles the “here and now.” Commands like “Lights on” or “Volume down” are processed locally with near-zero latency.
  - Impact: Delivers absolute privacy and instant responsiveness. No data leaves the room, and the experience feels as immediate as flipping a physical switch.
- The Reasoning Layer – Cloud AI (Intelligent Coordination):
  - Definition: This acts as the system’s prefrontal cortex, responsible for reasoning.
  - Innovation: Leverages large language models (LLMs) to manage long-term state, memory, and complex logic across devices and use cases.
  - Function: Handles the “what if” and “what next.” It manages household routines, coordinates multiple devices, and draws inferences from incomplete inputs (e.g., “Order dinner for whoever is home tonight.”).
  - Impact: Enables devices to go beyond command execution: they begin to understand intent, anticipate user needs, and adapt over time (Figure 1).

Figure 1 A hybrid voice stack routes audio through on-device perception (AEC, spatial analysis, separation, intent gating) and escalates only complex requests to cloud reasoning. (Source: Kardome)
Differentiation for the decade ahead
For OEMs and Tier 1 suppliers, architecture, not features, is emerging as the defining battleground for the next generation of smart home systems.
The market is saturated with devices that can set timers, play music, or toggle lights. These capabilities are now commodities. What will set future systems apart is their ability to demonstrate true Auditory Intelligence: the ability to perceive, localize, and interpret human speech reliably, even in noisy, multi-speaker, real-world environments.
By integrating spatial hearing AI and cognition technologies into a hybrid architecture, manufacturers can go beyond individual product features and instead build the auditory nervous system of the modern home.
We are past the era of voice assistants that require users to repeat themselves or speak in rigid syntax. Hybrid Voice AI enables a different class of experience—one where technology is felt, but rarely seen.

Figure 2 Spatial processing turns a mixed audio scene (TV + two speakers + reverb) into separated target streams suitable for intent detection and command execution. (Source: Kardome)
What “reflex vs. reasoning” means
In a production voice system, “hybrid” isn’t simply “ASR on-device and an LLM in the cloud.” It’s a routing architecture with a continuously running perception pipeline that decides:
- Is anyone speaking?
- Who is speaking (and where)?
- Is it directed at the device?
- Can we execute locally, or do we need cloud reasoning?
A practical edge “reflex” stack typically includes the following stages (a minimal routing sketch follows the list):
- Acoustic front end (always-on): microphone capture → gain control / denoise → echo cancellation (to remove the device’s own playback).
- Spatial scene analysis: estimate how many sources exist and where they are relative to the device (near/far, left/right, different rooms).
- Source separation + target selection: isolate the intended speaker stream(s) and suppress competing sources (TV, music, second speaker).
- Speech activity detection + endpointing: stable detection of speech start/stop to avoid clipped commands and reduce false triggers.
- Device-directed intent gating (SLM): a lightweight model answers: “Is this speech for the device?” using spatial cues + conversational flow + linguistic signals.
- Execution vs. escalation:
  - Local path: deterministic actions and short commands (“lights on,” “stop,” “volume down”) with minimal latency.
  - Cloud path: long-horizon reasoning, multi-device planning, and tasks requiring external knowledge, invoked only when needed.
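To make the routing concrete, here is a minimal, runnable sketch of the decision flow above. Every stage is a stub and all function and intent names are hypothetical; it shows only where the local-versus-cloud decision sits in the control flow, not any vendor's actual pipeline.

```python
# Hypothetical sketch of the reflex-vs-reasoning routing described above.
from typing import Optional

LOCAL_INTENTS = {"lights_on", "lights_off", "stop", "volume_up", "volume_down"}

def front_end(audio: bytes) -> bytes:
    """Stub for the acoustic front end: gain control, denoise, echo cancellation."""
    return audio

def separate_target(audio: bytes) -> Optional[bytes]:
    """Stub for spatial scene analysis + source separation; None if no speech."""
    return audio or None

def is_device_directed(utterance: bytes) -> bool:
    """Stub for the SLM intent gate (spatial cues + conversational flow + language)."""
    return True

def classify_local(utterance: bytes) -> str:
    """Stub for a small on-device intent classifier."""
    return "lights_on"

def route(audio: bytes) -> str:
    """Per-utterance decision: ignore, execute locally, or escalate to the cloud."""
    target = separate_target(front_end(audio))
    if target is None or not is_device_directed(target):
        return "ignored"              # silence or a side conversation
    intent = classify_local(target)
    if intent in LOCAL_INTENTS:
        return f"local:{intent}"      # reflex path: near-zero latency
    return "escalate:cloud"           # reasoning path: only when needed

print(route(b"\x01\x02"))             # -> local:lights_on
```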
The engineering advantage is that the system can stay fast and predictable for everyday commands while still enabling deeper capabilities when appropriate.
Why spatial audio is the “make or break” layer
Most failures in today’s voice assistants begin before language: the system is fed garbage audio (mixed speakers, reverberation, background media), then asked to “understand” it. Hybrid architectures push the hard work earlier: fix the audio scene first, then do language.
Spatial processing matters because it enables three foundational capabilities:
- Localization: determine where speech is coming from and whether it’s in the same room.
- Separation: isolate a voice even with overlapping speakers and media noise.
- Attribution: reduce wrong-room actions and improve “who said what” reliability.
This is also where direction of arrival (DOA)-only approaches struggle in real homes: reflective surfaces create strong echoes and multiple delayed arrivals. A “flat” directional estimate can become unstable under reverb, causing separation and attribution errors. A more robust approach treats each source as having a unique spatial signature (an “acoustic fingerprint”) and uses that signature to stabilize separation and tracking over time.
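As a toy illustration of that idea (and emphatically not Kardome's actual algorithm), one could summarize each multi-microphone frame by a normalized inter-channel covariance vector and assign frames to sources by similarity, updating each stored signature slowly so it can track drifting echo paths. The feature choice, threshold, and update rate below are all illustrative assumptions:

```python
# Toy sketch: per-source spatial signatures instead of a single DOA angle.
import numpy as np

def spatial_signature(frame: np.ndarray) -> np.ndarray:
    """frame: (n_mics, n_samples) array. Returns a unit-norm feature vector."""
    cov = frame @ frame.T                    # inter-channel covariance
    vec = cov.flatten()
    return vec / (np.linalg.norm(vec) + 1e-12)

def assign_source(sig: np.ndarray, signatures: list) -> int:
    """Match a frame to a known source by similarity, or register a new one."""
    for i, ref in enumerate(signatures):
        if float(np.dot(ref, sig)) > 0.9:    # cosine similarity of unit vectors
            upd = 0.9 * ref + 0.1 * sig      # slow update: echo paths drift
            signatures[i] = upd / np.linalg.norm(upd)
            return i
    signatures.append(sig)
    return len(signatures) - 1

# Example: two mics; frames from the "same" source map to the same index.
rng = np.random.default_rng(0)
src = rng.standard_normal((2, 256))
sigs = []
print(assign_source(spatial_signature(src), sigs))        # 0 (new source)
print(assign_source(spatial_signature(src * 1.1), sigs))  # 0 (same signature)
```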
Latency, offline behavior, failure modes
If voice is going to replace physical controls, reliability can’t be an aspiration; it has to be engineered with explicit budgets and test matrices.
Latency budget
Humans pause roughly 200 ms between conversational turns, while cloud round trips often land in the 1–3 second range: good enough for Q&A, not good enough for control.
The reflex path should therefore be designed so the most common commands complete without waiting on the network.
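One way to make that budget enforceable rather than aspirational is to wrap each path in an explicit deadline. The helper below is a generic sketch; the budget values simply restate the rough numbers above, and a production system would add cancellation and a real fallback policy:

```python
# Sketch: hard latency budgets for the reflex and reasoning paths.
import concurrent.futures
import time

REFLEX_BUDGET_S = 0.2   # ~200 ms conversational turn gap
CLOUD_BUDGET_S = 3.0    # upper end of typical cloud round trips

def run_with_budget(fn, budget_s):
    """Run fn(); return its result, or None if the deadline is missed."""
    ex = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    fut = ex.submit(fn)
    try:
        return fut.result(timeout=budget_s)
    except concurrent.futures.TimeoutError:
        return None                  # caller falls back: retry, degrade, queue
    finally:
        ex.shutdown(wait=False)      # don't block on a straggling worker

def slow_cloud_call() -> str:
    time.sleep(0.5)                  # stand-in for a 500 ms network round trip
    return "ok"

print(run_with_budget(slow_cloud_call, REFLEX_BUDGET_S))  # None: budget blown
print(run_with_budget(slow_cloud_call, CLOUD_BUDGET_S))   # ok
```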
Offline and “brownout” modes
Define tiers of capability that remain functional without connectivity:
- Tier A (must work offline): lights, volume, stop/quiet, timers, basic routines.
- Tier B (cloud-required): deep reasoning, external services.
This avoids a binary “voice works / voice is dead” experience and increases user trust.
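In code, the tiering can be as simple as a lookup that never touches the network for Tier A. The intent names below are hypothetical placeholders for a real command vocabulary:

```python
# Sketch: tiered dispatch that degrades gracefully when connectivity drops.
TIER_A = {"lights_on", "lights_off", "volume_up", "volume_down",
          "stop", "set_timer", "run_basic_routine"}   # must work offline

def dispatch(intent: str, online: bool) -> str:
    if intent in TIER_A:
        return f"local:{intent}"     # Tier A: never depends on the network
    if online:
        return f"cloud:{intent}"     # Tier B: deep reasoning, external services
    return f"deferred:{intent}"      # degraded, but not "voice is dead"

print(dispatch("lights_on", online=False))     # local:lights_on
print(dispatch("order_dinner", online=False))  # deferred:order_dinner
```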
Failure modes that must be tested (not treated as edge cases):
- overlapping speakers (barge-in, crosstalk)
- competing media (TV/music)
- far-field speech + occlusion (speaker in hallway / adjacent room)
- changing echo paths (content and volume changes)
- reverberant rooms (kitchen tile, open-plan living spaces)
Metrics that map to trust (beyond WER), with an aggregation sketch after the list:
- end-to-end command success rate by scenario class
- false accept / false reject rates for device-directed intent gating
- speaker attribution / room attribution accuracy
- P95 latency (not just average) for Tier A commands
- recovery time after connectivity loss
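As a small illustration, a test harness might roll raw trial logs up into two of these metrics, per-scenario success rate and an index-based P95, as in this sketch with made-up rows:

```python
# Sketch: aggregating trial logs into trust metrics; all rows are invented.
trials = [  # (scenario_class, success, latency_s)
    ("overlapping_speakers", True,  0.12),
    ("competing_media",      False, 0.31),
    ("far_field",            True,  0.18),
    ("overlapping_speakers", True,  0.15),
]

def success_rate(rows, scenario: str) -> float:
    hits = [ok for s, ok, _ in rows if s == scenario]
    return sum(hits) / len(hits)

def p95_latency(rows) -> float:
    lat = sorted(t for _, _, t in rows)
    return lat[min(len(lat) - 1, int(0.95 * len(lat)))]   # approximate nearest-rank P95

print(success_rate(trials, "overlapping_speakers"))  # 1.0
print(p95_latency(trials))                           # 0.31
```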
A counterintuitive benefit of edge-first reflex layers is that they can be more private and more cost-stable than cloud-streaming approaches—because a large fraction of everyday interactions can be processed locally, and the cloud is invoked only when deeper reasoning is necessary.
On the economics side, cloud inference costs scale with usage, while edge compute is amortized with silicon volume and can reduce the need for continuous cloud processing for trivial requests.
One example of this architectural direction is Kardome, which focuses on combining spatial hearing (to separate and localize voices) with an on-device context-aware SLM (to decide whether speech is directed at the system), escalating to the cloud only when deeper reasoning is needed.

Dr. Alon Slapak is the co-founder and CTO of Kardome, a voice AI startup pioneering Spatial Hearing and Cognition AI technology that enables seamless, natural voice interaction in real-world noisy environments. He holds a Ph.D. from Tel Aviv University and brings deep expertise in acoustics, signal processing, and machine learning. Alon and co-founder and CEO Dr. Dani Cherkassky launched Kardome out of a shared passion for solving end-user frustrations with voice devices, combining their expertise in acoustics and advanced machine learning to build leading-edge voice user interface technology. Kardome has raised $10M in Series A funding.
Related Content
- Sparse AI MCU facilitates voice processing and cleanup at edge
- IoT: GenAI voice helps generate speech recognition models
- AI noise suppression for a better listening experience
The post Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating appeared first on EDN.