EDN Network

Voice of the Engineer

Tapo or Kasa: Which TP-Link ecosystem best suits ya?

Mon, 12/08/2025 - 15:00

The “smart home” market segment is, I’ve deduced after many years of observing it and, in a notable number of “guinea pig” cases, personally participating in it (with no shortage of scars to show for my experiments and experiences), tough for suppliers to enter, and particularly to remain active in, for any reasonable amount of time. I generally “bin” such companies into one of three “buckets”, ordered as follows by increasing “body count”:

  • Those that end up succeeding long-term as standalone entities (case study: Wyze)
  • Those that end up getting acquired by larger entities (case studies: Blink, Eero, and Ring, all by Amazon, and Nest, by Google)
  • And the much larger list of those that end up fading away (one recent example: Neato Robotics’ robovacs), a demise often preceded by an interim transition of formerly free associated services to paid (one-time or, more commonly, subscription-based) successors as a last-gasp revenue-boosting move, a strategy that typically fails because customers instead bail and switch to competitors.

There’s one other category that also bears mentioning, which I’ll be highlighting today. It involves companies that remain relevant long-term by periodically culling portions, or the entireties, of product lines within the overall corporate portfolio when they become fiscally unpalatable to maintain. A mid-2020 writeup from yours truly showcased a case study of each: Netgear stopped updating dozens of router models’ firmware, leaving them vulnerable to future compromise, in favor of compelling customers to replace the devices with newer models (four-plus years later, I highlighted a conceptually similar example from Western Digital), and Best Buy dumped its Connect smart home device line in its entirety.

A Belkin debacle


Today’s “wall of shame” entry is Belkin. Founded more than 40 years ago, the company today remains a leading consumer electronics brand; just in the past month or so, I’ve bought some multi-port USB chargers, a couple of MagSafe Charging Docks, several RockStar USB-C and Lightning charging-plus-headphone adapters, and a couple of USB-C cables from them. Their Wemo smart plugs are another story. For a long time, truth be told, I was a “frequent flyer” user and fan of ’em, as my past teardowns and hands-on evaluation articles attest.

A few years ago, however, my Wemo love affair started to fade. Researchers had uncovered a buffer overflow vulnerability in older units, widely reported in May 2023, that allowed for remote control and broader hacking. But Belkin decided not to fix the flaw, because affected devices were “at the end of their life and, as a result, the vulnerability will not be addressed”. (Whether the flaw could have been fixed with just a firmware update, making Belkin’s refusal purely a business decision, or instead reflected a fundamental hardware defect necessitating a much more costly replacement of, and/or refunds for, affected devices was never made clear, to the best of my knowledge.)

Two-plus years later, earlier this summer, Belkin effectively pulled the plug on the near-entirety of the Wemo product line by announcing that it would sever devices’ tethers not only to the Wemo mobile app and the associated Belkin server-side account and device-management facilities, in a very Insteon-reminiscent move, but also, in the process, the “cloud” link to Amazon’s Alexa partner services. Ironically, the only remaining viable members of the Wemo product line after January 31, 2026, will be a few newer products that are alternatively controllable via the Thread protocol. As I aptly noted within my 2024 CES coverage:

Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin Wemo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors. Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers.

Enter TP-Link

Clearly, it was time for me to look for a successor smart plug product line supplier and device series. Amazon was the first name that came to mind, but although its branded Smart Plug is highly rated, it’s only controllable via Alexa.

I was looking for an ecosystem that, like Wemo, could be broadly managed not only by the hardware supplier’s own app and cloud services but also by other smart home standards: the aforementioned Amazon (Alexa), along with Apple (HomeKit and Siri), Google (Home and Assistant, now transitioning to Gemini), Samsung (SmartThings), and ideally even open-source and otherwise vendor-agnostic services such as IFTTT, Matter, and Thread.

I also had a specific hardware requirement that still needed to be addressed. The fundamental reason why we’d brought smart plugs into the home in the first place was so that we could remotely turn off the coffee maker in the kitchen if we later realized that we’d forgotten to do so before leaving the house; my wife’s bathroom-located curling iron later provided another remote-power-off opportunity. Clearly, standard smart plugs designed for low-wattage lamps and such wouldn’t suffice; we needed high-current-capable switching devices. And this requirement led to the first of several initially confusing misdirections with my ultimately selected supplier, TP-Link (a choice motivated by rave reviews at The Wirecutter and elsewhere).

I admittedly hadn’t remembered until I did research prior to writing this piece that I’d actually already dissected an early TP-Link smart plug, the HS100, back in early 2017. That I’d stuck with Belkin’s Wemo product line for years afterward, coupled with my increasingly geriatric brain cells, likely explains the memory misfire. That very same device, along with its energy-monitoring HS110 sibling, had launched the company’s Kasa smart home device brand two years earlier, although looking back at the photos I took at the time of my teardown, I can’t find a “Kasa” moniker anywhere on the device or its packaging, so…🤷‍♂️

My initial research indicated that the TP-Link Kasa HS103 follow-on, introduced a few years later and still available for purchase, would, along with the related HS105, be a good tryout candidate.

The two devices supposedly differed in their (resistive load) current-carrying capacity: 10 A max for the HS103 and 15 A for the HS105. I went looking for the latter, specifically for use with the aforementioned coffee maker and curling iron. But all I could find for sale was the former. It turns out that TP-Link subsequently redesigned and correspondingly up-spec’d the HS103 to also be 15A-capable, effectively obsoleting the HS105 in the process.

Smooth sailing, at least at first

And I’m happy to say that the HS103 ended up being both a breeze to set up and (so far, at least) 100% reliable in operation. Like the HS100 predecessor, along with other conceptually similar devices I’ve used in the past, you first connect to an ad-hoc Wi-Fi network broadcast by the smart plug, which you use to send it your wireless LAN credentials via the mobile app. Then, once the smart plug reboots and your mobile device also reconnects to that same wireless LAN, they can see and communicate with each other via the Kasa app.

And then, after a one-time installation of Kasa’s Alexa skill and setup of my first smart plug in it, subsequent devices added via the Kasa app were automatically added in Alexa, too.
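Incidentally, if you’d rather script your plugs than tap through an app, these devices are also controllable directly over the LAN. Here’s a minimal sketch using the open-source python-kasa library (a community project, not TP-Link software); the device alias is a hypothetical placeholder, and exact API details vary somewhat across library versions:

    import asyncio
    from kasa import Discover

    async def main():
        # Broadcast discovery on the local subnet; returns an {ip: device} dict
        devices = await Discover.discover()
        for addr, dev in devices.items():
            await dev.update()  # populate alias, on/off state, etc.
            print(f"{addr}: {dev.alias} is {'on' if dev.is_on else 'off'}")
            if dev.alias == "Coffee maker":  # hypothetical device name
                await dev.turn_off()

    asyncio.run(main())

The same project also ships a kasa command-line tool that wraps these calls, and newer releases have been adding support for Tapo devices as well.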

Inevitable glitches

The latest published version of the Wirecutter’s coverage had actually recommended a newer, slightly smaller (but still 15A-capable) TP-Link smart plug, the EP10, so I decided to try it next.

Unfortunately, although the setup process was the same, the end result wasn’t.

This same unsuccessful outcome occurred with multiple devices from the first two EP10 four-pack sets I tried, which, like their HS103 forebears, I’d sourced from Amazon. Remembering from past experiences that glitches like this sometimes happen when a smartphone—which has two possible network connections, Wi-Fi and cellular—is used for setup purposes, I first disabled cellular data services on my Google Pixel 7, then tried a Wi-Fi-only iPad tablet instead. No dice.

I wondered if these particular smart plugs, which, like their seemingly more reliable HS103 precursors, are 2.4 GHz Wi-Fi-only, were somehow getting confused by one or more distinctive quirks of my Google Nest Wifi wireless network:

  1. The 2.4 GHz and 5 GHz Wi-Fi SSIDs broadcast by any given node share the same name, and
  2. Being a mesh configuration, all nodes (both stronger-signal nearby ones and weaker, more distant ones, to which clients sometimes connect instead) also share that exact same SSID.

Regardless, I couldn’t get them to work no matter what I tried, so I sent them back for a refund…

Location awareness

…only to then have the bright idea that it’d be cool to take both an HS103 and an EP10 apart and see if there was any hardware deviation that might explain the functional discrepancy. So, I picked up another EP10 combo, this one a two-pack. And on a “third time’s the charm” hunch (and/or maybe just fueled by stubbornness), I tried setting one of them up again. Of course, it worked just fine this time 🤷‍♂️

This time, I decided to try a new use case: controlling a table lamp in our dining room that would automatically turn on at dusk and turn off again the next morning. We’d historically used an archaic mechanical timer for lamp power control, an approach not only noisy in operation but one that also needed to be re-set after each premises electricity outage, since the timer didn’t embed a rechargeable battery to act as a temporary backup power source and keep time.

The mechanical timer was also clueless about the varying sunrise and sunset times across the year, not to mention the twice-yearly daylight saving time transitions. Its smart plug successor, which knows where it is and what day and time it is (whenever it’s powered up and network-connected, of course), has no such limitations.
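That location-plus-calendar awareness is easy to replicate on the desk, if you want to sanity-check what the plug is doing. Here’s a minimal sketch using the open-source astral package; the coordinates, timezone, and dates are hypothetical placeholders to swap for your own:

    from datetime import date, timedelta
    from zoneinfo import ZoneInfo
    from astral import LocationInfo
    from astral.sun import sun

    # Hypothetical location: substitute your own coordinates and timezone
    loc = LocationInfo("Denver", "USA", "America/Denver", 39.74, -104.99)
    tz = ZoneInfo(loc.timezone)

    today = date(2025, 12, 8)
    lamp_on = sun(loc.observer, date=today, tzinfo=tz)["sunset"]
    lamp_off = sun(loc.observer, date=today + timedelta(days=1), tzinfo=tz)["sunrise"]
    print(f"Lamp on at {lamp_on}, off at {lamp_off}")

Because the returned times are timezone-aware, the twice-yearly daylight saving time transitions that confounded the mechanical timer fall out of the calculation for free.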

Rebrands and migrations

Spec changes…inconsistent setup outcomes…there’s one more bit of oddity to share in closing. As this video details:

“Kasa” was TP-Link’s original smart home device brand, predominantly marketed and sold in North America. The company, for reasons that remain unclear to me and others, subsequently rolled out, in parallel, another product line branded as “Tapo” across the rest of the world. Even today, if you revisit the “smart plugs” product page on TP-Link’s website, whose link I first shared earlier in this writeup, you’ll see a mix of Kasa- and Tapo-branded products. The same goes for wall switches, light bulbs, cameras, and other TP-Link smart home devices. And historically, you needed to have both mobile apps installed to fully control a mixed-brand setup in your home.

Fortunately, TP-Link has made some notable improvements of late, from which I’m reading between the lines and deducing that a full transition to Tapo is the ultimate intended outcome. As I tested and confirmed for myself just a couple of days ago, it’s now possible to manage both legacy Kasa and newer Tapo devices using the same Tapo app; they also leverage a common TP-Link user account.

They all remain visible to Alexa, too, and there’s a separate Tapo skill that can also be set up, along with, as with Kasa, support for other services.

Further hands-on evaluation

To wit: driven by curiosity as to whether device functional deviations are fueled (in various cases) by hardware differences, firmware-only tweaks, or combinations of the two, I’ve taken advantage of a 30%-off Black Friday (week) promotion to also pick up a variety of other TP-Link smart plugs from Amazon’s Resale (formerly Warehouse) area, for both functional and teardown analysis in the coming months:

  • Kasa EP25 (added Apple HomeKit support, also with energy monitoring)
  • Tapo P105 (seeming Tapo equivalent to the Kasa EP10)
  • Tapo P110M (Matter compatible, also with energy monitoring)
  • Tapo P115 (energy monitoring)
  • Tapo P125 (added Apple HomeKit support)

Some of these devices look identical to others, at least from the outside, while in other cases dimensions and button-and-LED locations differ product-to-product. But for us engineers, it’s what’s on the inside that counts. Stand by for further writeups in this series throughout 2026. And until then, let me know your thoughts on what I’ve covered so far in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


Silicon MOS quantum dot spin qubits: Roads to upscaling

Mon, 12/08/2025 - 12:51

Using quantum states for processing information has the potential to swiftly address complex problems that are beyond the reach of classical computers. Over the past decades, tremendous progress has been made in developing the critical building blocks of the underlying quantum computing technology.

In its quest to develop useful quantum computers, the quantum community focuses on two basic pillars: developing ‘better’ qubits and enabling ‘more’ qubits. Both need to be simultaneously addressed to obtain useful quantum computing technology.

The main metrics for quantifying ‘better’ qubits are a long coherence time—reflecting the qubit’s ability to store quantum information for a sufficient period, as a quantum memory—and high qubit-control fidelity, which is linked to the ‘errors’ in controlling the qubits: sufficiently low control errors are a prerequisite for successfully performing a quantum error correction protocol.

The demand for ‘more’ qubits is driven by practical quantum computation algorithms, which require the number of (interconnected) physical qubits to be in the millions, and even beyond. At the same time, quantum error correction protocols only work when the errors are sufficiently low: otherwise, the error correction mechanism actually ‘increases’ the error, and the protocols diverge.

Of the various quantum computing platforms that are being investigated, one stands out: silicon (Si) quantum dot spin qubit-based architectures for quantum processors, the ‘heart’ of a future quantum computer. In these architectures, nanoscale electrodes define quantum dot structures that trap a single electron (or hole), its spin states encoding the qubit.

Si spin qubits with long coherence times and high-fidelity quantum gate operations have been repeatedly demonstrated in lab environments and are therefore a well-established technology with realistic prospects. In addition, the underlying technology is intimately linked with CMOS manufacturing technologies, offering the possibility of wafer-scale uniformity and yield, an important stepping stone toward realizing ‘more’ qubits.

A sub-class of Si spin qubits uses metal-oxide-semiconductor (MOS) quantum dots to confine the electrons, a structure that closely resembles a traditional MOS transistor. The small size of the Si MOS quantum dot structure (~100 nm) offers an additional advantage to upscaling.

Low qubit charge noise: A critical requirement to scale up

In the race toward upscaling, Si spin qubit technology can potentially leverage advanced 300-mm CMOS equipment and processes that are known for offering a high yield, high uniformity, high accuracy, high reproducibility and high-volume manufacturing—the result of more than 50 years of down selection and optimization. However, the processes developed for CMOS may not be the most suitable for fabricating Si spin quantum dot structures.

Si spin qubits are extremely sensitive to noise coming from their environment. Charge noise, arising from the quantum dot gate stack and the direct qubit environment, is one of the most widely identified causes of reduced fidelity and coherence. Two-qubit ‘hero’ devices with low charge noise have been repeatedly demonstrated in the lab using academic-style techniques such as ‘lift off’ to pattern the quantum dot gate structures.

This technique is ‘gentle’ enough to preserve a good quality Si/SiO2 interface near the quantum dot qubits. But this well-controlled fabrication technique cannot offer the required large-scale uniformity needed for large-scale systems with millions of qubits.

On the other hand, industrial fabrication techniques, such as subtractive etching in plasma chambers filled with charged ions or lithography-based patterning that relies on such etching processes, easily degrade the device and interface quality, enhancing the charge noise of Si/SiO2-based quantum dot structures.

First steps in the lab-to-fab transition: Low charge noise and high-fidelity qubit operations achieved on an optimized 300-mm CMOS platform

Imec’s journey toward upscaling Si spin qubit devices began about seven years ago, with the aim of developing a customized 300-mm platform for Si quantum dot structures. Seminal work led to a publication in npj Quantum Information in 2024, highlighting the maturity of imec’s 300-mm fab-based qubit processes toward large-scale quantum computers.

Through careful optimization and engineering of the Si/SiO2-based MOS gate stack with a poly-Si gate, charge noise levels of 0.6 µeV/√Hz at 1 Hz were demonstrated, the lowest values achieved on a fab-compatible platform at the time of publication. The values could be demonstrated repeatedly and reproducibly.

Figure 1 These Si MOS quantum dot structures are fabricated using imec’s optimized 300-mm fab-compatible integration flow. Source: imec

More recently, in partnership with the quantum computing company Diraq, the potential of imec’s 300mm platform was further validated. The collaborative work, published in Nature, showed high-fidelity control of all elementary qubit operations in imec’s Si quantum dot spin qubit devices. Fidelities above 99.9% were reproducibly achieved for qubit preparation and measurement operations.

Fidelity values systematically exceeding 99% were shown for one- and two-qubit gate operations, the operations performed on the qubits to control their states and entangle them. These values are not arbitrarily chosen. Whether quantum error correction ‘converges’ (net error reduction) or ‘diverges’ (the net error introduced by the quantum error correction machinery increases) depends crucially on a so-called threshold value of about 99%. Hence, fidelity values over 99% are required for large-scale quantum computers to work.
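To attach an equation to that threshold statement: the surface-code literature (this is a standard heuristic, not taken from the imec/Diraq papers themselves) commonly approximates the logical error rate as

$p_L \approx A \left( \frac{p}{p_{\text{th}}} \right)^{\lfloor (d+1)/2 \rfloor}$

where $p$ is the physical error rate, $p_{\text{th}} \approx 1\%$ is the threshold (the complement of the ~99% fidelity quoted above), $d$ is the code distance, and $A$ is a constant of order unity. For $p < p_{\text{th}}$, each increase in $d$ (that is, each additional investment of physical qubits per logical qubit) suppresses $p_L$ geometrically; for $p > p_{\text{th}}$, the same investment amplifies the error instead.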

Figure 2 Schematic of a two-qubit Diraq device on a 300-mm wafer shows the full-wafer, single-die, and single-device level. Source: imec

Charge noise was also measured to be very low, in line with the previous results from the npj Quantum Information paper. Gate set tomography (GST) measurements shed light on the residual errors: given the low charge noise values, the coupling between the qubits and the few remaining residual nuclear-spin-carrying Si isotopes (29Si) turned out to be the main factor limiting the fidelity of these devices. These insights show that even higher fidelities can be achieved through further isotopic enrichment of the Si layer with 28Si.

In the above studies, the 300-mm processes were optimized for spin qubit devices in an overlapping gate device architecture. Following this scheme, three layers of gates are patterned in an overlapping and more or less self-aligned configuration to isolate and confine an electron. This multilayer gate architecture, extensively studied and optimized within the quantum community, offers a useful vehicle to study individual qubit metrics and small-scale arrays.

Figure 3 Illustration of a triple quantum dot design uses overlapping gates; electrons are shown as yellow dots. The gates reside in three different layers: GL1, GL2, and GL3, as presented at IEDM 2025. Source: imec

The next step in upscaling: Using EUV for gate patterning to provide higher yield, process control, and overlay accuracy

Thus far, imec has used a wafer-scale, 300-mm e-beam writer to print the three gate layers that are central to the overlapping gate architecture. Although this 300-mm-compatible technique facilitates greater design flexibility and small pitches between quantum dots, it comes with a downside: its slow writing time does not allow printing full 300-mm wafers in a reasonable process time.

At IEDM 2025, imec for the first time demonstrated the use of single-print 0.33-NA EUV lithography to pattern the three gate layers of the overlapping gate architecture. EUV lithography has by now become the mainstay of industrial CMOS fabrication at advanced (classical) technology nodes; imec’s work demonstrates that it can equally be used to define and fabricate good quantum dot qubits. That represents a significant leap forward in upscaling Si spin qubit technology.

Full 300-mm wafers can now be printed with high yield and process control, thereby fully exploiting the reproducibility of the high-quality qubits shown in previous works. EUV lithography brings an additional advantage: it allows the different gates to be printed with higher overlay accuracy than the e-beam tools can achieve. That benefits the quality of the qubits and allows more aggressive dot-to-dot pitches.

Figure 4 TEM and SEM images, after patterning the gate layers with EUV, highlight critical dimensions, as presented at IEDM 2025. Source: imec

The imec researchers demonstrated robust reproducibility, full-wafer room temperature functionality, and good quantum dot and qubit metrics at 10 mK. Charge noise values were also comparable to measurements on similar ‘ebeam-lithography’ devices.

Inflection point: Moving to scalable quantum dot arrays to address the wiring bottleneck

The overlapping gate architecture, however, is not scalable to the large quantum dot arrays that will be needed to build a quantum processor. The main bottleneck is connectivity: each qubit needs individual control and readout wiring, making the interconnect requirements very different from those of classical electronic circuits. In the case of overlapping gates, wiring fanout is provided by the different gate layers, and this imposes serious limitations on the number of qubits the system can have.

Several years ago, a research group at HRL Laboratories in the United States came up with a more scalable approach to gate integration: the single-layer gate device architecture. In this architecture, the gates that are needed to isolate the electrons—the so-called barrier and plunger gates—are fabricated in one and the same layer, more closely resembling how classical CMOS transistors are built and interconnected using a multilayer back end of line (BEOL).

Today, research groups worldwide are investigating how large quantum dot arrays can be implemented in such a single-layer gate architecture, while ensuring that each qubit can be accessed by external circuits. At first sight, the most obvious way is a 2D lattice, similar to integrating large memory arrays in classic CMOS systems.

But eventually, this approach will hit a wiring scaling wall as well. An N×N quantum dot array requires a large number of BEOL layers for interconnecting the quantum dots. Additionally, ensuring good access for reading and controlling qubits that are farther away from the peripheral charge sensors becomes challenging.

A trilinear quantum dot architecture: An imec approach

At IEDM 2021, imec therefore proposed an alternative, smart way of interconnecting neighboring silicon qubits: the bilinear array. The design is based on topologically mapping a 2D square lattice to form a bilinear design, where alternating rows of the lattice are shifted into two rows (or 1D arrays).

While the odd rows of the 2D lattice are placed into an upper 1D array, the even rows are moved to a lower 1D array. In this configuration, all qubits remain addressable while maintaining the target connectivity of four in the equivalent 2D square lattice array. These arrays are conceptually scalable as they can further grow in one dimension, along the rows.
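To make the folding concrete, here is a small Python sketch of that row-parity mapping. The odd/even row assignment follows the article; the within-array indexing (same-parity rows laid end-to-end) is merely one plausible convention for illustration, not necessarily imec’s exact physical layout:

    def bilinear_position(row: int, col: int, n_cols: int) -> tuple[str, int]:
        # Odd lattice rows fold into the upper 1D array, even rows into the lower
        array = "upper" if row % 2 == 1 else "lower"
        # Illustrative index: same-parity rows are laid end-to-end along the array
        index = (row // 2) * n_cols + col
        return array, index

    # Vertically adjacent 2D-lattice sites always land in opposite 1D arrays,
    # which is what preserves the effective connectivity of four:
    print(bilinear_position(0, 3, n_cols=5))  # -> ('lower', 3)
    print(bilinear_position(1, 3, n_cols=5))  # -> ('upper', 3)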

Recently, the imec researchers expanded this idea toward a trilinear quantum dot device architecture that is compatible with the single-layer gate integration approach. With this trilinear architecture, a third linear array of (empty) quantum dots is introduced between the upper and lower rows. This extra layer of quantum dots now serves as a shuttling array, enabling qubit connectivity via the principle of qubit shuttling.

Figure 5 View the concept of mapping a 2D lattice onto a bilinear design and expanding that design to a trilinear architecture. The image illustrates the principle of qubit shuttling for the interaction between qubits 6 and 12. Source: imec

Figure 6 Top view of a 3×5 trilinear single gate array is shown with plunger (P) and barrier (B) gates placed in a single layer, as presented at IEDM 2025. Source: imec

The video below explains how that works. In the trilinear array, single-qubit operations and some two-qubit interactions can happen directly between nearest neighbors, the same way as in the bilinear architecture. Other two-qubit interactions can be performed through the ‘shuttle bus’ that is composed of empty quantum dots. Take a non-nearest-neighbor interaction between two qubits as an example.

The video shows schematics, conceptual operation, and manufacturing of trilinear quantum dot architecture. Source: imec

The first qubit is moved to the middle array, shuttled along this array to the desired site to perform the two-qubit operation with a second, target qubit, and shuttled back. These ‘all-to-all’ qubit interactions were not possible using the bilinear approach. Note that these interactions can only be reliably performed with high-fidelity quantum operations to ensure that no information is lost during the shuttling operation.

But how can this trilinear quantum dot architecture address the wiring bottleneck? The answer lies in the simplified BEOL structure: only two metal layers are needed to interconnect all the quantum dots. For the upper and lower 1D arrays, barrier and plunger gates can connect to one and the same metal layer (M1); the middle ‘shuttle’ array can connect partly to that same M1 layer and partly to a second metal layer (M2). Alongside the linear array, charge sensors can be integrated to measure the state of the quantum dots for qubit readout.

The architecture is also scalable in terms of number of qubits, as the array can further grow along the rows. If that approach at some point hits a scaling wall, it can potentially be expanded to four, five or even more linear arrays, ‘simply’ by adding more BEOL layers.

Using EUV lithography to process the trilinear quantum dot architecture: A world first

At IEDM 2025, imec showed the feasibility of using EUV lithography for patterning the critical layers of this trilinear quantum dot architecture. Single-print 0.33 NA EUV lithography was used to print the single-layer gate, the gate contacts, and the two BEOL metal layers and vias.

Figure 7 Single-layer gate trilinear array is shown after EUV lithography and gate etch with TEM cross sections in X and Y directions, as presented at IEDM 2025. Source: imec

One of the main challenges was achieving a very tight pitch across all the different layers without pitch relaxation. The gate layer was patterned with a bidirectional gate pitch of 40 nm. It was the first time ever that such an ‘unconventional’ gate structure was printed using EUV lithography, since EUV lithography for classical CMOS applications mostly focuses on unidirectional patterns. Next, 22-nm contact holes were printed with <2.5-nm (mean + 3 sigma) contact-to-gate overlay in both directions. The two metal layers, M1 and M2, were patterned with a metal pitch on the order of 50 nm.

Figure 8 From top to bottom, see the trilinear array (a-c) after M1 and (d-f) after M2 patterning, as presented at IEDM 2025. Source: imec

In the race for upscaling, the use of EUV lithography allows full 300-mm wafers to be processed with high yield, uniformity, and overlay accuracy between the critical structures. First measurements already revealed a room temperature yield of 90% across the wafer, and BEOL functionality was confirmed using dedicated test structures.

The use of single-patterning EUV lithography additionally contributes to cost reduction, by avoiding complex multi-patterning schemes, and improves the overall resolution of the printed features. Moreover, the complexity and asymmetry of the 2D structure cannot be achieved with double-patterning techniques.

The outlook: Upscaling and further learnings

In pursuit of enabling quantum systems with increasingly more qubits, imec made major strides: first, reproducibly achieving high-fidelity unit cells on two-qubit devices; second, transitioning from ebeam to EUV lithography for patterning critical layers; and third, moving from overlapping gate architectures to a single-layer gate configuration.

Adding EUV to imec’s 300-mm fab-compatible Si spin qubit platform will enable printing high-quality quantum dot structures across a full 300-mm wafer with high yield, uniformity, and alignment accuracy.

The trilinear quantum dot architecture, compliant with the single-layer gate approach, will allow upscaling the number of qubits by addressing the wiring bottleneck. Currently, work is ongoing to electrically characterize the trilinear array, and to study the impact of both the single-layer gate approach and the use of EUV lithography on the qubit fidelities.

The trilinear quantum dot architecture is a stepping stone toward truly large-scale quantum processors based on silicon quantum dot qubits. It may ultimately not be the optimal architecture for quantum operations involving millions of qubits, and clear bottlenecks remain.

But it’s a step in the learning process toward scalability and allows de-risking the technology around it. It will enhance our understanding of large-scale qubit operations, qubit shuttling, and BEOL integration. And it will allow exploring the expandability of the architecture toward a larger number of arrays.

In parallel, imec will continue working on the overlapping gate structure which can offer very high qubit fidelities. These high-quality qubits can be used as a probe to further study and optimize the qubit’s gate stack, understand the limiting noise mechanisms, tweak and optimize the control modules, and develop the measurement capability for larger scale systems in a systematic, step-by-step approach—leveraging the process flexibility offered by imec’s 300-mm infrastructure.

It’s a viable research vehicle in the quest for better qubits, providing learnings much faster than any large-scale quantum dot architecture. It can help increase our fundamental knowledge of two-qubit systems, an area in which there is still much to learn.

Sofie Beyne, project manager for quantum computing at imec, started her career at Intel, working as an R&D reliability engineer on advanced nodes in the Logic Technology Development department. She rejoined imec in 2023 to focus on bilateral projects around spin qubits.

Clement Godfrin, device engineer at imec, specializes in the dynamics of single high-spin nuclei, also called qudits, whether to implement proof-of-principle quantum algorithms on the single nuclear spin of a molecular magnet system or quantum error correction protocols on a single donor nuclear spin.

Stefan Kubicek, integration engineer at imec, has been involved in CMOS front-end integration development from 130-nm CMOS node to 14-nm FinFET node. He joined imec in 1998, and since 2016, he has been working on the integration of spin qubits.

Kristiaan De Greve, imec fellow and program director for quantum computing at imec, also holds the Proximus Chair in Quantum Science and Technology and is a professor of electrical engineering at KU Leuven. He moved to imec in 2019 from Harvard University, where he was a fellow in the physics department and where he retains a visiting position.


The Big Allis generator sixty years ago 

Fri, 12/05/2025 - 15:00

Think back to the electrical power blackout that struck the Northeast United States just over sixty years ago, on November 9, 1965. It had huge consequences for Consolidated Edison in New York City.

Con Ed’s power-generating facility in Ravenswood had been equipped with a generator made by Allis-Chalmers, as shown in Figure 1.

Figure 1 Ravenswood power generating facility and the Big Allis power generator.

That generator was the largest of its kind in the whole world at that time. Larger generators did get made in later years, but at that time, there were none bigger. It was so big that some experts opined that such a generator would not even work. Because of its size and its manufacturer’s name, that generator came to be called “Big Allis”.

Big Allis had a major design flaw. The bearings that supported the generator’s rotor were protected by oil pumps that were powered from the Big Allis generator itself.

When the power grid collapsed, Big Allis stopped delivering power, which then shut down the pumps delivering the oil pressure that had been protecting the rotor bearings.

With no oil pressure, the bearings were severely damaged as the rotor slowed down to a halt. One newspaper article described the bearings as having been ground to dust. It took months to replace those bearings and to provide their oil pumps with separate diesel generators devoted solely to maintaining the protective oil pressure.

So far as I know, Big Allis is still in service, even through the later 1977 and 2003 blackouts, so I guess that those 1965 revisions must have worked out.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Design specification: The cornerstone of an ASIC collaboration

Fri, 12/05/2025 - 09:51

Engaging with an ASIC development partner can take many forms. The intended chip may be as simple as a microcontroller, as sophisticated as an AI-based edge computing system-on-chip (SoC), or even a large language model (LLM) AI accelerator for data centers. The customer design team may include experienced ASIC design, verification, and test engineers or comprise only application experts. Each customer relationship is different.

Yet they all share one fundamental need. The customer and the ASIC developer must agree, in great detail, on what they are trying to build. That is the role of the design specification document. At Faraday, this document is the cornerstone of conversations between customers and chip design teams, covering critical decisions throughout the design process. The topics can range from initial feasibility estimation through sign-off and beyond.

If the design specification is so important, an obvious question arises: how do you construct a specification that will result in a successful ASIC design experience? The real answer is that a successful design specification is a joint effort between the customer and the ASIC development partner.

So, there must be a comprehensive, cooperative, checklist-driven procedure for creating a design specification. Such a procedure allows the ASIC partner to mesh smoothly with customers’ design teams, whether they are starting with only a wish list of features or with a detailed design plan. It also works across the wide range of sizes and complexities in today’s ASIC landscape.

What the design specification does

The design specification will serve many purposes during the ASIC design. Fundamentally, it will list the design requirements for the ASIC implementation team. As such, it will serve as a shopping list of silicon IP to be included in the design, an outline of the architecture that integrates that IP, and a guide for integration, verification, and testing.

Less obviously, the design specification can be a point of reference for discussions that will take place between the customer and the design team. What exactly should this block do? How much power can we allocate to this function? Does this alternative approach to implementation work for you? All these discussions can begin with the design specification.

Also, the specification can be invaluable for tasks where the customer is often uninvolved. For example, knowing the design intent and how the chip will be used can be priceless in developing verification plans, self-test architectures, test benches, and manufacturing test strategies. Information from the design specification is vital for detailed design activities, such as determining clock architectures and power-management strategy, in which the customer would typically not be directly involved.

Key elements of the design specification

So, what goes into the design specification? There are several important categories of information. The most obvious is a set of functional requirements—what the chip is supposed to do. Often, this will be a list of features, but it may be much more detailed. It’s also essential that the specifications include performance, power, and area requirements.

These will influence many conversations, from our initial feasibility assessment to foundry and process selection, library selection, and power-management strategies. And much of this information will be included in the specifications.

It’s also essential to capture a description of the system in which the chip will operate, including the other components. For example, which SPI flash chip will be used for an external flash? The minor differences in SPI protocols between memory chips can determine which SPI controller IP we select.

Another essential kind of system information is more physical: the thermal and mechanical environment. Heat sinks, passive or forced-air cooling, and so on will influence power management and package design.
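Pulling those categories together, a condensed sketch of how such requirements might be captured in machine-readable form is shown below. This is illustrative only; every field name and value here is hypothetical, not Faraday’s actual checklist format:

    # Illustrative design-spec skeleton; all names and values are hypothetical
    design_spec = {
        "functional_requirements": [
            "MCU-class CPU core",
            "boot from external SPI NOR flash",
            "2x CAN-FD interfaces",
        ],
        "ppa_targets": {"max_power_mW": 250, "clock_MHz": 400, "die_area_mm2": 9.0},
        "system_context": {
            # The exact flash part matters: minor SPI protocol differences
            # between memory chips drive the choice of controller IP
            "external_flash": "<specific SPI NOR part number>",
            "cooling": "passive, no heat sink",
            "ambient_range_degC": (-40, 85),
        },
    }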

Not just what, but how

The specification is not just a list of requirements. It also captures jointly developed design plans for implementing the chip. Chief among these is the gross architecture.

The architecture of an ASIC may be implicit in its function. For instance, a microcontroller may be a CPU core, some memory on a memory bus, and some peripheral controllers on a low-speed peripheral bus. However, a more elaborate SoC may have several CPU cores clustered around a shared cache and a hierarchy of buses, determined by the bandwidth and latency requirements of particular data flows between the memory and IP instances. If the customer hasn’t already decided on architecture, the design team will develop a proposal and review it with the customer.

Figure 1 An example of the proposed architecture for the customer is highlighted in a comprehensive diagram that describes the architecture and provides additional information. Source: Faraday Technology Corp.

In some cases, the customer will already have an architecture in place. This may be because the chip extends an existing product family. Or it may use a network-on-chip (NoC) scheme or something entirely original, such as a data-flow architecture designed to accelerate a particular algorithm. In these cases, the ASIC designer’s role is to ensure that the information in the specification is complete enough to capture the design intent unambiguously, to drive the selection of IP with the proper interfaces, and to adequately inform the chip layout.

The specification may also include information about specific IP blocks. If a block is a controller for a standard interface—say, a USB controller—then there needs to be enough additional information (for instance, that it should be a Gen3 USB host with power delivery) to allow the design team to select the appropriate IP.

In some cases, a functional block may be something unique. This often happens when the IP is customer specific. In these cases, the customer must provide enough detail for the design team to create and test the block. This may be simply a detailed functional description. Or it may require pseudo-code or Verilog code for critical portions of the block.

Pulling it together

Altogether, the design specification becomes an agreed-upon statement of what the customer requires and what the design team is designing. But which parts of the document come from the customer, which are jointly written, and which are supplied by the ASIC designer for the customer to review vary widely from case to case.

At Faraday, we have developed a formal process, called e-cooking, to collect the data. The process begins with a request for a quote from our sales organization. This RFQ will often contain much of the information we need for the design specification.

With RFQ in hand, we assign an engineer to the project in a role we call a technical consultant (TC). TC begins working through a design checklist to transfer information from the RFQ to the design specification.

When an item is missing or requires more detail, TC will contact the customer, explain what further information we need and why, and obtain the necessary data. If the item requires information the customer can’t provide—for instance, a choice of logic libraries—TC can ask the Faraday design team for input, which we then share with the customer for review.

The completed design specification document is a blueprint for the chip design. It will provide information regarding architecture and IP selection, verification, test plans, and packaging choices. It will also include the statement of work, which describes which design tasks will be done by the customer and which by the ASIC designer.

Figure 2 Technical consultants and engineers enter all project information into the e-cooking system, a tool that tracks the chip’s content. Source: Faraday Technology Corp.

The e-cooking process aims to capture customers’ design intent and the work they have already done toward implementation (Figure 2). The designers enter information into the tool, such as the actual cell size and name, silicon area, quantity, spacing, and I/O.

Next, the ASIC designer reviews any suggestions for changes or additional data with the customer team. That brings clarity on what the ASIC designer intends to implement at the start of the project. By the end of the project, the only surprises are how smoothly the two teams worked together and how well the delivered chip met the customer’s expectations.

Barry Lai heads the System Development and Chip Design department at Faraday Technology Corp., a leading provider of ASIC design services and IP. With 20 years of experience in IC design, Barry specializes in SoC integration, specification definition, digital design, low-power design, and integration automation.


High-voltage SiC MOSFETs power critical energy systems

Thu, 12/04/2025 - 21:53

Navitas is now sampling 2.3-kV and 3.3-kV SiC MOSFETs in power-module, discrete, and known-good-die (KGD) formats. Leveraging fourth-generation GeneSiC Trench-Assisted Planar (TAP) technology, these ultra-high-voltage devices offer improved reliability and performance for mission-critical energy-infrastructure applications.

According to Navitas, the TAP architecture uses a multistep electric-field management profile that significantly reduces voltage stress and improves blocking performance compared with trench and conventional planar SiC MOSFETs. In addition to increased long-term reliability and avalanche robustness, TAP incorporates an optimized source contact that enables higher cell-pitch density and improved current spreading. Together, these advances deliver better switching figures of merit and lower on-resistance at elevated temperatures.

Packaging options include the SiCPAK G+ power module, which uses epoxy-resin potting to deliver more than a 60% improvement in power-cycling lifetime and over a 10% improvement in thermal-shock reliability compared with similar silicone-gel–potted designs. Discrete SiC MOSFETs are offered in TO-247 and TO-263-7 packages, while KGD products provide system manufacturers with greater flexibility for custom SiC power-module development. AEC-Plus–grade SiC devices are qualified to standards that exceed conventional AEC-Q101 and JEDEC requirements.

To request samples of the ultra-high-voltage SiC MOSFETs, contact Navitas at info@navitassemi.com.

Navitas Semiconductor 


Thermistors suppress inrush currents

Thu, 12/04/2025 - 21:53

S series NTC thermistors from TDK Electronics handle steady-state currents up to 35 A and absorb energy up to 750 J. They enable reliable inrush current suppression in switch-mode power supplies, frequency converters, photovoltaic inverters, UPS systems, and soft-start motors.

The S series includes two leaded variants—the S30 and S36—with disk diameters of 30 mm and 36 mm, respectively. The S30 features 7.5-mm lead spacing and a maximum power handling of 19 W, while the larger S36 has 19-mm lead spacing and extends power handling to 25 W. Both variants are rated for a wide climatic category of 55/170/21 in accordance with IEC 60068-1 requirements.

The S30 (ordering code B57130S0M000) and S36 (B57136S0M100) families cover base resistance values of 2 Ω to 15 Ω and 2 Ω to 20 Ω, respectively. They support continuous currents ranging from 12 A to 25 A (S30) and 10 A to 35 A (S36). Permissible capacitances can reach up to 13,050 µF at 240 VAC (see datasheet for details).
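Those headline ratings are mutually consistent: in a typical inrush-limiting application, the NTC absorbs roughly the energy needed to charge the downstream bulk capacitor to the peak of the rectified line. As a quick sanity check (our arithmetic, not TDK’s):

$E = \tfrac{1}{2} C V_{pk}^2 = \tfrac{1}{2} \times 13{,}050\ \mu\text{F} \times (240\ \text{V} \times \sqrt{2})^2 \approx 752\ \text{J}$

which lines up with the 750-J maximum energy absorption quoted above.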

To access the datasheets for the S30 series and S36 series, click here.

TDK Electronics 


Compact 1.25-kV MLCCs ensure stability

Thu, 12/04/2025 - 21:53

Murata has begun mass production of 1.25‑kV multilayer ceramic capacitors (MLCCs) with a capacitance of 15 nF in a 1210-size (3.2×2.5 mm) package. These MLCCs use a Class 1 ceramic dielectric (C0G, also known as NP0), making them a strong choice for onboard chargers in electric vehicles and power supply circuits in high-performance consumer devices.

Leveraging Murata’s ceramic and electrode materials, along with thin-layer molding and precision stacking technologies, these chip capacitors are optimized for the latest SiC MOSFETs. Thanks to the inherent advantages of C0G per EIA standards—low loss and stable capacitance across a temperature range of -55°C to +125°C—they are suitable for both resonant and snubber circuits.
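The “stable capacitance” claim is quantifiable: EIA Class 1 C0G/NP0 dielectrics are specified at 0 ±30 ppm/°C, which over the full rated range bounds the drift at (our arithmetic, not Murata’s):

$\left| \Delta C / C \right| \le 30\ \text{ppm/°C} \times 180\ \text{°C} = 5400\ \text{ppm} \approx 0.54\%$

versus the swings of several percent or more typical of Class 2 dielectrics such as X7R, hence C0G’s suitability for resonant circuits, where the tank frequency tracks capacitance directly.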

For added design flexibility, 1.25‑kV MLCCs in the 1210 package are also available in capacitances from 4.7 nF to 12 nF. All of the new devices, including the 15‑nF chip, are offered with tolerances of ±1%, ±2%, and ±5%.

Datasheets for the new high-voltage MLCCs in the 1210 package can be accessed here.

Murata Manufacturing 


Enhanced hybrid caps handle increased ripple current

Thu, 12/04/2025 - 21:53

Taiyo Yuden has launched the HVX(-J) and HTX(-J) series of conductive polymer hybrid aluminum electrolytic capacitors. These updated models offer a higher rated ripple current and a lower profile than previous HVX and HTX capacitors. They also meet the AEC-Q200 standard, ensuring reliability for automotive applications.

With increasing current demands in automotive power sources, there is growing need for hybrid capacitors that combine higher ripple current ratings, compact profiles, and a variety of sizes. The HVX(-J) and HTX(-J) series address this need, offering 36 different types. For example, the RAHTX331M1TFH0002JX achieves 3400 mA RMS at 135 °C—a 70% increase over the previous model’s 2000 mA RMS at the same temperature. Devices in the series are available in five package sizes, ranging from 8 mm in diameter and 10 mm in height to 12.5 mm in diameter and 13.5 mm in height.

The new hybrid capacitors offer rated voltages of 25 VDC to 63 VDC, capacitance values from 47 µF to 1000 µF, and high rated ripple currents at 135°C ranging from 2200 mA RMS to 4000 mA RMS. These performance characteristics make them well suited for noise suppression and power smoothing in automotive power-supply circuits, including control systems such as electric power steering and safety-critical applications like ADAS.

More information on the HVX(-J) and HTX(-J) series can be found here.

Taiyo Yuden


Chip antennas boost Wi-Fi and UWB signal integrity

Thu, 12/04/2025 - 21:53

Three chip antennas from Taoglas—the ILA.257, ILA.68, and ILA.89—provide Wi-Fi 6/7, ultra-wideband (UWB), and ISM connectivity. Manufactured using a low-temperature co-fired ceramic (LTCC) process, the antennas deliver high radiation efficiency and frequency stability in ultra-compact packages. According to Taoglas, they also require a smaller keep-out area than competing antennas.

The ILA.257 is a 3.2×1.6×0.5-mm antenna for Wi-Fi 6/7, providing tri-band coverage across 2.4 GHz, 5.8 GHz, and 7.125 GHz with strong radiation efficiency and stable signal integrity. Its small footprint and minimal keep-out area make it well-suited for wearables, portable electronics, and industrial IoT devices.

Engineered for UWB operation from 6 GHz to 8.5 GHz, the ILA.68 3.2×1.6×1.1-mm antenna delivers a stable omnidirectional radiation pattern with consistent repeatability and low insertion loss. It supports applications such as indoor positioning, access control, and short-range radar in space-constrained IoT and automotive systems.

Designed for the 868-MHz and 915-MHz ISM bands, the ILA.89 supports global LPWAN and LoRa deployments with up to 47.9% radiation efficiency and 0.56 dBi peak gain. Its 4.0×12.0×1.6-mm footprint, simple layout, and regional variants help reduce design complexity and speed time-to-market for small IoT devices.
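Those last two ILA.89 figures are related in a way worth spelling out: gain is directivity scaled by radiation efficiency, G = ηD. In decibel terms (our arithmetic, not Taoglas’s):

$D_{\text{dBi}} = G_{\text{dBi}} - 10\log_{10}\eta = 0.56 - 10\log_{10}(0.479) \approx 0.56 + 3.2 = 3.8\ \text{dBi}$

so the implied directivity is about 3.8 dBi, plausible for a small antenna on a typical ground plane. For electrically small ISM-band chip antennas, efficiency, not directivity, is usually the limiting factor.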

The ILA.257, ILA.68, and ILA.89 antennas are now available from Taoglas and its authorized distributors.

Taoglas


Handheld enclosures add integrated cable glands

Thu, 12/04/2025 - 20:33

OKW now offers CONNECT fast-assembly handheld plastic enclosures with optional integrated cable glands, making it easier to install power and data cables.

Cost-effective CONNECT is ideal for network technology, building services, safety engineering, IoT/IIoT, medical devices, analytical instruments, data loggers, detectors, sensors, test and measurement.

OKW's CONNECT handheld enclosures with integrated cable glands. (Source: OKW Enclosures Inc.)

CONNECT’s two case shells snap together for fast and easy assembly: no screws are required. This offers the choice of two ‘fronts’: one shell is convex – perfect for LEDs – while the other is flat and recessed for a compact display or membrane keypad. Inside the flat shell there are mounting pillars for PCBs and components.

CONNECT enclosures feature open apertures at each end. For these, design engineers can specify a combination of ASA+PC blank end panels and soft-touch TPE cable glands with integrated strain relief. Cable diameters from 0.134″ to 0.232″ are accommodated. The two long sides provide ample space for USB connectors.

These UV-stable ASA+PC (UL 94 V-0) enclosures are available in six sizes from 2.36″ x 1.65″ x 0.87″ to 6.14″ x 2.13″ x 0.87″. The standard colors are off-white (RAL 9002) and black (RAL 9005). Custom colors are also available.

The cable glands come in volcano (gray) and black (RAL 9005). The end parts are off-white (RAL 9002) and black (RAL 9005). Other accessories include wall holders, rail holding clamps for round tubes up to ø 1.26″, and self-tapping screws.

OKW can supply CONNECT fully customized. Services include machining, lacquering, printing, laser marking, decor foils, RFI/EMI shielding, and installation and assembly of accessories.

For more information, view the OKW website: https://www.okwenclosures.com/en/Plastic-enclosures/Connect.htm


Through-hole connector resolves surface-mount dilemma

Thu, 12/04/2025 - 16:19

Manufacturing of a modern component-laden printed circuit board (PCB) is an amazing fusion and coordination of diverse technologies. There’s the board substrate itself, the stencils and masks that enable precise placement of solder paste, and the pick-and-place mechanical system that places components (both ICs and passive ones) on the appropriate lands with pinpoint precision and repeatability, all culminating in most cases in a sophisticated reflow-soldering process.

Most of the loaded components use surface mount technology (SMT) and tiny contacts to their respective lands on the PCB. However, it wasn’t always an SMT world. In the early days of PCBs, the situation was somewhat different. Most of the components were dual inline package (DIP) ICs and passives with tangible wire leads, where their connections went through holes in the board (Figure 1).

Figure 1 Dual-inline package (DIP) was dominant in the early days of ICs and is still favored by makers and DIY enthusiasts; but most devices are no longer offered this way, nor can they be. Source: Wikipedia

Not only did this require costly drilling of hundreds and thousands of space-consuming holes, but component installation was a challenge. The loaded board—with these through-hole components mounted on one side only—went through a wave-soldering process which soldered the leads to the tracks on the bottom of the board.

The advent of SMT

The use of surface-mount technology began in the 1960s, when it was originally called “planar mounting”. However, surface-mount technology didn’t become popular until the mid-1980s; even as recently as 1986, surface-mount components represented only around 10% of the total market. The technique took off in the late 1980s, and most high-tech electronic PCBs were using surface-mount devices by the late 1990s.

SMT enables smaller components, higher board densities, use of top and bottom sides of the board for components, and a reflow soldering process. Today, active and passive components are offered in SMT packages whenever possible, with through-hole packages being the exception. SMT devices can be placed using an automated arrangement, while many larger through-hole ones require manual insertion and soldering. Obviously, this is costly and disruptive to the high-volume production process.

The demand for SMT versions is so overwhelming that many products are available only in that package type. SMT makes possible many super-tiny components we now count on; some are just a millimeter square or smaller.

Due to the popularity of SMT, vendors often announce when they have managed to make a former through-hole component into a SMT one. Doing so is not easy in many cases for ICs, as there are die-layout, thermal, packaging, and reliability issues.

There are also transitions for passives. For example, Vishay Intertechnology recently announced that it has transformed one of its families of axial-leaded safety resistors into surface-mount versions using a clever twist to the leads in conjunction with a T-shaped PCB land pattern (Figure 2). This is not a trivial twist, because these resistors must also meet various safety and regulatory mandates for performance under normal and fault conditions while being compatible with automated handling.

Figure 2 Transforming this leaded safety resistor from a through-hole to SMT device involved much more than a clever design as the SMT version must meet a long list of stringent safety-related requirements and tests. Source: Vishay

In other cases, vendors of leaded discrete devices such as mid-power MOSFETs have announced with fanfare that they have managed to engineer a version with the same ratings in an SMT package. No question about it; it’s a big deal in terms of attractiveness to the customer.

What about the SMT holdouts?

Despite the prevalence of, and desire for, SMT devices, some components are not easily transformed into SMT-friendly packaging that is also compatible with reflow soldering. Larger connectors for attaching discrete terminated wires to wiring blocks are a good example. If they were SMT devices, the stress they endure would flex the board and weaken their soldered connections, as well as affect the integrity of the adjacent components. Their relatively large size also makes SMT handling a challenge.

But that dilemma is seeing some resolution. Connector vendor Weidmüller Group has developed what it calls through-hole reflow (THR) technology. These are terminal-block connectors for discrete wires that do require PCB holes and through-hole mounting for mechanical integrity. Yet they can then be soldered using the standard reflow process along with the other SMT devices on the board.

One of the vendor’s families with this capability was developed for Profinet applications and supports Ethernet-compliant data transmission up to 100 Mbps (Figure 3).

Figure 3 One of the available families of THR connector blocks is for Profinet installations. Source: Weidmüller

These connector blocks use glass-fiber-reinforced liquid crystal polymer (LCP) bodies to guarantee a high level of shape stability. The favorable temperature properties of the material (melting point of over 300°C) and the built-in pitch space (stand-off) of 0.3 mm (minimum) are well-suited to the solder-paste process. They come in a choice of two pin lengths, 1.5 mm and 3.2 mm, to precisely match board thickness, all with very tight tolerance on dimensional stability and pin centering (Figure 4).

Figure 4 The connector pin must have the right length and precise centering for reliable contact. Source: Weidmüller

The reflow soldering profile is like the ones required for other SMT components, so the entire board can be soldered in one pass (Figure 5).

Figure 5 The recommended reflow soldering profile for these THR connectors matches the profile of other SMT devices. Source: Weidmüller

Another connector family supports various USB connections (Figure 6).

Figure 6 A range of THR USB connectors is also available. Source: Weidmüller

With these THR connectors, you get the mechanical integrity of through-hole devices alongside the manufacturing benefits of automatic insertion (Figure 7) and reflow soldering. There is no need to manually insert the connector or to run a separate soldering step. You can also use them with through-hole wave-soldering, if you prefer.

Figure 7 Even the larger-block THR connectors can be automatically inserted using SMT pick-and-place systems. Source: Weidmüller

Connectors such as these will undoubtedly lower manufacturing costs while not compromising performance. Once again, it’s a reminder of the vital role and impact of mechanical know-how and material-science expertise on less-visible, low-glamour yet important advances in our “electronics” industry.

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.

Related Content

The post Through-hole connector resolves surface-mount dilemma appeared first on EDN.

The Oura Ring 4: Does “one more” deliver much (if any) more?

Thu, 12/04/2025 - 15:00

The most surprising thing to me about the Oura Ring 4, compared to its Gen3 predecessor, is how similar the two products are in terms of elemental usage perception. Granted, the precursor’s three internal finger-orientation bumps:

are now effectively gone:

and there are also multiple internal implementation differences between the two generations, some of which I’ll touch on in the paragraphs that follow. But they both use the same Android and iOS apps, generate the same data, and run for roughly the same ~1 week between charges.

One key qualifier on that last point: I bought them both used on eBay. The Ring 4, which claims 8 days of operating life when new, may have already accumulated more cycles from prior-owner usage than was the case with the Gen3 forebear, which touts 7 days’ operating life when new.

Smart ring “kissing cousins”

They look similar, too: the Gen3 in “Brushed Titanium” is the lower of the two rings on my left index finger in the following photos, with the Ring 4 in “Brushed Silver” above it:

And here’s the Ring 4 standalone, alongside my wedding band:

A smart ring enthusiast’s detailed analysis of the two product generations, complete with an abundance of comparative captured-data results, is below for those of you interested in more of an on-finger relative appraisal than I was able (and, admittedly, willing) to muster:

Sensing enhancements

Perhaps the biggest claimed innovation with the newer Ring 4 is Smart Sensing:

Smart Sensing is powered by an algorithm that works alongside the research-grade sensors within Oura Ring 4 to respond to each member’s unique finger physiology, including the structure and distinct features of your finger (i.e. skin tone, BMI, and age).

 The multiple sensors form an 18-path multi-wavelength photoplethysmography (PPG) subsystem, which adjusts dynamically to your lifestyle throughout the day and night.

As the functional representation in this conceptual video suggests:

there are two multi-LED clusters, each supporting three separate light wavelengths (red, green and infrared), with corresponding reception photodiodes in the rectangular structures to either side of each cluster (three structures total):

To complete the picture, here’s the inner top half of my Ring 4:

Six total LEDs, outputting to three total photodiodes, translates to 18 total possible light path options (which is presumably how Oura came up with the number I quoted earlier), with the optimal paths initially determined as part of the first-time ring setup:

and further fine-tuning is dynamically done while the ring is being worn, including compensating for non-optimum repositioning on the finger per the earlier-mentioned lack of distinct orientation bumps in this latest product generation.

What are the various-wavelength LEDs used for? Generally speaking, the infrared ones are capable of penetrating further into the finger tissue than are their visible-light counterparts, at some presumed tradeoff (accuracy, perhaps?). And specifically:

  • Red and infrared LEDs measure blood oxygen levels (SpO2) while you sleep.
  • Green and infrared LEDs track heart rate (HR) and heart rate variability (HRV) 24/7, as well as respiration rate during sleep.

All three LED types were also present with the Gen3 ring, albeit in a different multi-location configuration than the Ring 4 (though common to both the Heritage and Horizon Gen3 styles):

The labeling in the following Ring 4 “stock” image, by the way, isn’t locationally or otherwise accurate, as far as I can tell; the area labeled “accelerometer” is actually a multi-LED cluster, for example, and in contrast to the distinct “Red And Infrared…” and “Green And Infrared…” labels in the stock image, both of the clusters actually contain both green and red (plus infrared) LEDs:

Also embedded within the ring is a 3D accelerometer, which I’ve just learned, thanks to a Texas Instruments technical article I came across while researching this writeup, is useful not only for counting steps (along with, alas, keystrokes and other finger motions mimicking steps) but also “used in combination with the light signals as inputs into PPG algorithms.”

And there’s also a digital temperature sensor, although it doesn’t leverage direct skin contact for measurement purposes. Instead, it’s a negative temperature coefficient (NTC) thermistor whose (quoting from Wikipedia) “resistance decreases as temperature rises; usually because electrons are bumped up by thermal agitation from the valence band to the conduction band”.
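To make that behavior concrete, here’s a minimal sketch of the commonly used Beta-parameter model of an NTC thermistor; the 10-kΩ/25°C reference and Beta value are illustrative assumptions, not the actual part Oura uses:

```python
import math

def ntc_resistance(t_celsius, r0=10_000.0, t0=25.0, beta=3950.0):
    """Beta-parameter NTC model: resistance falls as temperature rises.
    r0 is the resistance at reference temperature t0 (degrees C); beta
    characterizes the material. All values here are illustrative."""
    t_k = t_celsius + 273.15
    t0_k = t0 + 273.15
    return r0 * math.exp(beta * (1.0 / t_k - 1.0 / t0_k))

for t in (25.0, 31.0, 37.0):  # a plausible finger-skin temperature range
    print(f"{t:5.1f} C -> {ntc_resistance(t):8.0f} ohms")
```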

Battery life optimizations

As noted in the public summary of a recent Ring 4 teardown by TechInsights, the newer smart ring has a higher capacity battery (26 mAh) than its Gen3 predecessor, which is likely a key factor in its day-longer specified operation between recharges. Additionally, the Ring 4’s Smart Sensing algorithms further optimize battery life as follows:

In order to optimize signal quality and power efficiency, Oura Ring 4 selects the optimal LED for each situation, instead of burning several LEDs simultaneously.

and

Smart Sensing also helps maximize the battery life of Oura Ring 4 by dynamically adjusting the brightness of the LEDs, using the dimmest possible setting to achieve the desired signal quality. This allows the battery life of Oura Ring 4 to extend up to eight days.

Here, for example, is a dim-light photo of both green LEDs in action, one in each cluster:

Generally speaking, the LEDs are active only briefly (when they’re illuminated at all, that is) and I haven’t yet succeeded in grabbing my smartphone and activating its camera in time to capture photos of any of the other combinations I’ve observed and note below. They include:

  • Single green LED (either cluster)
  • Concurrent single green and single red LEDs (one from each cluster), and
  • Both single (either cluster) and dual concurrent (both clusters) red LED(s)

I’ve also witnessed transitions from bright to dim output illumination, prior to turnoff, for both one and two concurrent green LEDs, but not (yet, at least) for either one or both red LED(s). And perhaps obviously, the narrow-spectrum eyes-and-brain visual sensing and processing subsystem in my noggin isn’t capable of discerning infrared (or even near-IR) emissions, so…

Third-party functional insights

Operating life between integrated battery recharges, which I’ve already covered, is key to wearer satisfaction with the product, of course, as is recharge speed to “full” for the next multi-day (hopefully) wearing period.

But for long-term satisfaction, a sufficiently high number of supported recharge cycles prior to effective battery expiration (and subsequent landfill donation) is also necessary. To wit, I’ll close with some interesting (at least to me) information that I indirectly (and surprisingly, happily) stumbled across.

First off, here’s what the Ring 4 looks like in the process of charging on its inductive dock:

In last month’s Oura Gen3 write-up, I shared a photo of the portable charging case (including an integrated battery) that I’d acquired from Doohoeek via Amazon, with the dock mounted inside. Behind it was the Doohoeek charging case for the Oura Ring 4. They look the same, don’t they?

That’s because, it turns out, they are the same, at least from a hardware standpoint. Requoting what I first mentioned last month, the “development story (which I got straight from the manufacturer) was not only fascinating in its own right but also gave me insider insight into how Oura has evolved its smart ring charging scheme over time.” More about that soon, likely next month.

Here’s the Ring 4 and dock inside the second-generation Doohoeek case (which, by the way, is also backwards-compatible with the Gen3 ring and dock):

And as promised, here’s the full back-and-forth between myself (in bold) and the manufacturer (in italics) over Amazon’s messaging system:

As I believe you already realize, while Doohoeek’s first-generation battery case that I’d bought from you through Amazon works fine with the Oura Gen3, it doesn’t (any longer, at least) work with the Ring 4. For that, one of Doohoeek’s second-generation battery cases is necessary. Can you comment on what the incompatibility was that precluded ongoing reliable operation of the original battery case with the Ring 4 charging dock (although it still works fine for the Gen3)? A USB-PD handshaking issue between your battery and the charging dock? Or was it something specific to the ring itself?

Hi Brian,

thank you for your question! Here’s a brief technical explanation of the Ring 4 compatibility issue with our original charging case:

Our first-gen charging case used a smart current-detection algorithm to determine charging status. Under normal conditions, when the ring reached full charge, the current would drop and remain consistently low—triggering our case to stop charging. This worked flawlessly with Oura Gen3 and initially with the Ring 4.

However, after a recent Oura firmware update, the Ring 4 began exhibiting unstable current draw patterns during charging—specifically, prolonged periods of low current followed by unexpected current spikes, even when the ring was not fully charged. This behavior caused our case to misinterpret the ring as “fully charged” and prematurely terminate charging.

To resolve this, we redesigned our charging logic in the updated version to implement a more robust timing-based backup protocol.

We appreciate your interest and hope this clarifies the engineering challenge we addressed!

Best,

Doohoeek Support Team

This is perfect! It was obvious to me that whatever it was, it was something that a firmware update couldn’t resolve, and I’d wondered if ring-generated current draw variances were to blame. I suspect the Ring 4 is doing this to maximize battery life over extended charge cycle counts. Thanks again!

p.s…I also wonder why you didn’t change the product naming, box labeling, etc. so potential buyers could have reassurance as to which version they’d be getting?

Hi Brian,

Thank you for your insightful feedback — you’ve clearly thought deeply about how these systems interact, and we really appreciate that.

Yes, the current behavior on the Ring 4 appears optimized for long-term battery longevity 🙂

Regarding your question about naming and packaging:

We actually had already mass-produced the outer shells and packaging for old version when Oura pushed the update that changed the charging behavior. Rather than discard those components (and create unnecessary waste), we decided to prioritize a firmware-level fix and use the same exterior.

That’s why the outside looks identical, but the internal charging behavior is now completely updated.

If you’d like to confirm whether your unit is the latest version, you can check the FNSKU barcode on the package:

Old version (no longer in production) ONLY used: X004HYCA09

New version (may change in future production) currently used: X004Q62DV9

Customers can also contact us with a photo of the label, and we’d be happy to verify it for them personally.

Thanks again for your support and sharp eyes.

Best,

Doohoeek Support Team

Very interesting! So it IS possible to firmware-retrofit existing units. Would that require a unit shipment back to the factory for the update, or did you consider developing a Windows-based (for example) update utility for customer upgrade purposes (by tethering the battery case’s USB-C input to a computer)?

Hi Brian,

Great question.

Unfortunately, a firmware update is not possible for units that have already been shipped. The hardware design does not support customer-side or even a cost-effective return-to-factory update process.

The only practical solution we could implement was to correct the firmware in all newly produced units moving forward, which is what you have received.

We appreciate your understanding!

Best,

Doohoeek Support Team

And with that, having recently passed through 2,000 words, I’ll wrap up for today. Stay tuned for the aforementioned teardown-to-come (on a different Ring 4; I plan to keep using this one!), and until then, I as-always welcome your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post The Oura Ring 4: Does “one more” deliver much (if any) more? appeared first on EDN.

A digital filter system (DFS), Part 1

Wed, 12/03/2025 - 15:00

Editor’s note: In this Design Idea (DI), contributor Bonicatto designs a digital filter system (DFS), a benchtop filtering system that can apply various filter types to an incoming signal, with a filtering range up to 120 kHz.

In Part 1 of this DI, the DFS’s function and hardware implementation are discussed.

In Part 2 of this DI, the DFS’s firmware and performance are discussed.

Selectable/adjustable bench filter

Over the years, I have been able to obtain a lot of equipment needed for designing, testing, and diagnosing electronic equipment. I have accumulated power supplies, scopes, digital voltmeters (DVMs), spectrum analyzers, signal generators, vector network analyzers (VNAs), LCR meters, etc., etc.

One piece of equipment I never found is a reasonably priced lab bench filter—something that would take in a signal and filter it with a filter whose parameters could be set on the front panel.

There are some tools that run on a PC’s sound card, but I don’t like to connect my electronic tests to my PC for fear that I’ll damage the PC. The other issue is that I am looking for something that can go up to 100 kHz or so, which is beyond the range of typical soundcards. So, it was time to try to design one.

Wow the engineering world with your unique design: Design Ideas Submission Guide

What I came up with is a small bench-top device with one BNC input for the signal you want filtered and one BNC output for the resulting filtered signal (Figure 1). It has a touchscreen LCD to select a filter type and the cutoff/center frequency. So, what can it do?

Figure 1 The finished digital filter system that allows you to select a low-pass, high-pass, band-pass, or band-stop filter type.

You can select a low-pass, high-pass, band-pass, or band-stop filter type. The filter can also be either a two-pole or a four-pole Butterworth.

For the frequency, you can select anywhere from a few Hz to 120 kHz. There are also three gain controls (an analog input gain knob, an analog output gain, and an internal digital gain).
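As a preview of the kind of filter math involved (the actual firmware is Part 2’s topic), here’s a minimal Python/SciPy sketch, purely illustrative, of how each selectable type could be mapped to two-pole Butterworth biquad coefficients of the sort an MCU filter routine executes; the 400-kHz sample rate and the function name are my assumptions, not values from the design:

```python
from scipy import signal

FS = 400_000  # assumed sample rate in Hz, comfortably above 2x 120 kHz

def design_biquads(kind, freq_hz, bw_hz=None, fs=FS):
    """Return second-order-section (biquad) coefficients for a 2-pole
    Butterworth filter of the requested type, in the form an MCU biquad
    routine would consume. kind is 'lowpass', 'highpass', 'bandpass',
    or 'bandstop'; the band types take a center frequency plus bandwidth."""
    if kind in ("bandpass", "bandstop"):
        wn = [freq_hz - bw_hz / 2.0, freq_hz + bw_hz / 2.0]
    else:
        wn = freq_hz
    return signal.butter(2, wn, btype=kind, fs=fs, output="sos")

# Example: a band-pass centered at 10 kHz with a 2-kHz bandwidth
print(design_biquads("bandpass", 10_000.0, bw_hz=2_000.0))
```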

The cost to build the filter is around $75, plus some odds and ends you probably already have around.

I also included a download for a 3D printable enclosure. Let’s take a deeper look at this design.

The circuit

The design is centered around a digital filter executed in a Cortex M4 microcontroller (MCU). The first of the system’s three main blocks is an analog front end (AFE), composed of four op-amps providing input gain adjustment and antialiasing filtering.

Next is a single board computer (SBC) powered by a Cortex M4. This provides an input for the ADC, controls the LCD and touchscreen, executes the digital filters, and controls the output DAC.

The last block is the analog back end (ABE), which again consists of four op-amps that make up the analog gain circuit and the analog output reconstruction filter.

Let’s take a look at the schematic to see more detail (Figure 2).

Figure 2 The DFS schematic showing the AFE, the ABE, and SBC that provides an input for the ADC, controls the TFT display, executes the digital filters, and controls the output DAC.

Here you can see the blocks we just talked about and a few other minor pieces. Let’s dive a little deeper.

The AFE

The AFE starts by AC-coupling the external signal you want to filter. Then, the first op-amp, after the protection diodes, provides an adjustable gain for the input. This uses a simple single-supply inverting op-amp circuit. RV1 is a potentiometer on the front panel (see Figure 1 above) that allows for a gain of the input from 1x to 5x.

Looking at the schematic again, we next see a single-pole low-pass filter tuned to 120 kHz. Next is a pair of 2-pole Sallen-Key low-pass filters with components selected to create a Butterworth filter set to 120 kHz.

So now our input signal has been filtered at a frequency that will allow the MCU’s ADC to sample without aliasing. I designed this filter and the ABE filter using TI’s WEBENCH Circuit Designer.

So, we have a 5-pole low-pass filter frontend that will give us a roll-off of 30 dB per octave, or 100 dB per decade.
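A quick way to sanity-check that slope claim is to evaluate an ideal 5-pole analog Butterworth prototype at 120 kHz; this is a simplification of the actual 1-pole-plus-two-Sallen-Key cascade, but the asymptotic roll-off matches:

```python
import numpy as np
from scipy import signal

fc = 120e3  # filter corner, Hz
b, a = signal.butter(5, 2 * np.pi * fc, btype="low", analog=True)

# Evaluate one octave and one decade above the corner
test_f = np.array([240e3, 1.2e6])
_, h = signal.freqs(b, a, worN=2 * np.pi * test_f)
for f, resp in zip(test_f, h):
    print(f"{f/1e3:6.0f} kHz: {20 * np.log10(abs(resp)):7.1f} dB")
# Prints roughly -30 dB at one octave and -100 dB at one decade
```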

The flywheel RC circuit is next. As explained in a previous article, the capacitor in this RC circuit provides a charge to hold up the voltage level when the ADC samples the input. More on this can be found at: ADC Driver Ref Design Optimizing THD, Noise, and SNR for High Dynamic Range

The ABE

We’ll skip the MCU for now and jump to the right side of the schematic. Here we see a circuit very similar to the AFE, but this is used as a reconstruction filter that removes artifacts created by the discrete steps used in the MCU’s DAC.

So, starting from the DAC output from the SBC, we see an adjustable gain stage which allows the user, via the output potentiometer, to increase the output level, if desired. This output gain can be adjusted from 1x to 5x.

Next in the schematic, you’ll see two stages of two-pole Sallen-Key low-pass filters configured exactly like the pair in the AFE. So again, they are configured as a 120 kHz Butterworth filter. 

The last op-amp circuit in the ABE is a 2x gain stage and buffer. Why a 2x gain stage? I’ll explain more later, but the gist is that the DAC has a limited slew rate relative to the sample rate I used. So, I reduced the value sent to the DAC by a factor of 2 and then compensated for it in this gain stage.

A note about the op-amps used in this design: The design calls for something that can handle 120 kHz passing through a gain of up to 5 and also dealing with the Sallen-Key filters (the TI WEBENCH shows a gain-bandwidth requirement of at least 6 MHz). I also needed a slew rate that could deal with a 120 kHz signal with a level of 3.3 Vpp. The STMicroelectronics TSV782 fit the bill nicely.

The last two components are the resistor and the capacitor before the output BNC connector. The resistor is used to stabilize the op-amp circuit if the output is connected to a large capacitive load. The 1-µF capacitor provides AC coupling to the output BNC.

The MCU

The brains of this design is a Feather M4 Express SBC, which contains Microchip Technology’s ATSAMD51, built around a Cortex M4 core. It is primarily powered by a USB connection (or a battery we will discuss in Part 2).

This ATSAMD51 has a few ADCs and DACs, and we use one of each in this design. It also has plenty of memory (512 kB of program memory and 192 kB of SRAM).

It runs at a usable 120 MHz and is enhanced with a floating-point processor. All this works nicely for the digital filtering we will explain in Part 2. Other features I used include a number of digital I/O ports, an SPI port, and a few other ADC inputs.

One feature I found very nice on the SBC was a 3.3 VDC linear regulator that not only powers the MCU, but has sufficient output to power all other devices in the design.

On the schematic (Figure 2), you can see that the AFE connects to an ADC input on the SBC, and an SBC DAC connects to the ABE circuit. Another major component is the TFT LCD and touchscreen, powered by the 3.3 VDC coming from the SBC.

Miscellaneous schematic items

That leaves a few extra items on the schematic.

Voltage reference

There are two simple ½ voltage dividers to generate 1.65 VDC from the 3.3 VDC supply. One is used in the AFE to provide a mid-supply reference for the single-supply op-amp design. This reference is simply two equal series-connected resistors, with the reference taken from their center point and a capacitor to ground.

A second reference was created for the ABE circuit. I used two references as I was laying this out on a protoboard, and the circuits were separated by a significant distance (without a ground plane).

LED indicator

There is also an LED used to indicate that the ADC is clipping the signal because the input is too large or too small. Another LED indicates the DAC is clipping for the same reasons. There will be more discussion on this in the firmware section in Part 2.

Floating ground

An interesting feature of the SBC is that it contains the charging circuit for a lithium polymer 3.7-V battery. This is optional in the design, but it does allow you to operate the DFS with a floating ground and a quiet voltage supply, which may help in your testing.

Enable

A somewhat unique feature, which turns out to be helpful, is an enable that is used to turn off the system if you pull it to ground.

If you use a battery, along with the USB, and want to use a typical power on/off switch, you would need to break the incoming USB line and the battery line, which makes it a 2-pole switch.

So, to get the DFS to power down, I pull the enable line to ground using a three-terminal SPDT switch, which I found has the typical “O/I” on/off indications. You can use an SPST switch instead, but then the markings will be reversed: you will have to switch it to “I” to shut the DFS down and to “O” to turn it on.

USB voltage display

A ½ voltage divider, with a filter capacitor, is connected to the USB input and used as an input to one of the ADCs, so we can display the connected USB voltage.
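A sketch of the corresponding readback arithmetic, assuming a 12-bit ADC referenced to 3.3 V and an exact 2:1 divider (the DFS’s actual scaling may differ):

```python
def usb_voltage(adc_counts, vref=3.3, bits=12, divider=2.0):
    """Convert an ADC reading of the halved USB rail back to volts.
    Assumes a 12-bit ADC referenced to 3.3 V and an exact 2:1 divider."""
    return adc_counts / ((1 << bits) - 1) * vref * divider

print(f"{usb_voltage(3150):.2f} V")  # ~5.08 V for a typical USB input
```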

Optional reset

The last item is an optional reset. I did not provide a hole to mount a pushbutton, but you can drill a hole in the back of the enclosure for a normally-open pushbutton.

More information

This device is fairly easy to build. I built the circuit on a protoboard with SMT parts (through-hole would have been easier). Maybe someone would like to lay out a PCB and share the design. I think you’ll find this DFS has a number of uses in your lab/shop.

The schematic, code, 3D print files, links to various parts, and more information and notes on the design and construction can be downloaded at: https://makerworld.com/en/@user_1242957023/upload

Editor’s Note: Stay tuned for Part 2 to learn more about the device’s firmware.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content

The post A digital filter system (DFS), Part 1 appeared first on EDN.

Silly simple precision 0/20mA to 4/20mA converter

Wed, 12/03/2025 - 15:00

This Design Idea (DI) offers an alternative solution for an application borrowed from frequent DI contributor R. Jayapal, presented in: “A 0-20mA source current to 4-20mA loop current converter.” 

It converts a 0/20mA current mode input, such as produced by some process control instrumentation, into a standard industrial 4/20mA current loop output.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the circuit. It’s based on a (very) old friend—the LM337 three-legged regulator. Here’s how it works.

Figure 1 U1 plus R1 through R5 current steering networks convert 0/20mA input to 4/20mA output.

The fixed resistance of the R1 + R2 + R3 series network, working in parallel with the adjustable R4 + R5 pair, presents a combined load of 312 ohms to the 1.25-V output of U1. That causes a zero-input current draw of 1.25/312 = 4 mA, trimmed by R5 (see calibration sequence detailed later).

Summed with this is a 0 to 16 mA current derived from the 0 to 20 mA input, controlled by the 4:1 ratio current split provided by the R1/R2/R3 current divider and fine trimmed by R2 (ditto). 

Note that 4 mA is below the guaranteed minimum regulation current specification for the LM337. In fact, most will work happily with half that much, but you might get a greedy one. So just be aware.

The result is a precision conversion of the 0 to 20mA input to an accurate 4 to 20mA loop current. Conversion precision and stability are insensitive to R2 trimmer wiper resistance due to the somewhat unusual input topology in play.
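The arithmetic reduces to an offset plus a scaled pass-through; here’s a minimal sketch of the ideal transfer, with the 4:1 split expressed as the 80% fraction of input current reaching the output:

```python
def loop_current_ma(i_in_ma):
    """Ideal converter transfer: a fixed ~4-mA zero-input draw set by
    the LM337's 1.25-V reference across the combined 312-ohm network,
    plus 80% of the input current (the 4:1 divider split), giving
    0 to 16 mA on top of the offset."""
    v_ref = 1.25          # LM337 reference voltage
    r_combined = 312.0    # ohms: (R1+R2+R3) in parallel with (R4+R5)
    i_zero_ma = v_ref / r_combined * 1000.0   # ~4.01 mA
    return i_zero_ma + 0.8 * i_in_ma

for i_in in (0.0, 10.0, 20.0):
    print(f"{i_in:5.1f} mA in -> {loop_current_ma(i_in):5.2f} mA out")
```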

Calibration proceeds in a four-step linear (iteration-free one-pass) sequence consisting of:

  1. Set input = 0.0 mA.
  2. Adjust R5 for 4.00 mA loop current.
  3. Set input = 20.00 mA.
  4. Adjust R2 for 20.00 mA loop current.

Done.

The input voltage burden is a negative 1.0 volt. The output loop voltage drop is 4 volts minimum to 40 volts maximum. The maximum ambient temperature (with no U1 heatsink) is 100°C. Resistors should be precision types, and the trimmer pots should be multiturn cermet or similar.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Silly simple precision 0/20mA to 4/20mA converter appeared first on EDN.

Transitioning from Industry 4.0 to 5.0: It’s not simple

Tue, 12/02/2025 - 18:35

The shift from Industry 4.0 to 5.0 is not an easy task. Industry 5.0 implementation will be complex, with connected devices and systems sharing data in real time at the edge. It encompasses a host of technologies and systems, including a high-speed network infrastructure, edge computing, control systems, IoT devices, smart sensors, AI-enabled robotics, and digital twins, all designed to work together seamlessly to improve productivity, lower energy consumption, improve worker safety, and meet sustainability goals.

Industry 4.0 to Industry 5.0. (Source: Adobe Stock)

In the November/December issue, we take a look at evolving Industry 4.0 trends and the shift to the next industrial evolution: 5.0, building on existing AI, automation, and IoT technologies with a collaboration between humans and cobots.

Technology innovations are central to future industrial automation, and the next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making, according to Jack Howley, senior technology analyst at IDTechEx. He believes the global industry will be defined by the integration of AI with robotics and IoT technologies, transforming manufacturing and logistics across industries.

As factories become smarter, more connected, and increasingly autonomous, MES, digital twins, and AI-enabled robotics are redefining smart manufacturing, according to Leonor Marques, architecture and advocacy director of Critical Manufacturing. These innovations can be better-interconnected, contributing to smarter factories and delivering meaningful, contextualized, and structured information, she said.

One of those key enabling technologies for Industry 4.0 is sensors. TDK SensEI defines Industry 4.0 by convergence, the merging of physical assets with digital intelligence. AI-enabled predictive maintenance systems will be critical for achieving the speed, autonomy, and adaptability that smart factories require, the company said.

Edge AI addresses the volume of industrial data by embedding trained ML models directly into sensors and devices, said Vincent Broyles, senior director of global sales engineering at TDK SensEI. Instead of sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated, reducing latency and bandwidth use, he said.

Robert Otręba, CEO of Grinn Global, agrees that industrial AI belongs at the edge. It delivers three key advantages: low latency and real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs, he said.

Otręba thinks edge AI will power the next wave of industrial intelligence. “Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created, within or around the machine, gateway, or local controller itself.”

AI is no longer an optional enhancement, and this shift is driven by the need for real-time, contextually aware intelligence with systems that can analyze sensor data instantly, he said.

Lisa Trollo, MEMS marketing manager at STMicroelectronics, calls sensors the silent leaders driving the industrial market’s transformation, serving as the “eyes and ears” of smart factories by continuously sensing pressure, temperature, position, vibration, and more. “In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries,” she said.

Energy efficiency also plays a big role in industrial systems. Power management ICs (PMICs) are leading the way by enabling higher efficiency. In industrial and industrial IoT applications, PMICs address key power challenges, according to contributing writer Stefano Lovati. He said the use of AI techniques is being investigated to further improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Don’t miss the top 10 AC/DC power supplies introduced over the past year. These power supplies focus on improving efficiency and power density for industrial and medical applications. Motor drivers are also a critical component in industrial design applications as well as automotive systems. The latest motor drivers and development tools add advanced features to improve performance and reduce design complexity.

The post Transitioning from Industry 4.0 to 5.0: It’s not simple appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

Tue, 12/02/2025 - 18:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors, carrying power, data, and video. This revision is known as Extended Power Range (EPR), defined in USB Power Delivery Specification 3.1 (USB PD 3.1), introduced by the USB Implementers Forum. EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A will deliver power of 140 W, 180 W, and 240 W, respectively.

USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility by meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions including legacy at 15 W (5 V/3 A) and the standard power range mode of below 100 W (20 V/5 A).
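The fixed EPR levels and the AVS range reduce to simple arithmetic; here’s a small sketch (the 100-mV AVS step granularity is my reading of the spec, so verify against the standard before relying on it):

```python
# Fixed EPR voltage levels from USB PD 3.1, each at up to 5 A
for volts in (28, 36, 48):
    print(f"{volts} V x 5 A = {volts * 5} W")

def avs_request_w(v_request, v_max=48.0, i_max=5.0):
    """Power available from an adjustable voltage supply (AVS) request.
    AVS spans 9 V up to the charger's highest fixed voltage; the step
    granularity (100 mV, per my reading of the spec) isn't modeled here."""
    if not 9.0 <= v_request <= v_max:
        raise ValueError("AVS request outside supported range")
    return v_request * i_max

print(avs_request_w(20.0))  # 100 W at an intermediate 20-V request
```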

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.

The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its maximum input voltage range is from 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A breakout board) with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC with Microsoft Windows 7–11 and a USB 2.0 port. The GUI then displays parameters, board status, and faults, and enables user configuration.

DCP board components

Being a port board with two ports, there are two independent USB PD channels (Figure 2), each with their own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)

Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip or USB hub. The MCP22301 is an integrated PD device with the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+ with an added MCP22350 PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding against faults while also protecting the integrity of the charging process and the data transfer (Figure 4).

Traditionally, a USB Type-C connector embeds the D+/D– data lines (USB2), Rx/Tx for USB3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator providing a 5-V voltage and an MCP1825 LDO linear regulator providing a 3.3-V auxiliary voltage.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology for the purpose of supplying USB loads and charging them with their required voltage within a permitted range, regardless of the input voltage value. It is configured to independently regulate the amount of output voltage and current for each USB channel (their individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to the USB-C Port 1 (on the PCB right side; this is the higher one), the USB communication starts and the MCP19061 DCP board displays the charging profiles under the Port 1 window.

If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The operation of the MCP19061 enables efficient power conversion with the capability to operate in buck (step-down), boost (step-up), and buck-boost topologies for various voltage levels that are lower, higher, or the same as the input voltage. It provides excellent precision and efficiency in power conversions for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
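For intuition about why the four-switch topology can regulate an output above, below, or equal to the input, here are the ideal steady-state duty-cycle relations; this is a textbook simplification, not the MCP19061’s actual control law:

```python
def ideal_duty(v_in, v_out):
    """Ideal (lossless, continuous-conduction) duty cycle for a
    four-switch buck-boost converter, by operating mode:
    buck: D = Vout/Vin; boost: D = 1 - Vin/Vout."""
    if v_out < v_in:
        return "buck", v_out / v_in
    if v_out > v_in:
        return "boost", 1.0 - v_in / v_out
    return "pass-through", 1.0

for v_out in (5.0, 12.0, 21.0):          # USB PD-style output targets
    mode, d = ideal_duty(12.0, v_out)    # 12-V recommended input
    print(f"Vout = {v_out:4.1f} V -> {mode}, D = {d:.2f}")
```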

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

Simple state variable active filter

Tue, 12/02/2025 - 15:00

The state variable active filter (SVAF) is an active filter you don’t see mentioned much today; however, it’s been a valuable asset for us old analog types in the past. This became especially true when cheap dual and quad op-amps became commonplace, as one can “roll their own” SVAF with just one IC package and still have an op-amp left over for other tasks!

Wow the engineering world with your unique design: Design Ideas Submission Guide

The unique features of this filter are simultaneously available low-pass (LP), high-pass (HP), and band-pass (BP) outputs, low component sensitivity, and an independently settable filter “Q”, all while implementing a quadratic 2nd-order filter function with 40-dB/decade slope factors. The main drawback is requiring three op-amps and a few more resistors than other active filter types.

The SVAF employs dual series-connected and scaled op-amp integrators with dual independent feedback paths, which creates a highly flexible filter architecture with the mentioned “extra” components as the downside.

With the three available LP, HP, and BP outputs, this filter seemed like a nice candidate for investigating with the Bode function available in modern DSOs. This is especially so for the newer Siglent DSO implementations that can plot three independent channels, which allows a single Bode plot with three independent plot variables: LP, HP, and BP.

Creating an SVAF with a couple of LM358 duals (I didn’t have any DIP-type quad op-amps like the LM324 directly available, which reminds me, I need to order some soon!!), a couple of 0.01-µF Mylar capacitors, and a few 10-kΩ and 1-kΩ resistors seemed like a fun project.

The SVAF natural corner frequency is simply 1/(2πRC), which works out to ~1.59 kHz with the mentioned component values, as shown in the notebook image in Figure 1. The filter’s “Q” was set by changing R4 and R5.

Figure 1 The author’s hand-drawn schematic with R1=R2, R3=R6, and C1=C2, resistor values are 1 kΩ and 10 kΩ, and capacitors are 0.01 µF.
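Before looking at the measured plots, here’s a short SciPy sketch of the three textbook 2nd-order responses sharing the same ω0 and Q, which is what the SVAF’s outputs should follow (the actual circuit’s sign inversions are ignored, and the Q value here is just an example):

```python
import numpy as np
from scipy import signal

R, C = 10e3, 0.01e-6
w0 = 1.0 / (R * C)        # natural frequency, rad/s (f0 ~ 1.59 kHz)
Q = 2.0
den = [1.0, w0 / Q, w0**2]

filters = {
    "HP": signal.TransferFunction([1.0, 0.0, 0.0], den),
    "BP": signal.TransferFunction([w0 / Q, 0.0], den),
    "LP": signal.TransferFunction([w0**2], den),
}

w = 2 * np.pi * np.logspace(1, 5, 9)   # 10 Hz to 100 kHz
for name, tf in filters.items():
    _, mag, _ = signal.bode(tf, w)
    print(name, np.round(mag, 1), "dB")
```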

This produced plots of a Q of 1, 2, and 4 shown in Figure 2, Figure 3, and Figure 4, respectively, along with supporting LTspice simulations.

The DSO Bode function was set up with DSO CH1 as the input, CH2 (red) as the HP, CH3 (cyan) as the LP, and CH4 (green) as the BP. The phase responses can also be seen as the dashed color lines that correspond to the colors of the HP, LP, and BP amplitude responses.

While it is possible to include all the DSO channel phase responses, this clutters up the display too much, so on the right-hand side of each image, the only phase response I show is the BP phase (magenta) in the DSO plots.

Figure 2 The left side shows the Q =1 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =1 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 3 The left side shows the Q =2 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =2 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 4 The left side shows the Q =4 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =4 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

The Bode frequency was swept with 33 pts/dec from 10 Hz to 100 kHz using a 1-Vpp input stimulus from a LAN-enabled arbitrary waveform generator (AWG). Note how the three responses all cross at ~1.59 kHz, and the BP phase, or the magenta line for the images on the right side, crosses zero degrees here.

If we extend the frequency of the Bode sweep out to 1 MHz, as shown in Figure 5, we are well beyond where you would consider utilizing an LM358. Even so, the simulation and DSO Bode measurements agree well at this range. Note how the simulation depicts the LP LM358 op-amp output resonance at ~100 kHz (cyan) and the BP phase (magenta) response.

Figure 5 The left side shows the Q =7 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =7 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

I’m honestly surprised the simulation agrees this well, considering the filter was crudely assembled on a plug-in protoboard and using the LM358 op-amps. This is likely due to the inverting configuration of the SVAF structure, as our experience has shown that inverting structures tend to behave better with regard to components, breadboard, and prototyping, with all the unknown parasitics at play!

Anyway, the SVAF is an interesting active filter capable of producing simultaneous LP, HP, and BP results. It is even capable of producing an active notch filter with an additional op-amp and a couple of resistors (requires 4 total, but with the LM324, a single package), which the interested reader can discover.

Michael A Wyatt is a life member with IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat and retiring (semi) with Wyatt Labs. During his career he accumulated 32 US Patents and in the past published a few EDN Articles including Best Idea of the Year in 1989.

Related Content

The post Simple state variable active filter appeared first on EDN.

A budget battery charger that also elevates blood pressure

Mon, 12/01/2025 - 16:55

At the tail end of my September 1 teardown of EBL’s first-generation 8-bay battery charger:

I tacked on a one-paragraph confession, with an accompanying photo that, as usual, included a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

I’ll wrap up with a teaser photo of another, smaller, but no less finicky battery charger that I’ve also taken apart, but, due to this piece as-is ending up longer-than-expected (what else is new?), I have decided to instead save for another dedicated teardown writeup for another day:

An uncertain lineage

That day is today. And by “finicky”, as was the case with its predecessor, I was referring to its penchant for “rejecting batteries that other chargers accepted complaint-free.”

Truth be told, I can’t recall how it came into my possession in the first place, nor how long I’ve owned it (aside from a nebulous “really long time”). Whatever semblance of an owner’s manual originally came with the charger is also long gone; tedious searches of both my file cabinet and online resources were fruitless. There’s not even a company name or product code to be found anywhere on the outer device labeling, just a vague “Smart Timer Charger” moniker:

The best I’ve been able to do, thanks to Google Image Search, is come across similar-looking device matches from a company called “Vidpro Power2000” (with the second word alternatively rendered as “Power 2000”) listed on Amazon under multiple different product names, such as the XP-333 when bundled with four 2900-mAh AA NiMH batteries:

and the XP-350 with four accompanying 1000-mAh AAA batteries, again NiMH-based:

My guess is that neither “Vidpro Power2000” nor whatever retail brand name was associated with this particular charger was actually the original manufacturer. And by the way, those three plastic “bumps” toward the top of the front panel, above the battery compartment and below the “Power2000” mark, aren’t functional, only cosmetic. The only two active LEDs are the rectangular ones at the front panel’s bottom edge, seen in action in an earlier photo.

Anyhoo, after some preparatory top, bottom, and side chassis views as supplements to the already shared front and back perspectives:

A few screws loose

Let’s work our way inside, beginning (and ending?) with the visible screw head in between the two foldable AC plug prongs:

Nope, that wasn’t enough:

Wonder what, if anything, is under the back panel sticker? A-ha:

There we are:

“Nice” unsightly blob of dried glue in the upper left corner there, eh?

No more screws, clips, or other retainers left; the PCB lifts away from the remainder of the plastic chassis straightaway:

As I noted earlier, those “three bumps” are completely cosmetic, with no functional purpose:

Dual-tone and contract manufacturer-grown

And speaking of cosmetics, the two-tone two-sided PCB is an unexpected aesthetic bonus:

As you may have already noticed from the earlier glimpse of the PCB’s backside, the trace regions are sizeable, befitting their hefty AC and DC power routing purposes and akin to those seen last time (where, come to think of it, the PCB was also two-tone for the two sides). But the PCB itself is elementary, seemingly with no embedded trace layers, therein explaining the between-regions routing jumpers that through-hole feed to the other side:

We’ve also finally found a product name: the “TL2000S” from “Samyatech”. My Google search results on the product code were fruitless; let me know in the comments if you had any better luck (I’m particularly interested in finding a PDF’d user manual). My research on the company was more fruitful, but only barely so. There are (or perhaps more accurately in this case, were) two companies that use(d) the “Samyatech” abbreviation, both named “Samya Technology” in full. One is based in Taiwan, the other is in South Korea. The former, I’m guessing, is our candidate:

Samya Technology is a manufacturer of charging solutions for consumer products. The company manufactures power banks, emergency chargers, mobile phone battery chargers, USB charging products, Solar based chargers, Secondary NiMH Batteries, Multifunction chargers, etc. The company has two production bases, one in Taiwan and the other in China.

The website associated with the main company URL, www.samyatech.com, is currently timing out for me. Internet Archive Wayback Machine snapshots suggest two more information bits:

  • The main URL used to redirect to samyatech.com.tw, which is also timing out, and
  • More generally, although I can’t read Chinese, so don’t take what I’m saying as “gospel”, it seems the company shut down at the start of the COVID-19 lockdown and didn’t reopen.

Up top is the AC-to-DC conversion circuitry, along with other passives:

And at the bottom are the aforementioned LEDs and their attached light pipes:

Back to the PCB backside, this time freed of its previous surrounding-chassis encumbrance:

That blotch of dried glue sure is ugly (not to mention, unlike its same-color counterparts on the other side that keep various components in place, of no obvious functional value), isn’t it?

Algorithmic (over)simplicity

The IC nexus of the design was a surprise (at least to me, perhaps less so to others who are already more immersed in the details of such designs):

At left is the AZ324M, a quad low-power op amp device from (judging by the company logo mark) Advanced Analog Circuits, part of BCD Semiconductor Manufacturing Limited, and subsequently acquired by Diodes Incorporated.

And at right? When I first saw the distinctive STMicroelectronics mark on one end of the package topside, I assumed I was dealing with a low-end firmware-fueled microcontroller. But I was wrong. It’s the HCF4060, a 14-stage ripple carry binary counter/divider and oscillator. As the Build Electronics Circuits website notes, “It can be used to produce selectable time delays or to create signals of different frequencies.”

This all ties to, as I’ve been able to gather from my admittedly limited knowledge and research, how basic battery chargers like this one work in the first place (along with why they tend to be so fickle). Perhaps obviously, it’s important upfront for such a charger to be able to discern whether the batteries installed in it are actually the intended rechargeable NiMH formulation.

So, it first subjects the cells to a short-duration, relatively high current pulse (referencing the HCF4060’s time delay function), then reads back their voltages. If it discerns that a cell has a higher-than-expected resistance, it assumes that this battery’s not rechargeable or is instead based on an alternative chemistry such as alkaline or NiCd…and terminates the charge cycle.
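In sketch form, the acceptance test I’m inferring looks something like the following; the pulse current and resistance threshold are illustrative guesses on my part, not values extracted from the charger:

```python
def accept_cell(v_rest, v_pulsed, i_pulse_a=1.0, r_max_ohm=0.5):
    """Crude charger-style acceptance test: apply a known charging
    current pulse, measure the voltage rise, estimate the cell's
    internal resistance, and reject anything too resistive to be a
    healthy NiMH cell. Threshold and pulse current are illustrative."""
    r_internal = (v_pulsed - v_rest) / i_pulse_a
    return r_internal <= r_max_ohm, r_internal

print(accept_cell(1.30, 1.38))  # healthy cell: ~0.08 ohm, accepted
print(accept_cell(1.30, 2.05))  # tired/over-discharged: ~0.75 ohm, rejected
```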

That said, rechargeable NiMH cells’ internal resistance also tends to increase with use and incremental recharge cycles. And batteries that are in an over-discharge state, whether from sitting around unused (a particular problem with early cells that weren’t based on low self-discharge architectures) or from being excessively drained by whatever device they were installed in, tend to be intolerant of elementary recharging algorithms, too.

Conversely, I’ve sometimes been able in the past to convince this charger to accept a cell that it initially rejected, even one that was already “full” (if I’ve lost premises power and the charger acts flaky when the electricity subsequently starts flowing again, for example), by popping it into an illuminated flashlight for a few minutes to drain off some of the stored electrons.

So…🤷‍♂️ And again, as I mentioned back in September, a more “intelligent” (albeit also more expensive) charger, such as my La Crosse Technology BC-9009 AlphaPower, is commonly much more copacetic with cells that simplistic chargers reject, and is even capable of resurrecting them:

Some side-view shots in closing, including closeups:

And with that, I’ll turn it over to you for your thoughts in the comments. A reminder, dear readers, that I’m only nominally cognizant of analog and power topics (and truth be told, I’m probably being overly generous in even claiming that); I’m much more of a “digital guy”, so tact in your responses is, as always, appreciated! I’m also curious to poll your opinions as to whether I should bother putting the charger back together and donating it to someone else, as I normally do with devices I non-destructively tear down, or whether in this case it’d be better to save potential recipients the hassle and destine it for the landfill instead. Let me know!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post A budget battery charger that also elevates blood pressure appeared first on EDN.

Delta-sigma demystified: Basics behind high-precision conversion

Mon, 12/01/2025 - 07:57

Delta-sigma (ΔΣ) converters may sound complex, but at their core, they are all about precision. In this post, we will peel back the layers and uncover the fundamentals behind their elegant design.

At the heart of many precision measurement systems lies the delta-sigma converter, an architecture engineered for accuracy. By trading speed for resolution, it excels in low-frequency applications where precision matters most, including instrumentation, audio, and industrial sensing. And it’s worth noting that delta-sigma and sigma-delta are interchangeable terms for the same signal conversion architecture.

Sigma-delta classic: The enduring AD7701

Let us begin with a nod to the venerable AD7701, a 16-bit sigma-delta ADC that sets a high bar for precision conversion. At its core, the device employs a continuous-time analog modulator whose average output duty cycle tracks the input signal. This modulated stream feeds a six-pole Gaussian digital filter, delivering 16-bit updates to the output register at rates up to 4 kHz.

Timing parameters—including sampling rate, filter corner, and output word rate—are governed by a master clock, sourced either externally or via an on-chip crystal oscillator. The converter’s linearity is inherently robust, and its self-calibration engine ensures endpoint accuracy by adjusting zero and full-scale references on demand. This calibration can also be extended to compensate for system-level offset and gain errors.

Data access is handled through a flexible serial interface supporting asynchronous UART-compatible mode and two synchronous modes for seamless integration with shift registers or standard microcontroller serial ports.

Introduced in the early 1990s, Analog Devices’ AD7701 helped pioneer low-power, high-resolution sigma-delta conversion for instrumentation and industrial sensing. While newer ADCs have since expanded on its capabilities, the AD7701 remains in production and continues to serve in legacy systems and precision applications where its simplicity and reliability still resonate.

The following figure illustrates the functional block diagram of this enduring 16-bit sigma-delta ADC.

Figure 1 Functional block diagram of the AD7701 showcases its key architectural elements. Source: Analog Devices Inc.

Delta-sigma ADCs and DACs

Delta-sigma converters—both analog-to-digital converters (ADCs) and digital-to-analog converters (DACs)—leverage oversampling and noise shaping to achieve high-resolution signal conversion with relatively simple analog circuitry.

In a delta-sigma ADC, the input signal is sampled at a much higher rate than the Nyquist frequency and passed through a modulator that emphasizes quantization noise at higher frequencies. A digital filter then removes this noise and decimates the signal to the desired resolution.

Conversely, delta-sigma DACs take high-resolution digital data, shape the noise spectrum, and output a high-rate bitstream that is smoothed by an analog low-pass filter. This architecture excels in audio and precision measurement applications due to its ability to deliver robust linearity and dynamic range with minimal analog complexity.

Note that from here onward, the focus is exclusively on delta-sigma ADCs. While DACs share similar architectural elements, their operational context and signal flow differ significantly. To maintain clarity and relevance, DACs are omitted from this discussion—perhaps a topic for a future segment.

Inside the delta-sigma ADC

A delta-sigma ADC typically consists of two core elements: a delta-sigma modulator, which generates a high-speed bitstream, and a low-pass filter that extracts the usable signal. The modulator outputs a one-bit serial stream at a rate far exceeding the converter’s data rate.

To recover the average signal level encoded in this stream, a low-pass filter is essential; it suppresses high-frequency quantization noise and reveals the underlying low-frequency content. At the heart of every delta-sigma ADC lies the modulator itself; its output bitstream represents the input signal’s amplitude through its average value.

A block diagram of a simple analog first-order delta-sigma modulator is shown below.

Figure 2 The block diagram of a simple analog first-order delta-sigma modulator illustrates its core components. Source: Author

This modulator operates through a negative feedback loop composed of an integrator, a comparator, and a 1-bit DAC. The integrator accumulates the difference between the input signal and the DAC’s output. The comparator then evaluates this integrated signal against a reference voltage, producing a 1-bit data stream. This stream is fed back through the DAC, closing the loop and enabling continuous refinement of the output.

Following the delta-sigma modulator, the 1-bit data stream undergoes decimation via a digital filter (decimation filter). This process involves data averaging and sample rate reduction, yielding a multi-bit digital output. Decimation concentrates the signal’s relevant information into a narrower bandwidth, enhancing resolution while suppressing quantization noise within the band of interest.
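
For readers who learn best by experiment, here’s a short behavioral simulation in Python of the Figure 2 modulator, followed by a crude boxcar decimation filter. The oversampling ratio, input amplitude, and frequency are illustrative choices only, not tied to any particular part:

```python
import numpy as np

# Behavioral sketch of a first-order delta-sigma modulator plus a simple
# boxcar decimation filter. All parameters are illustrative.
osr = 64                                  # oversampling ratio
n = 4096                                  # samples at the modulator rate
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / 1024)  # slow input, within +/-1 FS

integrator, feedback = 0.0, 0.0
bits = np.empty(n)
for i in range(n):
    integrator += x[i] - feedback         # integrate the input-minus-DAC error
    feedback = 1.0 if integrator >= 0 else -1.0  # comparator = 1-bit quantizer
    bits[i] = feedback                    # comparator output doubles as the 1-bit DAC value

# Decimate: average each block of osr one-bit samples into one output word.
decimated = bits.reshape(-1, osr).mean(axis=1)
block_avg_input = x.reshape(-1, osr).mean(axis=1)
print(np.sqrt(np.mean((decimated - block_avg_input) ** 2)))  # small residual error
```

Averaging is the crudest possible decimation filter; real converters use sinc (CIC) or sharper FIR filters, but even this toy version makes the modulator’s average-tracking behavior visible.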

It’s no secret to most engineers that second-order delta-sigma ADCs push noise shaping further by using two integrators in the modulator loop. This deeper shaping shifts quantization noise farther into high frequencies, improving in-band resolution at a given oversampling ratio; as a rule of thumb, each doubling of the oversampling ratio buys roughly 15 dB (2.5 bits) of in-band SNR for a second-order loop, versus about 9 dB (1.5 bits) for a first-order one.

While the design adds complexity, it enhances signal fidelity and eases post-filtering demands. Second-order modulators are common in precision applications like audio and instrumentation, though stability and loop tuning become more critical as order increases.
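
Extending the behavioral model above to second order takes just one more accumulator. This sketch uses unity loop coefficients purely for illustration; as just noted, practical designs scale the integrator gains (and restrict input amplitude) to keep the loop stable:

```python
import numpy as np

# Behavioral sketch of a second-order delta-sigma modulator: two cascaded
# integrators, each receiving the 1-bit feedback. Unity coefficients are
# for illustration only; real designs tune these gains for stability.
n = 4096
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / 1024)

int1, int2, feedback = 0.0, 0.0, 0.0
bits = np.empty(n)
for i in range(n):
    int1 += x[i] - feedback       # first integrator: input error
    int2 += int1 - feedback       # second integrator: steeper noise shaping
    feedback = 1.0 if int2 >= 0 else -1.0   # 1-bit quantizer
    bits[i] = feedback

decimated = bits.reshape(-1, 64).mean(axis=1)  # same boxcar decimation as before
```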

At its core, the delta-sigma ADC represents a seamless integration of analog and digital processing. Its ability to achieve high-resolution conversion stems from the coordinated use of oversampling, noise shaping, and decimation—striking a delicate balance between speed and precision.

Delta-sigma ADCs made approachable

Although delta-sigma conversion is a complex process, several prewired ADC modules—built around popular, low-cost ICs like the HX711, ADS1232/34, and CS1237/38—make experimentation remarkably accessible. These chips offer high-resolution conversion with minimal external components, ideal for precision sensing and weighing applications.

Figure 3 A few widely used modules simplify delta-sigma ADC practice, even for those just starting out. Source: Author
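
As a taste of how approachable these parts are, here’s a minimal bit-banged read of an HX711 from a Raspberry Pi using the RPi.GPIO library, following the two-wire serial protocol in the HX711 datasheet. The pin assignments are assumptions for illustration; use whatever your wiring dictates:

```python
import time
import RPi.GPIO as GPIO

DOUT, PD_SCK = 5, 6   # hypothetical BCM pin assignments; match your wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(DOUT, GPIO.IN)
GPIO.setup(PD_SCK, GPIO.OUT, initial=GPIO.LOW)

def read_hx711() -> int:
    """Read one 24-bit two's-complement sample (channel A, gain 128)."""
    while GPIO.input(DOUT):          # DOUT falls low when a conversion is ready
        time.sleep(0.001)
    value = 0
    for _ in range(24):              # clock out 24 data bits, MSB first
        GPIO.output(PD_SCK, GPIO.HIGH)   # keep high-time short: >60 us powers the chip down
        GPIO.output(PD_SCK, GPIO.LOW)
        value = (value << 1) | GPIO.input(DOUT)
    GPIO.output(PD_SCK, GPIO.HIGH)   # 25th pulse selects channel A, gain 128, for the next sample
    GPIO.output(PD_SCK, GPIO.LOW)
    if value & 0x800000:             # sign-extend the 24-bit result
        value -= 1 << 24
    return value

print(read_hx711())
GPIO.cleanup()
```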

Delta-sigma vs. flash ADCs vs. SAR

Most of you already know this, but flash ADCs are the speed demons of the converter world—using parallel comparators to achieve ultra-fast conversion, typically at the expense of resolution.

Flash ADCs and delta-sigma architectures serve distinct roles, with conversion rates differing by up to two orders of magnitude. Delta-sigma ADCs are ideal for low-bandwidth applications—typically below 1 MHz—where high resolution (12 to 24 bits) is required. Their oversampling approach trades speed for precision, followed by filtering to suppress quantization noise. This also simplifies anti-aliasing requirements.

While delta-sigma ADCs excel in resolution, they are less efficient for multichannel systems. The architecture may use sampled-data modulators or continuous-time filters. The latter shows promise for higher conversion rates—potentially reaching hundreds of Msps—but with lower resolution (6 to 8 bits). Still in early R&D, continuous-time delta-sigma designs may challenge flash ADCs in mid-speed applications.

Interestingly, flash ADCs can also serve as internal building blocks within delta-sigma circuits to boost conversion rates.

Also, successive approximation register (SAR) ADCs sit comfortably between flash and delta-sigma designs, offering a practical blend of speed, resolution, and efficiency. Unlike flash ADCs, which prioritize raw speed using parallel comparators, SAR converters use a binary search approach that is slower but far more power-efficient.

Compared to delta-sigma ADCs, SAR designs avoid oversampling and complex filtering, making them ideal for moderate-resolution, real-time applications. Each architecture has its sweet spot: flash for ultra-fast, low-resolution tasks; delta-sigma for high-precision, low-bandwidth needs; and SAR for balanced performance across a wide range of embedded systems.
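
The binary search itself is simple enough to capture in a few lines. This behavioral Python sketch of a hypothetical 10-bit, 3.3-V SAR conversion trial-sets each bit from MSB to LSB and keeps it only if the internal DAC’s output stays at or below the input:

```python
# Behavioral sketch of a SAR ADC's binary search (hypothetical 10-bit, 3.3-V part).
def sar_convert(vin: float, vref: float = 3.3, nbits: int = 10) -> int:
    code = 0
    for bit in range(nbits - 1, -1, -1):
        trial = code | (1 << bit)                  # tentatively set this bit
        if trial * vref / (1 << nbits) <= vin:     # compare internal DAC against input
            code = trial                           # keep the bit; otherwise leave it clear
    return code

print(sar_convert(1.65))   # mid-scale input yields code 512 of 1023
```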

Delta-sigma converters elegantly bridge the analog and digital worlds, offering high-resolution performance through clever noise shaping and oversampling. Whether you are designing precision instrumentation or exploring audio fidelity, understanding their principles unlocks a deeper appreciation for modern signal processing.

Curious how these concepts translate into real-world design choices? Join the conversation—share your favorite delta-sigma use case or challenge in the comments. Let us map the noise floor together and surface the insights that matter.

T.K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


The post Delta-sigma demystified: Basics behind high-precision conversion appeared first on EDN.

Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback

Fri, 11/28/2025 - 15:00

Efficient battery management becomes increasingly important as demand for portable power continues to rise, especially since balanced cells help ensure safety, high performance, and a longer battery life. When cells are mismatched, the battery pack’s total capacity decreases, leading to the overcharging of some cells and undercharging of others—conditions that accelerate degradation and reduce overall efficiency. The challenge is how to maintain an equal voltage and charge among the individual cells.

Typically, it’s possible to achieve cell balancing through either passive or active methods. Passive balancing, the more common approach because of its simplicity and low cost, equalizes cell voltages by dissipating excess energy from higher-voltage cells through resistor or FET networks. While effective, this process wastes energy as heat.
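
To put a number on that waste: bleeding a 4.2-V cell through, say, a hypothetical 42-Ω balancing resistor draws 100 mA and dissipates 0.42 W per cell (P = V²/R), all of it as heat inside the pack.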

In contrast, active cell balancing redistributes excess energy from higher-voltage cells to lower-voltage ones, improving efficiency and extending battery life. Implementing active cell balancing involves an isolated, bidirectional power converter capable of both charging and discharging individual cells.

This Power Tip presents an active cell-balancing design based on a bidirectional flyback topology and outlines the control circuitry required to achieve a reliable, high-performance solution.

System architecture

In a modular battery system, each module contains multiple cells and a corresponding bidirectional converter (the left side of Figure 1). This arrangement enables any cell within Module 1 to charge or discharge any cell in another module, and vice versa. Each cell connects to an array of switches and control circuits that regulate individual charge and discharge cycles.

Figure 1 A modular battery system block diagram with multiple cells and a bidirectional converter, where any cell within Module 1 can charge/discharge any cell in another module. Each cell connects to an array of switches and control circuits that regulate individual charge/discharge cycles. Source: Texas Instruments

Bidirectional flyback reference design

The block diagram in Figure 2 illustrates the design of a bidirectional flyback converter for active cell balancing. One side of the converter connects to the bus voltage (18 V to 36 V), which could be the top of the battery cell stack, while the other side connects to a single battery cell (3.0 V to 4.2 V). Both the primary and secondary sides employ flyback controllers, allowing the circuit to operate bidirectionally, charging or discharging the cell as required.

Figure 2 A bidirectional flyback for active cell balancing reference design. Source: Texas Instruments

A single control signal defines the power-flow direction, ensuring that both flyback integrated circuits (ICs) never operate simultaneously. The design delivers up to 5 A of charge or discharge current, protecting the cell while maintaining efficiency above 80% in both directions (Figure 3).
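
To put rough numbers on those figures: charging at 5 A into a cell at 4.2 V delivers 21 W; at 80% efficiency, the bus side must then supply roughly 26 W, with about 5 W dissipated in the converter.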

Figure 3 Efficiency data for charging (left) and discharging (right). Source: Texas Instruments

Charge mode (power from Vbus to Vcell)

In charge mode, the control signal enables the charge controller, allowing Q1 to act as the primary FET; D1 is unused. On the secondary side, the discharge controller is disabled and Q2 is unused, while D2 serves as the output diode providing power to the cell. The secondary side implements constant-current and constant-voltage loops to charge the cell at 5 A until it reaches the programmed voltage (3.0 V to 4.2 V).
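
For intuition, here’s a simplified behavioral model in Python of that constant-current-to-constant-voltage handoff. The cell parameters and the linear open-circuit-voltage model are invented for illustration; the actual reference design closes these loops in analog control circuitry, not in software:

```python
# Simplified CC-CV charge-profile model. All cell parameters are illustrative.
I_LIMIT = 5.0              # constant-current setpoint, A
V_TARGET = 4.2             # constant-voltage setpoint, V
R_INT = 0.05               # assumed cell internal resistance, ohms
CAPACITY_AS = 3.0 * 3600   # assumed 3-Ah cell capacity, in ampere-seconds

v_ocv, charge_as, dt = 3.0, 0.0, 1.0   # start fully discharged; 1-s time step
for _ in range(4 * 3600):
    # Demand whatever current holds the terminal voltage at V_TARGET,
    # clamped to the 5-A constant-current limit (CC phase).
    i = min(I_LIMIT, max(0.0, (V_TARGET - v_ocv) / R_INT))
    charge_as += i * dt
    v_ocv = 3.0 + 1.2 * (charge_as / CAPACITY_AS)   # crude linear OCV model
    if i < 0.05:           # CV taper below 50 mA: call the cell charged
        break

print(f"charged {charge_as / 3600:.2f} Ah, final OCV {v_ocv:.3f} V")
```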

Discharge mode (power from Vcell to Vbus)

Just the opposite happens in discharge mode; the control signal enables the discharge controller and disables the charge controller. Q2 is now the primary FET, and D2 is inactive. D1 serves as the output diode while Q1 is unused. The cell side enforces an input current limit to prevent discharge of the cell above 5 A. The Vbus side features a constant-voltage loop to ensure that the Vbus remains within its setpoint.

Auxiliary power and bias circuits

The design also integrates two auxiliary DC/DC converters to maintain control functionality under all operating conditions. On the bus side, a buck regulator generates 10 V to bias the flyback IC and the discrete control logic that determines the charge and discharge direction. On the cell side, a boost regulator steps the cell voltage up to 10 V to power its controller and ensure that the control circuit is operational even at low cell voltages.
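
As a sanity check on that boost stage: an ideal boost converter’s conversion ratio is Vout/Vin = 1/(1 - D), so holding 10 V from a 3.0-V cell at the bottom of its range implies a duty cycle of D = 1 - 3.0/10 = 0.7, comfortably within a typical controller’s range.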

Multimodule operation

Figure 4 illustrates how multiple battery modules interconnect through the reference design’s units. The architecture allows an overcharged cell from a higher-voltage module, shown at the top of the figure, to transfer energy to an undercharged cell in any other module. The modules do not need to be connected adjacently. Energy can flow between any combination of cells across the pack.

Figure 4 Interconnection of battery modules using TI’s reference design for bidirectional balancing. Source: Texas Instruments

Future improvements

For higher-power systems (20 W to 100 W), adopting synchronous rectification on the secondary and an active-clamp circuit on the primary will reduce losses and improve efficiency, thus enhancing performance.

For systems exceeding 100 W, consider alternative topologies such as forward or inductor-inductor-capacitor (LLC) converters. Regardless of topology, you must ensure stability across the wide-input and cell-voltage ranges characteristic of large battery systems.

Modern multicell battery systems

The bidirectional flyback-based active cell balancing approach offers a compact, efficient, and scalable solution for modern multicell battery systems. By recycling energy between cells rather than dissipating this energy as heat, the design improves both energy efficiency and battery longevity. Through careful control-loop optimization and modular scalability, this architecture enables high-performance balancing in portable, automotive, and renewable energy applications.

Sarmad Abedin is currently a systems engineer with Texas Instruments on the power design services (PDS) team, working on both automotive and industrial power supplies. He has been designing power supplies for the past 14 years and has experience with both isolated and non-isolated power supply topologies. He graduated from Rochester Institute of Technology in 2011 with his bachelor’s degree.

 


The post Power Tips #147: Achieving discrete active cell balancing using a bidirectional flyback appeared first on EDN.
