EDN Network

Voice of the Engineer

Six critical trends reshaping 3D IC design in 2026


AI compute is scaling at ~1.35× per year, nearly twice the pace of transistor scaling. Thus, the semiconductor industry has reached a hard inflection point: if we can’t scale down, we must scale up. Increasingly, engineering teams are turning to 3D ICs to keep pace with the ascent of next-gen AI scaling.
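A quick compounding check makes that gap concrete. The 1.35× figure is from the paragraph above; the ~1.15×-per-year transistor pace is an illustrative assumption, chosen so the two rates differ by roughly 2× in logarithmic terms:

```python
# Compound growth at ~1.35x per year (figure quoted above) vs. a slower
# transistor-scaling pace of ~1.15x per year (illustrative assumption).
def growth(rate_per_year: float, years: int) -> float:
    return rate_per_year ** years

ai_5yr = growth(1.35, 5)          # ~4.5x in five years
transistor_5yr = growth(1.15, 5)  # ~2.0x under the assumed pace

print(f"AI compute after 5 years: {ai_5yr:.1f}x")
print(f"Transistor scaling after 5 years: {transistor_5yr:.1f}x")
```

Over a five-year horizon the compounding difference alone more than doubles the shortfall that packaging and 3D integration must absorb.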

However, designing in three dimensions also exacerbates system complexity, leaving IC and package designers with a pressing question: how do you explore millions of design considerations and still optimize and validate system performance within schedule constraints?

This article examines six trends that will help design teams overcome this challenge and help them reshape the future of 3D IC design in 2026.

 

Trend 1: STCO becomes crucial for multi-chiplet integration at AI scales

Advanced packages already exceed tens of millions of pins, with trajectories pointing toward hundreds of millions. At this scale, no design team can fully comprehend the system through traditional spreadsheets or point tools. Design complexity has fundamentally shifted to system-level orchestration.

This is where system-technology co-optimization (STCO) becomes critical by incorporating packaging architectures, die-to-die interconnects, power delivery networks, thermal paths, and mechanical reliability into a unified optimization loop.

Figure 1 STCO unifies packaging architectures, die-to-die interconnects, power delivery networks, thermal paths, and mechanical reliability into a single optimization loop. Source: Siemens EDA

A core benefit is the industry’s long-awaited “shift-left” for 3D ICs: Predictive multiphysics modeling allows teams to assess performance, power, thermal headroom, and mechanical stress concurrently and address architectural risks.

To enable true STCO, EDA toolchains must evolve from siloed analysis into integrated system platforms that create a unified 3D digital twin with shared data models, giving all stakeholders a persistent, system-level view and ensuring cross-domain optimization from a single, consistent dataset.

As chiplet-based architectures scale, STCO will become a foundational requirement for achieving performance, yield, and reliability targets in next-generation AI and high-performance computing systems.

Trend 2: Co-packaged optics reshape AI system architectures

As AI clusters push beyond 100 Tb/s per node, the gap between what silicon can generate and what traditional copper interconnects can deliver is widening fast. Even with SerDes continuing to scale, copper links are approaching fundamental limits in bandwidth density and energy efficiency, turning interconnect power into a major system bottleneck.

With global AI data center power demand projected to rise 50% by 2027, efficiency gains have become non-negotiable. This pressure is accelerating momentum behind co-packaged optics (CPO). By placing optical engines directly adjacent to switch ASICs, accelerators, and chiplets, CPO collapses electrical trace lengths from inches to millimeters, dramatically reducing signal loss while improving bandwidth density, latency, and power efficiency.

Figure 2 CPO reduces electrical trace lengths from inches to millimeters to significantly lower signal loss. Source: Siemens EDA

Nvidia reports that moving from pluggable transceivers to CPO in 1.6T networks can reduce link power from roughly 30 W to 9 W per port. Industry forecasts project over 10 million 3.2T CPO ports by 2029, signaling a shift from early pilots to volume deployment. However, this transition introduces new design challenges.
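Taking the quoted numbers at face value, the fleet-scale stakes are easy to sketch. Note this mixes generations — the 30 W→9 W figure is for 1.6T ports while the port forecast is for 3.2T — so treat it purely as an order-of-magnitude illustration:

```python
# Per-port link power, per the figures quoted above (assumed representative).
pluggable_w = 30.0   # pluggable transceiver, 1.6T port
cpo_w = 9.0          # co-packaged optics, same port

ports = 10_000_000   # forecast CPO port count by 2029 (used here as-is)
savings_mw = ports * (pluggable_w - cpo_w) / 1e6  # megawatts

print(f"Per-port savings: {pluggable_w - cpo_w:.0f} W "
      f"({1 - cpo_w / pluggable_w:.0%})")
print(f"Fleet-level savings at {ports:,} ports: {savings_mw:.0f} MW")
```

Hundreds of megawatts of interconnect power is roughly the output of a mid-size power plant, which is why CPO efficiency gains are treated as non-negotiable.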

Photonic ICs are highly temperature-sensitive, while 3D CPO integration adds hybrid bonding interfaces, die thinning, and vertical heat flow that create complex thermo-mechanical interactions. Thermal gradients can induce wavelength drift, alignment errors, and long-term reliability risks—making thermal-optical co-design and multiphysics analysis essential for production-scale CPO deployment.
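To see why thermal gradients matter so much here, consider a rough drift estimate. The ~0.08 nm/K sensitivity below is a commonly cited ballpark for silicon photonic resonators, not a figure from this article; real devices vary with geometry and cladding:

```python
# Back-of-envelope thermal wavelength drift for a silicon photonic
# resonator. The ~0.08 nm/K sensitivity is an assumed ballpark value.
drift_nm_per_k = 0.08

def wavelength_drift_nm(delta_t_k: float) -> float:
    return drift_nm_per_k * delta_t_k

print(f"Drift for a 5 K rise: {wavelength_drift_nm(5.0):.2f} nm")
```

For reference, 100-GHz DWDM channel spacing near 1550 nm is roughly 0.8 nm, so an uncompensated 10 K swing (~0.8 nm of drift at this sensitivity) can walk a resonance across an entire channel.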

Trend 3: Advanced packaging innovations drive integration scale-out

New power delivery architectures and vertical integration schemes continue to emerge. As thermocompression bonding reaches its integration limits, hybrid bonding will drive 3D interconnect pitches to 1 µm and below. Additionally, AI and high-performance computing (HPC) suppliers are considering wafer- and panel-level architectures to place more computing closer together, and foundries are pursuing more modular wafer-scale strategies.

Material innovation is also reshaping system integration. Glass substrates are gaining traction for large-area packaging and high-frequency AI and 6G applications, supporting more reliable signaling at higher data rates while reducing package warpage by nearly 50% in large substrates.

To adapt to this pace of change, an open and scalable workflow is critical to aligning new application requirements with manufacturability, yield, and cost. So, EDA tools must support rapid design-space exploration, early multiphysics modeling, and AI-assisted optimization to navigate the exponentially expanding solution space.

Trend 4: Novel thermal solutions rise to meet AI power density challenges

Power densities in leading-edge 3D ICs have already been compared to those at the surface of the sun. With multiple chiplets stacked in extreme proximity, 3D IC power densities create intense localized hotspots and trap heat in tiers far from the heat sink. This vertical thermal confinement is pushing conventional top-down air and cold-plate cooling approaches beyond their practical limits.

To address this challenge, microfluidic cooling architectures are being heavily researched and gaining early pilot traction. By etching micron-scale channels directly into silicon dies or interposers, engineers can route coolant within tens of micrometers of active transistors, enabling localized heat extraction and significantly shortening thermal conduction paths.
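A first-order estimate shows why those micron-scale channels are so effective. The sketch below uses the textbook laminar, fully-developed Nusselt number (Nu ≈ 3.66 for constant wall temperature); the channel size and coolant are assumptions:

```python
# First-order estimate of convective heat transfer in a silicon
# microchannel. Nu ~ 3.66 is the classic laminar, fully-developed,
# constant-wall-temperature value; channel size and coolant are assumed.
nu = 3.66
k_water = 0.6             # W/(m*K), thermal conductivity of water
d_h = 100e-6              # m, hydraulic diameter of the etched channel

h = nu * k_water / d_h    # convective coefficient, W/(m^2*K)
print(f"h ~ {h:.0f} W/m^2K")
```

A coefficient in the tens of thousands of W/m²·K — versus roughly 10–100 for natural air convection — is what makes in-die liquid cooling attractive despite the integration cost: the tiny hydraulic diameter in the denominator does all the work.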

At the package interface, thermal interface materials (TIM) remain one of the dominant thermal bottlenecks. TIM1—located between the die and heat spreader—is particularly critical due to its proximity to active silicon. An effective TIM must minimize thermal resistance while maintaining mechanical compliance under thermal cycling and package-induced stress.

Among near-term solutions, indium foils have emerged as leading candidates for high-performance TIM1 applications. Researchers are also exploring advanced alternatives, including phase-change materials, graphene and carbon nanotube composites, silver-filled thermal gels, and liquid metals. Some experimental approaches aim to reduce or bypass conventional TIM layers altogether by integrating cooling structures directly onto the die surface.
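The figure of merit here is the area-normalized conduction resistance, R″ = t/k, which a quick sketch makes concrete. The thicknesses and conductivities below are illustrative assumptions, and real TIM stacks add contact resistance at both interfaces:

```python
# Bulk conduction resistance of a TIM layer: R'' = t / k (area-normalized,
# ignoring interface contact resistance). Values below are assumptions.
def tim_resistance_mm2K_per_W(thickness_m: float, k_w_mk: float) -> float:
    return thickness_m / k_w_mk * 1e6  # m^2*K/W -> mm^2*K/W

indium = tim_resistance_mm2K_per_W(100e-6, 86.0)  # indium foil, k ~ 86 W/mK
grease = tim_resistance_mm2K_per_W(100e-6, 4.0)   # filled grease, k ~ 4 W/mK

print(f"Indium foil:    {indium:.1f} mm^2K/W")
print(f"Thermal grease: {grease:.1f} mm^2K/W")
```

At equal bond-line thickness, the metal foil's order-of-magnitude conductivity advantage translates directly into an order-of-magnitude lower bulk resistance — which is why indium leads the near-term TIM1 candidates.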

Ultimately, ensuring thermal, power, and mechanical reliability is an inherently interdisciplinary challenge—one that no single innovation in chip architecture, materials, or cooling design can solve in isolation. By unifying multiphysics analysis, thermal-driven floorplanning, and system-aware design within a single digital thread, Siemens Innovator3D IC and Calibre 3DThermal enable engineers to establish reliability early in the design process, evaluate trade-offs earlier, and converge faster on manufacturable, high-performance 3D IC designs.

Figure 3 Thermal solutions for 3D ICs allow engineers to evaluate trade-offs early in the design process. Source: Siemens EDA

Trend 5: AI accelerates 3D IC designs for AI

The semiconductor industry needs more than one million additional skilled workers by 2030. There simply aren’t enough domain experts to balance signal integrity, power integrity, thermal effects, and mechanical stress across complex 3D ICs.

AI offers a practical path to scale scarce engineering expertise and close the productivity gap. One high-impact application is AI-driven design-space exploration. Modern 3D IC architectures involve thousands to millions of tightly coupled variables, spanning die partitioning, material stacks, floorplanning, interconnect topology, and power delivery design.

Machine learning and reinforcement learning techniques accelerate exploration by rapidly predicting outcomes, learning from prior iterations, and uncovering non-obvious trade-offs that deliver measurable performance, power, and reliability gains.
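As a toy stand-in for these techniques, the sketch below broadly samples a one-knob design space and then refines around the incumbent — i.e., it "learns from prior iterations" in the crudest possible way. Real flows optimize thousands of coupled variables with learned surrogate models; everything here is synthetic:

```python
import random

# Toy design-space exploration: global random sampling, then local
# refinement around the best candidate found. The cost function is a
# synthetic stand-in for a slow multiphysics evaluation (optimum at 0.7).
random.seed(42)

def cost(x: float) -> float:
    return (x - 0.7) ** 2 + 0.1

# Stage 1: broad exploration of the design space.
candidates = [random.uniform(0.0, 1.0) for _ in range(50)]
best = min(candidates, key=cost)

# Stage 2: exploit what stage 1 learned -- sample near the incumbent.
refined = [min(max(best + random.gauss(0, 0.02), 0.0), 1.0)
           for _ in range(50)]
best = min(refined + [best], key=cost)

print(f"best x ~ {best:.3f}, cost ~ {cost(best):.4f}")
```

The explore-then-exploit pattern is the skeleton that ML and reinforcement learning flesh out: a learned model replaces blind sampling, so far fewer expensive evaluations are needed.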

Another critical application is automated power-thermal co-analysis. In 3D ICs, power dissipation directly raises temperature, while temperature feeds back into leakage and dynamic power behavior. Agentic AI and ML techniques improve both accuracy and turnaround time by automating complex modeling steps.

Predictive characterization can infer cell behavior at new temperature corners, while intelligent leakage modeling extracts temperature-dependent behavior directly from data, reducing manual calibration effort and improving model fidelity.
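The coupled loop described above can be hand-rolled as a fixed-point iteration: leakage rises with temperature, temperature rises with total power, and the two are iterated to convergence. All constants below are illustrative assumptions, including the rough "leakage doubles every 40 K" exponential model:

```python
# Minimal electro-thermal feedback loop, iterated to a fixed point.
# All constants are illustrative assumptions.
T_AMB = 45.0     # deg C, ambient/coolant temperature
R_TH = 0.2       # K/W, junction-to-ambient thermal resistance
P_DYN = 150.0    # W, dynamic power (temperature-independent here)
P_LEAK0 = 20.0   # W, leakage at the reference temperature
T_REF = 25.0     # deg C, reference temperature
T_DOUBLE = 40.0  # K per doubling of leakage (rough exponential model)

def leakage(t_c: float) -> float:
    return P_LEAK0 * 2 ** ((t_c - T_REF) / T_DOUBLE)

t = T_AMB
for _ in range(50):  # fixed-point iteration
    p_total = P_DYN + leakage(t)
    t_new = T_AMB + R_TH * p_total
    if abs(t_new - t) < 1e-6:
        break
    t = t_new

print(f"Converged junction temp: {t:.1f} C, leakage: {leakage(t):.1f} W")
```

The converged temperature sits noticeably above a naive estimate that ignores the feedback — exactly the kind of systematic error automated power-thermal co-analysis is meant to eliminate.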

Over the past several years, Siemens EDA has embedded industrial-grade AI directly into 3D IC design flows, from verification and multiphysics analysis to design exploration, guided by five foundational principles:

  • Accuracy: Conforming to strict physical laws
  • Verifiability: Transparent decision-making
  • Robustness: Consistent performance with new data
  • Generalizability: Applying insights across new problems
  • Usability: Seamless integration with existing CAD/CAE tools

Trend 6: Integrated multiphysics workflow sets new standards for 3D IC system performance

Thermal, mechanical, and electrical effects are no longer secondary concerns that can be checked after layout. A chiplet may meet specifications in isolation yet may suffer degraded reliability when exposed to the actual thermal gradients, stress fields, power-delivery impedance, and IR-drop profiles inside a 3D stack.

This reality is driving a clear shift left in multiphysics analysis. These effects must be considered as part of early architecture decisions, chiplet partitioning, RTL modeling, and floorplanning—when the most impactful trade-offs are still on the table.

To make this practical, the industry needs standardized “multiphysics Liberty files” that capture temperature- and stress-dependent behavior of chiplet blocks. With this information available upfront, designers can verify whether a chiplet will remain within safe operating limits under realistic thermal and mechanical conditions.
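Since no such standard exists yet, the sketch below is purely hypothetical — the field names and limits are invented — but it shows the kind of check a multiphysics-aware library view would enable:

```python
# Hypothetical safe-operating-area check against a chiplet block's
# characterized thermal/mechanical limits. All names/values are invented.
block_limits = {
    "t_max_c": 105.0,        # max characterized junction temperature
    "stress_max_mpa": 80.0,  # max characterized compressive stress
}

def within_safe_operating_area(env: dict, limits: dict) -> bool:
    return (env["t_c"] <= limits["t_max_c"]
            and env["stress_mpa"] <= limits["stress_max_mpa"])

# Predicted in-stack environment for this block:
env = {"t_c": 98.0, "stress_mpa": 85.0}
print(within_safe_operating_area(env, block_limits))  # False: stress violation
```

The value of standardizing this data is that the check runs at architecture time, before the block is ever committed to a position in the stack.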

Just as important, multiphysics evaluation cannot be a one-time checkpoint. 3D IC design is highly iterative, and every change—to layout, interfaces, materials, or stack configuration—can subtly reshape thermal paths, stress distributions, and electrical parasitics. Without continuous re-validation, risk accumulates quietly until it shows up as yield loss or reliability failures.

Integrated multiphysics platforms help teams stay ahead of this complexity by anchoring analysis to a shared, authoritative representation of the full 3D assembly. Working from a single source of truth allows teams to iterate confidently, uncover risks earlier, and validate decisions consistently across the entire stack.

The tools of the trade

Success in this new era requires more than a collection of isolated point tools. Design teams need a unified, end-to-end flow that brings together architecture exploration, multiphysics analysis, and cross-domain optimization in a single platform.

3D IC tools deliver exactly this integrated approach, tearing down the traditional walls between IC design, advanced packaging, and system-level validation. By giving design teams a shared source of truth and enabling them to tackle critical challenges earlier in the design cycle, these tools help engineers close on designs faster, explore more ambitious architectures, and ultimately build the silicon that will power the next generation of AI systems.

Kevin Rinebold is technology manager for 3D IC and heterogeneous packaging solutions at Siemens EDA. He has 34 years of experience in defining, developing, and supporting advanced packaging and system planning solutions for the semiconductor and systems markets. Prior to joining Siemens EDA, Kevin was product manager for IC packaging and co-design products at Cadence.

Related Content

The post Six critical trends reshaping 3D IC design in 2026 appeared first on EDN.

Perusing a LED-based gel nail polish UV lamp

Mon, 02/16/2026 - 21:05

This engineer doesn’t use nail polish, but his wife does. And he deals with plenty of PCBs. What do these things have in common? Read on.

Speaking of LEDs that lose their original intensity over time and use…

In the fall of 2020, after accepting that due to the COVID pandemic she wasn’t going to be getting back inside a nail salon any time soon, my wife invested in a UV lamp so she could do her own gel polish-based nails at home. While the terminology I just used in the prior sentence, not to mention the writeup title that preceded it, might be “old news” to at least some of you, others (like me, at first) might be confused. Here goes:

Gel details

First off, what is gel nail polish, both in an absolute sense and relative to its traditional counterpart? Here’s manufacturer OPI’s take:

A gel manicure is a coat of colored gel that looks deceptively similar to nail polish. It’s a thin brush-on formula, designed for high performance and a glossier finish than regular nail polish…An OPI GelColor manicure [also] lasts for up to 3 weeks…The primary difference between gel nails and a regular manicure is curing. Between each coat, you cure the color and set the gel nail polish by putting your nails under a special light.

That “special light” is a UV lamp. Initially, they were constructed using fluorescent tubes. But nowadays, mirroring the broader trend, they increasingly use LEDs instead. The one my wife first bought is Bolasen’s SunX Plus, a “Professional True 80W Salon Grade LED Nail Dryer for Gel Polish.” The link to it on Amazon’s main site now auto-forwards to a more recent battery-operated model (this one’s AC-powered), but I found a still-live product page copy on Amazon’s South Africa site (believe it or not). Here’s the associated stock artwork:

I’m not sure I want to know what the “no black hands” phrase references…

The black base shown in the stock images is missing in action; my wife found that foregoing the bottom plate expanded the lamp’s extremity-insertion gap spacing, thereby easing use. More generally, she’s now replaced this initial UV lamp with a newer successor; the original device’s intensity apparently faded over time, eventually taking excessively long to work its drying magic.

Polymer processing

Speaking of drying (or if you prefer, curing), what’s so special about UV light? Over to a blog post at the Manicurist website for an explanation:

Whether LED or UV, these lamps emit ultraviolet (UV) rays that trigger a chemical reaction called “polymerization”. Under UV exposure, the molecules in the polish bond together to form a solid and durable film known as a “polymer network”.

UV curing is more broadly used in a variety of applications and industries, as Wikipedia notes:

UV curing (ultraviolet curing) is the process by which ultraviolet light initiates a photochemical reaction that generates a crosslinked network of polymers through radical polymerization or cationic polymerization. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. UV curing is a low-temperature, high speed, and solventless process as curing occurs via polymerization. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector.

More generally, electrical engineers out there will likely be particularly interested, for example, in UV light’s key role in the photolithography process used to make printed circuit boards!

So why, if this is a UV lamp that’s supposedly emitting light beyond the visible spectrum, is its output discernible by the human visual system (along with my smartphone’s camera)?


(cool photo, eh?)

Part of the answer may be that the LEDs in the design aren’t true UV at all, but instead leverage the lower-cost alternative referred to as near-UV. Part of it may be that the output spectral plot is sufficiently broad to still noticeably “leak” into the violet portion of the visible range. And part of it may be that, to reassure users that the device is “on” (thereby preventing lengthy periods of peering at “pure” UV light, with likely retinal-damage consequences), the LED manufacturer added a phosphor layer to additionally generate a visible light output. Hold that thought.

Power spec uncertainty

Last (but definitely not least), before diving in, what’s with that “80W” output claim? The device actually supports two different power output modes, 80W and “low-heat” 55W, user-selectable via one of the four topside switches. When I initially plugged the lamp in without the LEDs illuminated, my Kill A Watt electricity usage meter measured 1W of power consumption:

Switch the LEDs on, in low-output mode, and I got 12W:

And in “high” mode? 23W:

12W ≠ 55W. And 23W ≠ 80W. So, what gives? At first, I wasn’t confident that my Kill A Watt was measuring power consumption correctly. But then I looked at the “wall part” that powers the lamp (in the first image of the sequence that follows, as with subsequent images in this post, accompanied by a 0.75″, i.e., 19.1 mm diameter U.S. penny for size comparison purposes):

Let’s zoom in on that last one:

Unless my math’s totally whacked, 24V times 1.5A equals 36W, not 80W, far from higher than that (to account for consumption inefficiency). So again, what gives? Was Bolasen’s marketing team being flat-out deceiving? Maybe: my cynical side certainly resonates with that conclusion.

But at the end of the day, I’ve decided to give the company the benefit of the doubt and conclude that, just as LED light bulb manufacturers do in spec’ing their devices vs incandescent precursors, Bolasen is using fluorescent UV tube intensity equivalents in rating its LED-based UV device. Online references I’ve found equate the lumens brightness rating of a 20-plus watt LED to that of an 80W fluorescent tube. Granted, that’s for visible light, but perhaps the comparison holds in the ultraviolet band as well…regardless, let me know your thoughts in the comments!

Diving inside

My background-info pontification now concluded (thank goodness, right?), let’s get to tearing down, shall we? Here’s our patient:

Raise the transport handle!

FCC-certified? Really? Call me cynical (again):

The specs say 42 total “beads” (LEDs). That matches my count in the photo that follows:

Look closely and you’ll also see, among other things, five screw heads, which I’m wagering are our pathway inside, along with an array of passive ventilation holes. Here’s the 12-LED cluster at the top (when the device is in its normal operating orientation, that is):

and the cluster-of-six at the back:

Along each side are four more clusters-of-three, such as this one:

IR enhancements

The ones at either end, i.e., straddling the lamp’s opening in its normal orientation, are special. The one at the right side (again, in normal orientation) also includes an IR (infrared) transmitter:

while the other additionally incorporates an IR receiver:

This, dear readers, is how the lamp implements the following function (quoting the original broken English on the Amazon product page):

Use the auto-sensor, it would turn on or off automatically when you put hand/foot in or out.

Insert your appendage (hand or foot, to be precise), and its presence breaks the infrared beam that normally traverses the transmitter-to-receiver gap in an uninterrupted fashion. Voila!
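Out of curiosity, here’s my guess at the control logic, expressed as a tiny debounced state machine (the actual firmware is unknown, so treat every detail as an assumption):

```python
# Guessed auto-sensor logic for the beam-break feature: the lamp turns on
# after a few consecutive "beam blocked" samples and off after a few
# consecutive "beam clear" samples. Entirely hypothetical firmware.
def auto_sensor(beam_samples, debounce=3):
    """Yield lamp on/off state for each receiver sample.

    beam_samples: True while the IR beam reaches the receiver
    (nothing inserted); False when a hand/foot blocks it.
    """
    lamp_on, blocked_count, clear_count = False, 0, 0
    for beam_ok in beam_samples:
        if beam_ok:
            clear_count, blocked_count = clear_count + 1, 0
            if clear_count >= debounce:
                lamp_on = False
        else:
            blocked_count, clear_count = blocked_count + 1, 0
            if blocked_count >= debounce:
                lamp_on = True
        yield lamp_on

# Beam blocked for five samples, then clear for three:
states = list(auto_sensor(
    [True, False, False, False, False, False, True, True, True]))
print(states)  # lamp latches on after 3 blocked samples, off after 3 clear
```

The debounce keeps a stray reflection or a finger brushing the beam from flickering the lamp, which matches the device’s smooth on/off behavior in practice.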

OK, let’s get those five screws outta there:

and with only a bit of remaining prying to do:

we’re in!

Although this original lamp may now be too slow to operate for my wife to tolerate, it still works. I’d therefore prefer to put it back together still fully functional and then donate it for someone else to use…or maybe I’ll keep it and use it to cure resin or…hey…make my own PCBs! Regardless, I’m keeping the internal wiring intact. Don’t worry, we’ll still be able to see its guts a’plenty. Let’s start with the inside of the top half of the chassis:

The power connector pops right out of its usual location:

Now for that PCB in the center:

At left is the two-wire connection that powers the LED array. At right is the power input. And the four-wire harness coming out the bottom feeds (and is fed by) the IR transmitter and receiver. The 14-lead IC labeled U1 on the PCB, in the upper right corner, is unmarked, alas, but is presumably the system “brains”. And at lower left is a P60NF03 n-channel MOSFET, likely employed both for LED power switching and for variable voltage generation (for the “80W” and “55W” modes).

Flip the PCB over:

and what now emerges into view is the multi-digit eight-segment display along with the four-switch control cluster.

Beating the heat

Now for the inside of the other (lower) half. Wow, look at all those thermal-dissipating metal plates (operating in conjunction with the earlier-mentioned passive ventilation array)!

First off, here are the connections to the IR transmitter:

and IR receiver:

Now for the LED power distribution network. The two-wire harness coming from the PCB first routes to the three-LED plate that’s in the lower left, just below the six-LED plate, in the earlier overview photo:

From there it splits in two directions. The “upper” (for lack of a better word) span first routes to the aforementioned six-LED plate:

Then to a series of three mid-span three-LED plates:

And finally, to a three-LED plate at the end in proximity to the IR receiver:

The “lower” span also then cycles through its own set of three three-LED plates, the last of them alongside the IR transmitter, and terminates at the 12-LED cluster:

Dual-frequency LEDs

There’s one more aspect to this design that I want to make sure I highlight, which keen-eyed readers may have already noticed. Check out this closeup of one of the LED “beads”:

The yellow tint is reflective of the thin phosphor layer applied to the inside of the “bead” dome to assist in generating augmented visible light for user-operation-stupidity-prevention purposes. But peer underneath it…are there two die in there? Indeed, there are. I’d originally thought I was instead just looking at the LED’s leadframe structure:

but, in the comments to a teardown video of a different UV lamp by “Big Clive” (whose always-excellent work I’ve showcased before):

was an enlightening insight from “restorer19”:

I have the 6-led UV panel you did a video on years ago, from the same brand, and it likely uses the same LEDs – I’ve sacrificed one of the LED chips and an additional one of the phosphor domes/blobs. It appears to have two LED dies on each chip, one bonded with two wires to each end of the module, and one bonded directly downward with only one bond wire leading to it. The 2-wire die (presumably 405nm) lights a visible purple at a lower voltage (just under 3V), and the 1-wire die takes greater voltage to light up. The 1-wire die looks identical to the large one in a 365-nm LED flashlight I recently bought – the surface of the die itself seems to phosphoresce in white, and any color from the semiconductor itself is invisible. Looking at an individual LED module under magnification while powered at about 3.2V makes the two different dies obvious without being too bright to look at.

“Big Clive” had done an earlier teardown of a more elementary UV lamp containing these same dual-die LEDs (this video is, I believe, the same one that “restorer19” was referring to):

wherein he’d conjectured (at least as I interpreted his comments) that the white-color—i.e., full-visible-spectrum-when-illuminated—die inside might purely be for “powered on” visual user-reassurance purposes. However, a Google search using the phrase “dual die UV LED” produced an interesting (at least to me) AI Overview response:

A dual die UV LED refers to a UV-LED light source, often in nail lamps or curing devices, that combines two different LED chips (dies), usually at wavelengths like 365nm and 395nm, to effectively cure a wider range of UV-sensitive gels, including both traditional UV gels and newer LED-only gels, offering faster, more complete curing than single-wavelength lamps. These lamps are popular in nail salons for their versatility, providing professional results by ensuring all gel types, from base coats to builder gels, are fully hardened.

Key Features & Benefits

  • Dual Wavelength: Uses two distinct UV wavelengths (e.g., 365nm for deeper penetration, 395nm for surface cure) for comprehensive curing.
  • Broad Compatibility: Cures all gel types (UV, LED, builder, hard gels).
  • Faster Curing: Significantly reduces curing time compared to older UV-only lamps.
  • User-Friendly: Often includes auto-sensors, timers (15, 30, 60, 90s), and removable bases for pedicures.
  • Professional Quality: Common in salons for consistent, high-quality results.

How it Works

Instead of a single type of UV emitter, a dual die lamp integrates two different LED chips within the same unit, each emitting at a specific UV wavelength, ensuring that various photoinitiators in different gels react and harden the product effectively.

In Summary: A “dual die” UV LED lamp is a modern, efficient solution for curing gel nails, combining multiple LED technologies for faster, more reliable results across the spectrum of gel products.

And, in finalizing this write-up just now prior to submitting it to Aalyia, I revisited the previously mentioned Amazon product page and noticed the following (bolded emphasis is mine):

Specifications:

  • Timer: 10s/30s/60s/99s low heat mode
  • Wattage: 80w(Max)
  • Display: Digital Time Display
  • Lamp beads: 42pcs Dual Dual Light Source
  • Spectrum: 365nm+405nm
  • Lifespan: 50,000H
  • Voltage: 100V-240V 50Hz/60Hz
  • Output: DC12V
  • Lamp Size:235*223*102mm
  • Ideal for: All nail gels

So, I’m guessing we now have our answer! In retrospect, I also realized that one of the earlier “stock” graphics referenced a “dual light source” and included an LED close-up revealing the dual die internal structure. That said, I’ll wrap up for now and await your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Perusing a LED-based gel nail polish UV lamp appeared first on EDN.

Eddy current in focus: A rapid revisit

Mon, 02/16/2026 - 11:32

Eddy currents are not just textbook curiosities; they are the hidden loops that appear whenever metal meets a changing magnetic field. From DIY levitation tricks to clever braking systems, these swirling paths of electrons keep finding new ways to surprise and inspire.

In this rapid revisit, we will zoom in on the essentials, highlight a few practical pointers, and remind ourselves why this classic effect still merits a place in every innovator’s playbook.

Eddy currents: From losses to brakes to rice cookers

Eddy currents are closed loops of electrical current induced in conductors by a changing magnetic field, as described by Faraday’s law of induction. These currents circulate in planes perpendicular to the applied magnetic field.

By Lenz’s law, eddy currents generate their own magnetic field that opposes the change which created them. This opposition manifests as magnetic drag, joule heating, and energy conversion when conductive materials are exposed to time-varying fields.

The interaction between the applied field and induced currents resists motion. A classic demonstration is a magnet falling slowly through a copper tube—its descent dampened by the opposing magnetic force. As eddy currents circulate, they dissipate energy as heat due to the conductor’s resistance. This loss is problematic in devices such as transformers, motors, and induction coils, where unwanted heating reduces efficiency.

At the same time, eddy currents enable useful applications. In magnetic braking systems, for example, a moving object’s kinetic energy is deliberately converted into heat, providing smooth, contactless deceleration.

Figure 1 A generic eddy current brake is shown with rotor eddy currents resisting motion. Source: Author

Eddy currents embody both challenge and opportunity. In power systems, they waste energy as heat and demand careful design measures such as laminated transformer cores or specialized alloys to minimize losses. Yet the same principle enables precise, contactless control in magnetic braking, induction heating, and nondestructive testing.
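The lamination fix can be quantified with the classical thin-sheet eddy loss density for sinusoidal flux, P = π²f²B_p²t²/(6ρ). The silicon-steel numbers below are typical textbook values, used here as illustrative assumptions:

```python
import math

# Classical thin-lamination eddy loss density (sinusoidal flux):
#   P = pi^2 * f^2 * Bp^2 * t^2 / (6 * rho)   [W/m^3]
# Material values are typical silicon-steel numbers (assumed).
def eddy_loss_w_per_m3(f_hz, b_peak_t, t_m, rho_ohm_m):
    return (math.pi ** 2) * f_hz ** 2 * b_peak_t ** 2 * t_m ** 2 / (6 * rho_ohm_m)

p_035 = eddy_loss_w_per_m3(50, 1.5, 0.35e-3, 4.7e-7)  # 0.35 mm lamination
p_018 = eddy_loss_w_per_m3(50, 1.5, 0.18e-3, 4.7e-7)  # 0.18 mm lamination

print(f"0.35 mm lamination: {p_035:.0f} W/m^3")
print(f"0.18 mm lamination: {p_018:.0f} W/m^3")
```

The t² dependence is the whole story: halving the lamination thickness roughly quarters the eddy loss, which is why transformer cores are built from stacks of thin insulated sheets rather than solid metal.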

Léon Foucault discovered eddy currents in the early 1850s; he also demonstrated Earth’s rotation with the Foucault pendulum. From Foucault’s copper disk to today’s rice cookers and industrial drives, eddy currents illustrate how a single electromagnetic effect can hinder efficiency while powering innovation. Their discovery remains a landmark in the history of electromagnetism.

Eddy currents at work: Quick insights

On paper, eddy currents arise from changing magnetic fields. They form when a conductor moves through a magnetic field or when the field around a stationary conductor varies. In short, any change in the intensity or direction of the magnetic field can drive circulating currents. Their strength scales with the rate of flux change, the loop area, and the field’s orientation, while higher conductor resistivity weakens them.

To grasp how this works, inertia makes a useful analogy. In classical mechanics, a moving body tends to keep moving, while a stationary one stays put. Electromagnetism shows a similar stubbornness: when a conductor encounters a changing magnetic field, it responds by generating an opposing flux through induction. That flux manifests as eddy currents. Picture them as invisible coils forming inside the conductor—the material itself acting like a “built-in electromagnet” that resists change.

A familiar example is the eddy current brake used in heavy vehicles and trains. These auxiliary brakes, often engaged on downhill runs, position electromagnets near a drum on the rotating axle. Once energized, the drum develops eddy currents that push back against the changing flux, creating drag. The beauty of this system lies in non-contact braking—no friction, no wear on drums or pads. Of course, the kinetic energy does not vanish; conservation of energy dictates it reemerges as Joule heating, dissipated as heat in the drum.
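At low speeds, the drag on such a drum is often approximated as F ≈ σ·t·A·B²·v (conductivity × wall thickness × pole-face area × flux density squared × surface speed). This ignores edge effects and breaks down above a critical speed; all the numbers below are illustrative assumptions:

```python
# Low-speed approximation of eddy-current drag on a conducting drum wall
# moving past a magnetic pole face: F ~ sigma * t * A * B^2 * v.
# All values are illustrative assumptions.
sigma = 3.5e7   # S/m, conductivity of an aluminum drum
t = 5e-3        # m, drum wall thickness
area = 0.01     # m^2, pole-face area
b = 0.5         # T, flux density under the pole

def drag_force_n(v_mps: float) -> float:
    return sigma * t * area * b ** 2 * v_mps

v = 10.0  # m/s, drum surface speed
f = drag_force_n(v)
print(f"Drag at {v} m/s: {f:.0f} N; dissipated power: {f * v / 1e3:.1f} kW")
```

Because drag scales with v, dissipated power scales with v² — consistent with retarders being most effective at speed and useless for holding a vehicle at rest, which is why friction brakes remain necessary.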

The same principle appears in everyday life. Induction cooktops and induction heating (IH) rice cookers rely on high-frequency currents in their coils to generate rapidly changing magnetic fields. These fields drive eddy currents in the conductive pot walls, producing Joule heat that cooks food directly and efficiently.

As a side note, eddy current brakes and electric retarders share the same physics but differ in role. An eddy current brake is a general device found in rail systems, roller coasters, or test rigs, providing smooth, non-contact braking. An eddy current electric/electromagnetic retarder, by contrast, is an auxiliary system integrated into heavy vehicles—buses, trucks, and coaches—to control speed on long descents.

Retarders ease the load on friction brakes, preventing overheating and wear, though they still demand cooling since induced currents generate substantial heat. In short, brakes emphasize stopping power, while retarders emphasize sustained drag torque for safe downhill control.

Figure 2 An electromagnetic retarder mounts mid-shaft and delivers non-contact braking for heavy vehicles. Source: Telma

Harnessing eddy currents in dynamometers

Dynamometers often rely on eddy current action behind the scenes to absorb and measure power. In an eddy current dynamometer, a rotating metallic disc or drum is subjected to a magnetic field; as the engine drives the disc, circulating currents are induced in the metal. These eddy currents create a resistive force proportional to speed, effectively loading the engine while converting mechanical energy into heat.

The dynamometer’s role is to provide a controlled, repeatable load while precisely measuring torque and power, enabling accurate evaluation of engine or motor performance. Their application domain spans automotive testing, industrial machinery evaluation, and research laboratories where reliable power measurement is essential.

Figure 3 An eddy current dynamometer, delivering full power at high rotation speeds, is designed for fast-rotating motors. Source: Magtrol

Eddy current sensors: From magnetic fields to motion insight

An eddy current sensor, often referred to as a gap sensor, operates by generating a high-frequency magnetic field through a coil embedded in the sensor head. When a conductive measuring object approaches this field, eddy currents are induced on its surface, altering the impedance of the sensor coil.

By detecting these impedance changes, the sensor translates variations in the sensor-to-target distance into a precise relationship between displacement and output voltage. Their application fields span precision displacement measurement, vibration monitoring, and shaft run-out detection, with widespread use across the automobile, aerospace, and semiconductor industries.

Figure 4 An industrial-grade contactless proximity sensor measures position by interpreting eddy currents. Source: Messotron

Put another way, the eddy current method employs high-frequency magnetic fields generated by driving an alternating current through the coil in the sensor head. When a metallic target enters this field, electromagnetic induction causes magnetic flux to penetrate the object’s surface, producing circulating eddy currents parallel to that surface. These currents modify the coil’s impedance, and eddy current displacement sensors detect the resulting oscillation changes to measure distance.

Figure 5 Drawing illustrates the core mechanism of an eddy current displacement sensor. Source: Author

At this point, it’s important to distinguish between an eddy current probe and an eddy current sensor. The probe is the coil assembly that induces and detects eddy currents, typically used in non-destructive testing (NDT), while the sensor integrates the probe with electronics to deliver calibrated displacement or vibration signals in industrial applications.

Also note that the sensing field of a non-contact sensor’s probe engages the target across a defined area, known as the spot size. For accurate measurement, the target must be larger than this spot size; otherwise, special calibration is required.

Spot size is directly proportional to the probe’s diameter. In eddy-current sensors, the magnetic field fully surrounds the end of the probe, creating a comparatively large sensing field. As a result, the spot size extends to many times the diameter of the probe’s sensing coil.

Wrap-up: Bridging theory and practice in eddy currents

Time for a quick break, yet so many details remain in the fascinating world of eddy currents. I am not covering every nuance here because eddy current methods are broad and specialized, with deeper dives best reserved for dedicated sections. To anchor the essentials: eddy current examination is a nondestructive testing method based on electromagnetic induction.

When applied to detect surface-breaking flaws in components and welds, it’s known as surface eddy current testing. Specially designed probes are used for this inspection, with coils mounted near one end of a plastic housing. During inspection, the technician guides the coil end of the probe across the surface of the component, scanning for variations that reveal discontinuities.

Well, now switch on your eddy current soldering iron—or set up one yourself—and start doing something practical, like building your own probes, sensors, or experimental rigs. Hands-on exploration is the best way to connect theory with practice, and this is the perfect moment to make the leap from reading to making.

For curious makers, eddy current soldering irons are not just another tool; they are a gateway into experimenting with induction heating itself. A coil generates a rapidly changing magnetic field, inducing circulating currents in the conductive tip or sleeve. These eddy currents encounter resistance and dissipate energy as heat, delivering rapid warm-up and stable temperature exactly where it is needed.

Whether you pick up a ready-made station or build a DIY rig, you will be blending theory with practice in the most tangible way. It’s a perfect project to showcase how electromagnetic principles—Faraday’s law and Lenz’s law in action—can power real-world innovation.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Eddy current in focus: A rapid revisit appeared first on EDN.

Safe operating area

Fri, 02/13/2026 - 15:00

Any semiconductor has limits on how much voltage, how much current, and for how long combinations of voltage and current can be supported in normal usage. Sometimes that information is provided as part of the device’s datasheet, and sometimes that information is NOT provided. In either case, though, there ARE limits which MUST be observed.

Any switching semiconductor device must address voltage and current issues. Drive considerations aside, from the standpoint of “safe operating area” or SOA, the voltage Vds and the current Ids of a power MOSFET and the Vce and Ic of a bipolar transistor are at issue.

Please consider the following unwisely designed circuit in Figure 1.

Figure 1 A badly designed switching circuit requiring the 2N2222 at Q1 to repeatedly dump the charge of capacitor C1 of 0.01 µF. Source: John Dunn

What we’ve done wrong here is require the 2N2222 at Q1 to repeatedly dump the charge of capacitor C1 of 0.01 µF. The Vce and the Ic versus time burdens on Q1 are as shown. The current peak of nearly 500 mA is pretty big, and to our dismay, it occurs while the value of Vce is still fairly high, which means that there is a substantial peak power dissipation demand placed on Q1.

Having constructed a Lissajous pattern of Vce versus Ic as shown in Figure 2, we process that pattern.

Figure 2 The voltage versus current Lissajous pattern for Q1. Source: John Dunn

Just one comment about obtaining that Lissajous pattern. The oscilloscope simulations in the Multisim-SPICE I was using do not support “x” versus “y” capability, and therefore cannot provide the Lissajous pattern. I made the pattern you see here by reading out the voltage and current values at each time step of the oscilloscope display and then plotting them using GWBASIC. There were 240 datums for each, a total of 480 readings, which were pretty tedious to acquire. Ordinarily, I can’t concentrate on work and listen to music at the same time, but this time, listening to some Petula Clark recordings through all of this did help to ease the monotony.

In all my years of acquaintance with the 2N2222, I have never seen any specification or any datasheet that presented the SOA boundaries for that device. In fact, I’ve never seen the SOA boundaries for any TO-18 packaged device. In the TO-5 and TO-39 packages, the one and only time I have ever come across SOA boundary information was for the 2N3053 and 2N3053A, and even today, some datasheets omit that information.

As a result, we just have to make do with what we’ve got, which for now is this partial reconstruction of the 2N3053 and 2N3053A SOA chart taken from a very old datasheet from RCA that I stashed away long ago (Figure 3).

Figure 3 Safe operating area reconstruction of the 2N3053 and 2N3053A SOA chart taken from a very old datasheet from RCA. Source: John Dunn

We replot the Vce versus Ic data using logarithmic scaling, and then we overlay that result with the SOA boundaries of our NPN, but we encounter a difficulty (Figure 4).

Figure 4 SOA examination using logarithmic scaling. Source: John Dunn

The 2N2222 has a peak power rating of 1.2 watts, while the 2N2219, a first cousin to the 2N2222, has a peak power rating of 3 watts, versus a 7-watt rating for the 2N3053. I would therefore imagine that the 2N2222 SOA boundaries are quite a bit lower than those of the 2N3053. We note that the SOA curve of Q1 operating in this circuit moves outside of the DC operating boundary for the 2N3053 and thus, in all likelihood, it moves well outside of the 2N2222 equivalent limits.

Voltage and current excursions toward the upper right of this diagram are NOT a good thing.

The 2N2222 as used here can well be expected to fail, maybe sooner, maybe later, but it is set up for eventual calamity. Regardless of other factors that may apply to this design, remedial SOA measures should be considered.

The first is to reduce the capacitance of C1 (Figure 5).

Figure 5 The effects of reducing the capacitance of C1. Source: John Dunn

Using a smaller value of C1, or perhaps using no C1 at all, will lower the peak collector current and will make the switching events occur more quickly. This will take us away from the upper right corner of the SOA plot, and from that standpoint, this is a very good thing to do.

Adding R3, as shown in Figure 6, can also reduce the peak collector current.

Figure 6 The effects of including a collector resistance. Source: John Dunn

Although using R3 will slow down the C1 discharge rate for each discharge event, doing so will keep the peak collector current down, and that is a desirable SOA outcome.

If, for some reason, C1 has to be there, omitting R3 is not a good idea.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Safe operating area appeared first on EDN.

A tutorial on instrumentation amplifier boundary plots—Part 2

Fri, 02/13/2026 - 12:31

The first installment of this series introduced the boundary plot, an often-misunderstood plot found in instrumentation amplifier (IA) datasheets. It also discussed various IA topologies: traditional three operational amplifier (op amp), two op amp, two op amp with a gain stage, current mirror, current feedback with super-beta transistors, and indirect current feedback.

Part 1 also included derivations of the internal node equations and transfer function of a traditional three-op-amp IA.

The second installment will introduce the input common-mode and output swing limitations of op amps, which are the fundamental building blocks of IAs. Modifying the internal node equations from Part 1 yields equations that represent each op amp’s input common-mode and output swing limitation at the output of the IA as a function of the device’s input common-mode voltage.

The article will also examine a generic boundary plot in detail and compare it to plots from device datasheets to corroborate the theory.

Op-amp limitations

For an op amp to output a linear voltage, the input signal must be within the device’s input common-mode range specification (VCM) and the output (VOUT) must be within the device’s output swing range specification. These ranges depend on the supply voltages, V+ and V– (Figure 1).

Figure 1 Op-amp input common-mode (green) and output swing (red) ranges depend on supplies. Source: Texas Instruments

Figure 2 depicts the datasheet specifications and corresponding VCM and VOUT ranges for an op amp, such as TI’s OPA188, given a ±15V supply. For this device, the output swing is more restrictive than the input common-mode voltage range.

Figure 2 Op-amp VCM and VOUT ranges are shown for a ±15 V supply of the OPA188 op amp. Source: Texas Instruments

The boundary plot

The boundary plot for an IA is a representation of all internal op-amp input common-mode and output swing limitations. Figure 3 depicts a boundary plot. Operating outside the boundaries of the plot violates at least one input common-mode or output swing limitation of the internal amplifiers. Depending on the severity of the violation, the output waveform may depict anything from minor distortion to severe clipping.

Figure 3 Here is what an IA boundary plot looks like for the INA188 instrumentation amplifier. Source: Texas Instruments

This plot is specified for a particular supply voltage (VS = ±15 V), reference voltage (VREF = 0 V), and gain of 1 V/V.

Figure 4 illustrates the linear output range given two different input common-mode voltages. For example, if the common-mode input of the IA is 8 V, the output will be valid only from approximately –11 V to +11 V. If the common-mode input is mid supply (0 V), however, an output swing of ±14.78 V is available.

Figure 4 Output voltage range is shown for different common-mode voltages. Source: Texas Instruments

Notice that the VCM (blue arrows) ranges from –15 V to approximately +13.5 V. Both the mid-supply output swing and VCM ranges are consistent with the op-amp ranges depicted in Figure 2.

Each line in the boundary plot corresponds to a limitation—either VCM or VOUT—of one of the three internal amplifiers. Therefore, it’s necessary to review the internal node equations first derived in Part 1. Figure 5 depicts the standard three-op-amp IA, while Equations 1 through 6 define the voltage at each internal node.

Figure 5 Here is what a three-op-amp IA looks like. Source: Texas Instruments

(1) (2) (3) (4) (5) (6)

In order to plot the node equation limits on a graph with VCM and VOUT axes, solve Equation 6 for VD, as shown in Equation 7:

(7)

Substituting Equation 7 for VD in Equations 1 through 6 and solving for VOUT yields Equations 8 through 13. These equations represent each amplifier’s input common-mode (VIA) and output (VOA) limitation at the output of the IA as a function of the device’s input common-mode voltage.

(8)

(9)

(10)

(11)

(12)

(13)

One important observation from Equations 8 and 9 is that the IA limitations from the common-mode range of A1 and A2 depend on the gain of the input stage, GIS. The output swing limitations, however, do not depend on GIS, as shown by Equations 11 and 12.

Plotting each of these equations for the minimum and maximum input common-mode and output swing limitations for each op amp (A1, A2 and A3) yields the boundary plot. Figure 6 depicts a generic boundary plot. The linear operation of the IA is the interior of all plotted equations.

Figure 6 Here is an example of a generic boundary plot. Source: Texas Instruments

The dotted lines in Figure 6 represent the input common-mode limitations for A1 (blue) and A2 (red). Notice that the slope of the dotted lines depends on GIS, which is consistent with Equations 8 and 9.

Solid lines represent the output swing limitations for A1 (blue), A2 (red) and A3 (green). The slope of these lines does not depend on GIS, as shown by Equations 11 through 13.

Figure 6 doesn’t show the line for VIA3 because the R2/R1 voltage divider attenuates the output of A2; A2 typically reaches the output swing limitation before violating A3’s input common-mode range.

The lines plotted in quadrants one and two (positive common-mode voltages) use the maximum input common-mode and output swing limits for A1 and A2, whereas the lines plotted in quadrants three and four (negative common-mode voltages) use the minimum input common-mode and output swing limits.

Considering only positive common-mode voltages from Figure 6, Figure 7 depicts the linear operating region of the IA when G = 1 V/V. In this example, the input common-mode limitation of A1 and A2 is more restrictive than the output swing.

Figure 7 The input common-mode range limit of A1 and A2 defines the linear operation region when G = 1 V/V. Source: Texas Instruments

Increasing the gain of the device changes the slope of VIA1 and VIA2 (Figure 8). Now both the input common-mode and output swing limitations define the linear operating region.

Figure 8 The input common-mode range and output swing limits of A1 and A2 define the linear operating range when G > 1 V/V. Source: Texas Instruments

Regardless of gain, the output swing always limits the linear operating region when it’s more restrictive than the input common-mode limit (Figure 9).

Figure 9 The output swing limit of A1 and A2 defines the linear operating region independent of gain. Source: Texas Instruments

Datasheet examples

Figure 10 illustrates the boundary plot from the INA111 datasheet. Notice that the output swing limit of A1 and A2 defines the linear operating region. Therefore, the output swing limitations of A1 and A2 must be equal to or more restrictive than the input common-mode limitations.

Figure 10 Boundary plot for the INA111 instrumentation amplifier shows output swing limitations. Source: Texas Instruments

Figure 11 depicts the boundary plot from the INA121 datasheet. Notice that the linear operating region changes with gain. At G = 1 V/V, the input common-mode limit restricts the linear operating region. As gain increases, however, the linear operating region is limited by both the output swing and input common-mode limitations (Figure 8).

Figure 11 Boundary plot is shown for the INA121 instrumentation amplifier. Source: Texas Instruments

Third installment coming

The third installment of this series will explain how to use these equations and concepts to develop a tool that automates the drawing of boundary plots. This tool enables you to adjust variables such as supply voltage, reference voltage, and gain to ensure linear operation for your application.

Peter Semig is an applications manager in the Precision Signal Conditioning group at TI. He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.

Related Content

The post A tutorial on instrumentation amplifier boundary plots—Part 2 appeared first on EDN.

AI-powered MCU elevates vehicle intelligence

Fri, 02/13/2026 - 00:04

The Stellar P3E automotive MCU from ST features built-in AI acceleration, enabling real-time AI applications at the edge. Designed for the next generation of software-defined vehicles, it simplifies multifunction integration, supporting X-in-1 electronic control units from hybrid/EV systems to body zonal architectures.

According to ST, the Stellar P3E is the first automotive MCU with an embedded neural network accelerator. Its Neural-ART accelerator, a dedicated neural processing unit (NPU) with an advanced data-flow architecture, offloads AI workloads from the main cores, speeding up inference execution and delivering real-time, AI-based virtual sensing.

The MCU incorporates 500-MHz Arm Cortex-R52+ cores, delivering a CoreMark score exceeding 8000 points. Its split-lock feature lets designers balance functional safety with peak performance, while smart low-power modes go beyond conventional standby. The device also includes extensible xMemory, with up to twice the density of standard embedded flash, plus rich I/O interfaces optimized for advanced motor control.

Stellar P3E production is scheduled to begin in the fourth quarter of 2026.

Stellar P3E product page 

STMicroelectronics

The post AI-powered MCU elevates vehicle intelligence appeared first on EDN.

Gate drivers emulate optocoupler inputs

Fri, 02/13/2026 - 00:04

Single-channel isolated gate drivers in the 1ED301xMC121 series from Infineon are pin-compatible replacements for optocoupler-based designs. They replicate optocoupler input characteristics, enabling drop-in use without control circuit changes, while using non-optical isolation internally to deliver higher CMTI and improved switching performance for SiC applications.

Their opto-emulator input stage uses two pins and integrates reverse voltage blocking, forward voltage clamping, and an isolated signal transmitter. With CMTI exceeding 300 kV/µs, 40-ns propagation delay, and 10-ns part-to-part matching, the devices deliver robust, high-speed switching performance.

The series includes three variants—1ED3010, 1ED3011, and 1ED3012—supporting Si and SiC MOSFETs as well as IGBTs. Each delivers up to 6.5 A of output current to drive power modules and parallel switch configurations in motor drives, solar inverters, EV chargers, and energy storage systems. The drivers differ in UVLO thresholds: 8.5 V, 11 V, and 12.5 V for the 1ED3010, 1ED3011, and 1ED3012, respectively.

The 1ED3010MC121, 1ED3011MC121, and 1ED3012MC121 drivers are available in CTI 600, 6-pin DSO packages with more than 8 mm of creepage and clearance.

Infineon Technologies 

The post Gate drivers emulate optocoupler inputs appeared first on EDN.

IC enables precise current sensing in fast control loops

Fri, 02/13/2026 - 00:03

Allegro Microsystems’ ACS37017 Hall-effect current sensor achieves 0.55% typical sensitivity error across temperature and lifetime. High accuracy, a 750‑kHz bandwidth, and a 1‑µs typical response time make the ACS37017 suitable for demanding control loops in automotive and industrial high-voltage power conversion.

Unlike conventional sensors whose accuracy suffers from drift, the ACS37017 delivers long-term stability through a proprietary compensation architecture. This technology maintains precise measurements, ensuring control loops remain stable and efficient throughout the operating life of the vehicle or power supply.

The ACS37017 features an integrated non-ratiometric voltage reference, simplifying system architecture by eliminating the need for external precision reference components. This integration reduces BOM costs, saves board space, and removes a major source of system-level noise and error.

The high-accuracy ACS37017 expands Allegro’s current sensor portfolio, complementing the ACS37100 (optimized for speed) and the ACS37200 (optimized for power density). Request the preliminary datasheet and engineering samples on the product page linked below.

ACS37017 product page

Allegro Microsystems

The post IC enables precise current sensing in fast control loops appeared first on EDN.

Microchip empowers real-time edge AI

Fri, 02/13/2026 - 00:03

Microchip provides a full-stack edge AI platform for developing and deploying production-ready applications on its MCUs and MPUs. These devices operate at the network edge, close to sensors and actuators, enabling deterministic, real-time decision-making. Processing data locally within embedded systems reduces latency and improves security by limiting cloud connectivity.

The full-stack application portfolio includes pretrained, production-ready models and application code that can be modified, extended, and deployed across target environments. Development and optimization are performed using Microchip’s embedded software and ML toolchains, as well as partner ecosystem tools. Edge AI applications include:

  • AI-based detection and classification of electrical arc faults using signal analysis
  • Condition monitoring and equipment health assessment for predictive maintenance
  • On-device facial recognition with liveness detection for secure identity verification
  • Keyword spotting for consumer, industrial, and automotive command-and-control interfaces

Microchip is working with customers deploying its edge AI solutions, providing model training guidance and workflow integration across the development cycle. The company is also collaborating with ecosystem partners to expand available software and deployment options. For more information, visit the Microchip Edge AI page.

Microchip Technology 

The post Microchip empowers real-time edge AI appeared first on EDN.

AI agent automates front-end chip workflows

Fri, 02/13/2026 - 00:03

Cadence has launched the ChipStack AI Super Agent, an agentic AI solution for front-end silicon design and verification. The platform automates key design and test workflows—including coding, test plan creation, regression testing, debugging, and issue resolution—offering significant productivity gains for chip development teams. It leverages multiple AI agents that work alongside Cadence’s existing EDA tools and AI-based optimization solutions.

The ChipStack AI Super Agent supports both cloud-based and on-premises AI models, including NVIDIA NeMo models that can be customized for specific workflows, as well as OpenAI GPT. By combining agentic AI orchestration with established simulation, verification, and AI-assistant tools, the platform streamlines complex semiconductor workflows.

Early deployments at leading semiconductor companies have demonstrated measurable reductions in verification time and improvements in workflow efficiency. The platform is currently available in early access for customers looking to integrate AI-driven automation into front-end chip design and verification processes.

Additional information about the ChipStack AI Super Agent can be found on the Cadence AI for Design page.

Cadence

The post AI agent automates front-end chip workflows appeared first on EDN.

Wearables for health analysis: A gratefulness-inducing personal experience

Thu, 02/12/2026 - 15:00

What should you do if your wearable device tells you something’s amiss health-wise, but you feel fine? With this engineer’s experience as a guide, believe the tech and get yourself checked.

Mid-November was…umm…interesting. After nearly two days with an elevated heart rate, which I later realized was “enhanced” by cardiac arrhythmia, I ended up overnighting at a local hospital for testing, medication, procedures, and observation. But if not for my wearable devices, I never would have known I was having problems, to my potentially severe detriment.

I felt fine the entire time; the repeated alerts coming from my smart watch and smart ring were my sole indication to seek medical attention. I’ve conceptually discussed the topic of wearables for health monitoring plenty of times in the past. Now, however, it’s become deeply personal.

Late-night, all-night alerts

Sunday evening, November 16, 2025, my Pixel Watch smartwatch began periodically alerting me to an abnormally high heart rate. As you can see from the archived reports from Fitbit (the few-hour data gaps each day reflect when the Pixel Watch is on the charger instead of my wrist):

and my Oura Ring 4:

for the prior two days, my normal sleeping heart rate is in the low-to-mid 40s bpm (beats per minute) range. However, during the November 16-to-17 overnight cycle, both wearable devices reported that I was spiking to the mid-140s, along with a more general bpm elevation-vs-norm:

ECG-expanding condition understanding

By Monday evening, I was sufficiently concerned that I shared with my wife what was going on. She recommended that in addition to continued monitoring of my pulse rate and trend, I should also use the ECG (i.e., EKG, for electrocardiogram) app that was built into her Apple Watch Ultra. I first checked to see whether there was a similar app on my Pixel Watch. And indeed, there was: Fitbit ECG. A good overview video is embedded within some additional product documentation:

Here’s an example displayed results screenshot directly from my watch, post-hospital visit, when my heart was once again thankfully beating normally:

I didn’t think to capture screenshots that Monday night—my thoughts were admittedly on other, more serious matters—but here’s a link to the Fitbit-generated November 17 evening report as a PDF, and here’s the captured graphic:

The average bpm was 110. And the report summary? “Atrial Fibrillation: Your heart rhythm shows signs of atrial fibrillation (AFib), an irregular heart rhythm.”

The next morning (PDF, again), when I re-did the test:

my average bpm was now 140. And the conclusion? “Inconclusive high heart rate: If your heart rate is over 120 beats per minute, the ECG app can’t assess your heart rhythm.”

The data was even more disconcerting this time, and the overall trend was in a discouraging direction. I promptly made an emergency appointment for that same afternoon with my doctor. She ran an ECG on the office equipment, whose results closely (and impressively so) mirrored those from my Pixel Watch. Then she told me to head directly to the closest hospital; had my wife not been there to drive me, I probably would have been transported in an ambulance.

Thankfully, as you may have already noticed from the above graphs, after bouts of both atrial flutter and fibrillation, my heart rate began to return to its natural rhythm by late that same evening. Although the Pixel Watch battery had died by ~6 am on Wednesday morning, my recovery was already well underway:

and the Oura Ring kept chugging along to document the normal heartbeat restoration process:

Too high…too low….just right

I was discharged on Wednesday afternoon with medication in-hand, along with instructions to make a follow-up appointment with the cardiologist I’d first met at the hospital emergency room. But the “excitement” wasn’t yet complete. The next morning, my Pixel Watch started yelling at me again, this time because my heart rate was too low:

My normal resting heart rate when awake is in the low-to-mid 50s. But now it was ~10 points below that. I had an inkling that the root cause might be a too-high medication dose, and a quick call to the doctor confirmed my suspicion. Splitting each tablet in two got things back to normal:

As I write this, I’m nearing the end of a 30-day period wearing a cardiac monitor; a quite cool device, the details of which I’ll devote to an upcoming blog post. My next (and ideally last) cardiologist appointment is a month away; I’m hopeful that this arrhythmia event was a one-time fluke.

Regardless, my unplanned hospital visit, specifically the circumstances that prompted it, was more than a bit of a wakeup call for this former ultramarathoner and broader fitness activity aficionado (admittedly a few years and a few pounds ago). That said, I’m now a lifelong devotee and advocate of smart watches, smart rings, and other health monitoring wearables as effective adjuncts to watching for traditional symptoms, which, as my case study exemplifies, might not even manifest in response to an emerging condition, and which you’d only notice anyway if you were paying sufficient ongoing attention to your body.

Thoughts on what I’ve shared today? As always, please post ‘em in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post Wearables for health analysis: A gratefulness-inducing personal experience appeared first on EDN.

How to design a digital-controlled PFC, Part 2

Thu, 02/12/2026 - 15:00

In Part 1 of this article series, I explained the system block diagram and each of the modules of digital control. In this second installment, I’ll talk about how to write firmware to implement average current-mode control.

Average current-mode control

Average current-mode control, as shown in Figure 1, is common in continuous-conduction-mode (CCM) power factor correction (PFC). It has two loops: a voltage loop that works as an outer loop and a current loop that works as an inner loop. The voltage loop regulates the PFC output voltage (VOUT) and provides current commands to the current loop. The current loop forces the inductor current to follow its reference, which is modulated by the AC input voltage.

Figure 1 Average current-mode control is common in CCM PFC, where a voltage loop regulates the PFC output voltage and provides current commands to the current loop. Source: Texas Instruments

Normalization

Normalizing all of the signals in Figure 1 makes it possible to handle different signal scales and prevents calculations from overflowing.

For VOUT, VAC, and IL, multiply their analog-to-digital converter (ADC) reading by a factor of 1/4096 (assuming a 12-bit ADC), per Equation 1:

Vnorm = ADC reading / 4096     (Equation 1)

For VREF, multiply its setpoint by the sense-divider ratio divided by the ADC full-scale reference (Equation 2):

VREF,norm = VREF × R2 / (R1 + R2) / 3.3     (Equation 2)

where R1 and R2 are the resistors used in Figure 4 from Part 1 of this article series, and 3.3 V is the ADC reference voltage (as in Equation 10).

After normalization, all of the signals are in the range of (–1, +1). The compensator GI output d is in the range of (0, +1), where 0 means 0% duty and 1 means 100% duty.
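As a firmware sketch (not TI's code), the normalization can be written directly in C. The 3.3-V ADC reference is the assumption used above, and the divider resistor values here are hypothetical placeholders for the Figure 4 resistors from Part 1:

```c
#define ADC_FULL_SCALE 4096.0f  /* 12-bit ADC */
#define ADC_VREF       3.3f     /* assumed ADC reference voltage */
#define R1_OHMS        1.0e6f   /* hypothetical divider resistors; */
#define R2_OHMS        6.2e3f   /* use the Figure 4 values from Part 1 */

/* Normalize a raw 12-bit ADC reading to the (0, +1) range. */
float normalize_adc(unsigned raw)
{
    return (float)raw / ADC_FULL_SCALE;
}

/* Scale a voltage setpoint (in volts) into the same normalized units
 * as the divided-down, ADC-sampled feedback signal. */
float normalize_setpoint(float volts)
{
    return volts * (R2_OHMS / (R1_OHMS + R2_OHMS)) / ADC_VREF;
}
```

With these placeholder values, a 380-V setpoint lands comfortably inside the (0, +1) range, which is the point of the exercise: every signal the compensators see shares one scale.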

Digital voltage-loop implementation

As shown in Figure 1, an ADC senses VOUT for comparison to VREF. Compensator GV processes the error signal, which is usually a proportional integral (PI) compensator, as I mentioned in Part 1. The output of this PI compensator will become part of the current reference calculations.

VOUT carries a ripple at double the line frequency, which couples into the current reference and affects total harmonic distortion (THD). To reduce this ripple effect, set the PFC voltage-loop bandwidth much lower than the AC frequency; for example, around 10 Hz. However, this low voltage-loop bandwidth will cause VOUT to dip excessively when a heavy load is applied.

Meeting the load transient response requirement will require a nonlinear voltage loop. When the voltage error is small, use a small Kp, Ki gain. When the error exceeds a threshold, using a larger Kp, Ki gain will rapidly bring VOUT back to normal. Figure 2 shows a C code example for this nonlinear voltage loop.

Figure 2 C code example for this nonlinear voltage-loop gain. Source: Texas Instruments
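For reference alongside Figure 2, here is a minimal sketch of such a nonlinear PI update; the gain values and the error threshold are illustrative assumptions, not TI's numbers, and all quantities are in the normalized units defined earlier:

```c
/* Illustrative gains and threshold only; tune for the actual plant. */
#define KP_SMALL 0.5f
#define KI_SMALL 0.01f
#define KP_LARGE 4.0f
#define KI_LARGE 0.08f
#define ERR_THRESHOLD 0.05f   /* normalized voltage-error threshold */

static float gv_integ;        /* integrator state */

/* One nonlinear voltage-loop update: small gains near the setpoint to
 * reject double-line-frequency ripple, large gains during big transients
 * to bring VOUT back quickly. Output is clamped to the (0, 1) range. */
float gv_update(float vref, float vout)
{
    float err = vref - vout;
    float kp = KP_SMALL, ki = KI_SMALL;

    if (err > ERR_THRESHOLD || err < -ERR_THRESHOLD) {
        kp = KP_LARGE;
        ki = KI_LARGE;
    }

    gv_integ += ki * err;
    if (gv_integ > 1.0f) gv_integ = 1.0f;   /* anti-windup clamp */
    if (gv_integ < 0.0f) gv_integ = 0.0f;

    float out = kp * err + gv_integ;
    if (out > 1.0f) out = 1.0f;
    if (out < 0.0f) out = 0.0f;
    return out;
}
```

The design choice is the gain switch itself: inside the threshold the loop stays slow and ripple-blind; outside it, the larger gains dominate until the error shrinks again.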

Digital current-loop implementation takes 3 steps:

Step 1: Calculating the current reference

As shown in Figure 1, Equation 3 calculates the current-loop reference, IREF:

IREF = A × C / B     (Equation 3)

where A is the voltage-loop output, C is the AC input voltage, and B is the square of the AC root-mean-square (RMS) voltage.

Subtracting the AC neutral-measured voltage from the AC line-measured voltage obtains the AC input voltage (Equation 4 and Figure 3):

VAC = VAC,L − VAC,N     (Equation 4)

Figure 3 VAC calculated by subtracting AC neutral-measured voltage from AC line-measured voltage. Source: Texas Instruments

Equation 5 defines the RMS value as:

VRMS = √( (1/T) ∫ v(t)² dt )     (Equation 5)

with Equation 6 in discrete format:

VRMS² = (1/N) × Σ V(n)²     (Equation 6)

where V(n) represents each ADC sample, and N is the total number of samples in one AC cycle.

After sampling VAC at a fixed rate, each sample is squared and accumulated over one AC cycle. Dividing the accumulated sum by the number of samples in that cycle yields the square of the RMS value.
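In firmware, this reduces to a squared-sample accumulator, sketched here:

```c
/* Accumulate squared VAC samples over one AC cycle, then divide by the
 * sample count: returns the square of the RMS value (Equation 6). */
float vac_rms_squared(const float *samples, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; i++)
        acc += samples[i] * samples[i];
    return acc / (float)n;
}
```

Calling it once per AC cycle with that cycle's sample buffer yields the B term used in the current-reference calculation; note there is no square root, since Equation 3 wants the squared RMS value directly.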

In steady state, you can treat both voltage-loop output A and the square of VAC RMS value B as constant; thus, only C (VAC) modulates IREF. Since VAC is sinusoidal, IREF is also sinusoidal (Figure 4).

Figure 4 Sinusoidal current reference IREF due to sinusoidal VAC. Source: Texas Instruments

Step 2: Calculating the current feedback signal

Compare the Hall-effect sensor output in Figure 5 from Part 1 with IREF in Figure 4 from this installment: they have the same shape. The only difference is that the Hall-effect sensor output has a DC offset; therefore, it cannot be used directly as the feedback signal. You must remove this DC offset before closing the loop.

Figure 5 Calculating the current feedback signal. Source: Texas Instruments

Also, the normalized Hall-effect sensor output is between (0, +1); after subtracting the DC offset, its magnitude becomes (–0.5, +0.5). To maintain the (–1, +1) normalization range, multiply it by 2, as shown in Equation 7 and Figure 5:

IFB = 2 × (IHALL,norm − 0.5)     (Equation 7)

Step 3: Closing the current loop

Now that you have both the current reference and feedback signal, let’s close the loop. During the positive AC cycle, the control loop has standard negative feedback control. Use Equation 8 to calculate the error going to the control loop:

During the negative AC cycle, the higher the inductor current, the lower the value of the Hall-effect sensor output; thus, the control loop needs to change from negative feedback to positive feedback. Use Equation 9 to calculate the error going to the control loop:

Compensator GI processes the error signal, which is usually a PI compensator, as mentioned in Part 1. Sending the output of this PI compensator to the pulse-width modulation (PWM) module will generate the corresponding PWM signals. During a positive cycle, Q2 is the boost switch and controlled by D; Q1 is the synchronous switch and controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle. During a negative cycle, the function of Q1 and Q2 swaps: Q1 becomes the boost switch controlled by D, while Q2 works as a synchronous switch controlled by 1-D. Q3 remains on, and Q4 remains off for the whole negative AC half cycle.
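The switch-role swap described above can be summarized in a small helper; this is a sketch of the gating logic only (dead-time insertion and the PWM peripheral programming are omitted):

```c
/* Gate assignments per AC half cycle: during the positive half, Q2 is
 * the boost switch (duty D) and Q1 the synchronous switch (1-D), with
 * Q4 held on and Q3 held off; the roles swap in the negative half. */
typedef struct {
    float q1_duty, q2_duty;
    int q3_on, q4_on;
} pfc_gates_t;

pfc_gates_t pfc_gates(float d, int positive_half)
{
    pfc_gates_t g;
    if (positive_half) {
        g.q2_duty = d;          /* boost switch */
        g.q1_duty = 1.0f - d;   /* synchronous switch */
        g.q4_on = 1;
        g.q3_on = 0;
    } else {
        g.q1_duty = d;          /* boost switch */
        g.q2_duty = 1.0f - d;   /* synchronous switch */
        g.q3_on = 1;
        g.q4_on = 0;
    }
    return g;
}
```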

Loop tuning

Tuning a PFC control loop is similar to doing so in an analog PFC design, with the exception that here you need to tune Kp, Ki instead of playing pole-zero. In general, Kp determines how fast the system responds. A higher Kp makes the system more sensitive, but a Kp value that’s too high can cause oscillations.

Ki removes steady-state errors. A higher Ki removes steady-state errors more quickly, but can lead to instability.

It is possible to tune PI manually through trial and error – here is one such tuning procedure:

  1. Set Kp, Ki to zero.
  2. Gradually increase Kp until the system’s output starts to oscillate around the setpoint.
  3. Set Kp to approximately half the value that caused the oscillations.
  4. Slowly increase Ki to eliminate any remaining steady-state errors, but be careful not to reintroduce oscillations.
  5. Make small, incremental adjustments to each parameter to achieve the intended system performance.

Knowing the PFC Bode plot makes loop tuning much easier; see reference [1] for a PFC tuning example. One advantage of a digital controller is that it can measure the Bode plot by itself. For example, the Texas Instruments Software Frequency Response Analyzer (SFRA) enables you to quickly measure the frequency response of your digital power converter [2]. The SFRA library contains software functions that inject a frequency into the control loop and measure the response of the system. This process provides the plant frequency response characteristics and the open-loop gain frequency response of the closed-loop system. You can then view the plant and open-loop gain frequency response on a PC-based graphic user interface, as shown in Figure 6. All of the frequency response data is exportable to a CSV file or Microsoft Excel spreadsheet, which you can then use to design the compensation loop.

Figure 6 The Texas Instruments SFRA tool allows for the quick frequency response measurement of your power converter. Source: Texas Instruments

System protection

You can implement system protection through firmware. For example, to implement overvoltage protection (OVP), compare the ADC-measured VOUT with the OVP threshold and shut down PFC if VOUT exceeds this threshold. Since most microcontrollers also have integrated analog comparators with a programmable threshold, using the analog comparator for protection can achieve a faster response than firmware-based protection. Using an analog comparator for protection requires programming its digital-to-analog converter (DAC) value. For an analog comparator with a 12-bit DAC and 3.3V reference, Equation 10 calculates the DAC value as:

DAC value = VTHRESHOLD × R2 / (R1 + R2) × 4096 / 3.3     (Equation 10)

where VTHRESHOLD is the protection threshold, and R1 and R2 are the resistors used in Figure 4 from Part 1.
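Equation 10 translates directly into a small helper. The divider values below are hypothetical stand-ins for Part 1's Figure 4 resistors; substitute the real ones:

```c
#define OVP_R1   1.0e6f  /* hypothetical divider resistors; */
#define OVP_R2   6.2e3f  /* use the Figure 4 values from Part 1 */
#define DAC_FS   4096.0f /* 12-bit comparator DAC */
#define DAC_VREF 3.3f    /* comparator DAC reference, volts */

/* Convert an OVP threshold (volts at the PFC output) to the comparator
 * DAC code per Equation 10, rounding to the nearest code. */
unsigned ovp_dac_code(float v_threshold)
{
    float code = v_threshold * (OVP_R2 / (OVP_R1 + OVP_R2)) * DAC_FS / DAC_VREF;
    return (unsigned)(code + 0.5f);
}
```

One practical check worth building in: the computed code must stay below 4096, i.e., the divided-down threshold must fit under the 3.3-V DAC reference.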

State machine

From power-on to turn-off, the PFC operates in different states under different conditions; together, these states and the transitions between them form a state machine. The PFC state machine transitions from one state to another in response to external inputs or events. Figure 7 shows a simplified PFC state machine.

Figure 7 Simplified PFC state machine that transitions from one state to another in response to external inputs or events. Source: Texas Instruments

Upon power up, PFC enters an idle state, where it measures VAC and checks if there are any faults. If no faults exist and the VAC RMS value is greater than 90V, the relay closes and the PFC starts up, entering a ramp-up state where the PFC gradually ramps up its VOUT by setting the initial voltage-loop setpoint equal to the measured actual VOUT voltage, then gradually increasing the setpoint. Once VOUT reaches its setpoint, the PFC enters a regulate state and will stay there until an abnormal condition occurs, such as overvoltage, overcurrent or overtemperature. If any of these faults occur, the PFC shuts down and enters a fault state. If the VAC RMS value drops below 85V, triggering VAC brownout protection, the PFC also shuts down and enters an idle state to wait until VAC returns to normal.
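The transitions just described can be sketched in C, using the 90-V start and 85-V brownout thresholds from the text; fault detection and the ramp-completion check are abstracted as flags, and the relay/soft-start details are omitted:

```c
typedef enum { PFC_IDLE, PFC_RAMP_UP, PFC_REGULATE, PFC_FAULT } pfc_state_t;

#define VAC_START_RMS 90.0f  /* close relay and start above this */
#define VAC_STOP_RMS  85.0f  /* brownout: shut down below this */

/* One simplified state-machine step, following Figure 7. */
pfc_state_t pfc_step(pfc_state_t s, float vac_rms, int fault, int vout_at_setpoint)
{
    if (fault)
        return PFC_FAULT;                 /* OV, OC, or OT shuts the PFC down */
    if (vac_rms < VAC_STOP_RMS)
        return PFC_IDLE;                  /* brownout: wait for VAC to return */

    switch (s) {
    case PFC_IDLE:     return (vac_rms > VAC_START_RMS) ? PFC_RAMP_UP : PFC_IDLE;
    case PFC_RAMP_UP:  return vout_at_setpoint ? PFC_REGULATE : PFC_RAMP_UP;
    case PFC_REGULATE: return PFC_REGULATE;
    case PFC_FAULT:    return PFC_FAULT;
    }
    return PFC_IDLE;
}
```

Note the built-in hysteresis: starting requires VAC above 90 V RMS, but once running, the PFC tolerates sags down to 85 V RMS before shutting down.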

Interrupts

A PFC has many tasks to do during normal operation. Some tasks are urgent and need processing immediately, some are less urgent and can be processed later, and some need processing at regular intervals. These different task priorities are handled by interrupts. Interrupts are events detected by the digital controller that preempt the normal program flow, pausing the current program and transferring control to a specified user-written firmware routine called the interrupt service routine (ISR). The ISR processes the interrupt event, then resumes normal program flow.

Firmware structure

Figure 8 shows a typical PFC firmware structure. There are three major parts: the background loop, ISR1, and ISR2.

Figure 8 PFC firmware structure with three major parts: the background loop, ISR1, and ISR2. Source: Texas Instruments

The firmware starts from the function main(). In this function, the controller initializes its peripherals: configuring the ADC, PWM, general-purpose input/output, and universal asynchronous receiver transmitter (UART); setting up protection thresholds; configuring interrupts; and initializing global variables. The controller then enters a background loop that runs infinitely. This background loop contains non-time-critical tasks and tasks that do not need processing regularly.

ISR2 is an interrupt service routine that runs at 10 kHz. The triggering of ISR2 suspends the background loop. The CPU jumps to ISR2 and starts executing the code in ISR2. Once ISR2 finishes, the CPU returns to where it was upon suspension and resumes normal program flow.

The tasks in ISR2 that are time-critical or processed regularly include:

  • Voltage-loop calculations.
  • PFC state machine.
  • VAC RMS calculations.
  • E-metering.
  • UART communication.
  • Data logging.

ISR1 is an interrupt service routine that runs once every PWM cycle: for example, if the PWM frequency is 65 kHz, then ISR1 runs at 65 kHz. ISR1 has a higher priority than ISR2, which means that if ISR1 triggers while the CPU is in ISR2, ISR2 suspends, and the CPU jumps to ISR1 and starts executing the code in ISR1. Once ISR1 finishes, the CPU goes back to where it was upon suspension and resumes normal program flow.

The tasks in ISR1 are more critical than those in ISR2 and need to be processed more quickly. These include:

  • ADC measurement readings.
  • Current reference calculations.
  • Current-loop calculations.
  • Adaptive dead-time adjustments.
  • AC voltage-drop detection.
  • Firmware-based system protection.

The current loop is an inner loop of average current-mode control. Because its bandwidth must be higher than that of the voltage loop, put the current loop in faster ISR1, and put the voltage loop in slower ISR2.
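To see how the two ISR rates interact, here is a host-side C simulation of the scheduling, not firmware: the 65-kHz rate is the PWM-cycle example from the text, while the 1-µs scheduler step and one-second window are artifacts of the simulation:

```c
unsigned long isr1_calls, isr2_calls, background_iters;

void isr1(void) { isr1_calls++; }   /* current loop, protections (65 kHz) */
void isr2(void) { isr2_calls++; }   /* voltage loop, state machine (10 kHz) */

/* Simulate one second of scheduling with a 1-us tick: ISR1 always wins
 * over ISR2 when both are due, and the background loop runs only when
 * neither ISR is pending. */
void simulate_one_second(void)
{
    double t = 0.0, next1 = 0.0, next2 = 0.0;
    const double dt = 1e-6;

    while (t < 1.0) {
        if (t >= next1)      { isr1(); next1 += 1.0 / 65000.0; }
        else if (t >= next2) { isr2(); next2 += 1.0 / 10000.0; }
        else                 background_iters++;
        t += dt;
    }
}
```

After one simulated second, ISR1 has run roughly 65,000 times and ISR2 roughly 10,000 times, with the background loop soaking up the remaining ticks, mirroring the priority structure of Figure 8.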

AC voltage-drop detection

In a server application, when an AC voltage drop occurs, the PFC controller must detect it rapidly and report the voltage drop to the host. Rapid AC voltage-drop detection becomes more important when using a totem-pole bridgeless PFC.

As shown in Figure 9, assume a positive AC cycle where Q4 is on: if switching continues after the AC voltage drops, the turn-on of synchronous switch Q1 discharges the bulk capacitor, which means that it is no longer possible to guarantee the holdup time.

Figure 9 The bulk capacitor discharging after the AC voltage drops. Source: Texas Instruments

To rapidly detect an AC voltage drop, you can use a firmware phase-locked loop (PLL) [3] to generate an internal sine-wave signal that is in phase with AC input voltage, as shown in Figure 10. Comparing the measured VAC with this PLL sine wave will determine the AC voltage drop, at which point all switches should turn off.

Figure 10 Rapid AC voltage-drop detection by using a firmware PLL to generate an internal sine-wave signal that is in phase with AC input voltage. Source: Texas Instruments

Design your own digital control

Now that you have learned how to use firmware to implement an average current-mode controller, how to tune the control loop, and how to construct the firmware structure, you should be able to design your own digitally controlled PFC. Digital control can do much more. In the third installment of this article series, I will introduce advanced digital control algorithms to reduce THD and improve the power factor.

Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.

Related Content

References

  1. Sun, Bosheng, and Zhong Ye. “UCD3138 PFC Tuning.” Texas Instruments application report, literature No. SLUA709, March 2014.
  2. Texas Instruments. n.d. SFRA powerSUITE digital power supply software frequency response analyzer tool for C2000™ MCUs. Accessed Dec. 9, 2025.
  3. Bhardwaj, Manish. “Software Phase Locked Loop Design Using C2000™ Microcontrollers for Single Phase Grid Connected Inverter.” Texas Instruments application report, literature No. SPRABT3A, July 2017.

The post How to design a digital-controlled PFC, Part 2 appeared first on EDN.

Edge AI in a DRAM shortage: Doing more with less

Thu, 02/12/2026 - 11:59

Memory is having a difficult year. As manufacturers prioritize DDR5 and high-bandwidth memory (HBM) for data centers and large-scale AI workloads, availability has tightened and costs have risen sharply: up to 3–4x compared to Q3 2025 levels, with market signals suggesting the peak has not yet been reached.

Even hyperscalers—typically at the frontline—are reportedly receiving only about 70% of their allocated volumes, and analysts expect tight conditions to persist well into 2026 and possibly even 2027.

The strain isn’t evenly distributed, with the steepest price hikes and longest lead times concentrated in higher-capacity modules. Those components sit directly in the path of cloud infrastructure demand, and their pricing reflects it. On the other hand, lower-capacity modules (1-2 GB) have remained accessible and far more stable.

This trend is now influencing how teams think about system design. AI workloads built around large memory footprints now run into procurement challenges; systems engineered to operate within modest memory baselines avoid both the price spikes and the uncertainty. The outcome is important: in a shortage, architecture built for efficiency gives teams more strategic freedom compared to architectures built for abundance.

The most effective solution: DRAM-less AI accelerator

In a constrained memory market, the most robust solution is also the simplest: remove the dependency on external DRAM entirely. Take the case of Hailo-8 and Hailo-8L AI accelerators. By keeping the full inference pipeline on-chip, Hailo-8/8L eliminate the most expensive and supply-constrained component in the system.

In practical terms, avoiding DRAM can reduce the bill of materials by up to $100 per device, while also improving power efficiency, latency, and system reliability. Not every AI application can avoid DRAM, though.

Generative AI workloads inherently require more memory, and systems that run them will continue to rely on external DRAM. But even in this case, memory constraints strongly favor moving inference closer to the edge.

Running generative AI on the edge allows teams to work with smaller, domain-specific models rather than large, general-purpose ones designed for the cloud. Smaller models translate directly into smaller DRAM requirements, reducing cost, easing procurement, and improving power efficiency. This is where edge-focused accelerators come into play, enabling efficient generative AI inference while keeping memory footprints as lean as possible.

Privacy and latency have long shaped the case for running intelligence on the device. In 2025, another factor cemented it: the expectation that generative AI simply be there. Users now rely on transcription, summarization, audio cleanup, translation, and basic reasoning, often with no tolerance for startup delays or network dependency.

Recent cloud outages from AWS, Azure and Cloudflare underscored how fragile cloud-only assumptions can be. When the networks faced disruptions, everyday features across consumer apps and enterprise workflows failed. Even brief interruptions highlighted how a single infrastructure dependency can take down tools that users now rely on dozens of times a day.

As AI moves deeper into everyday workflows and users expect agentic AI capabilities to be available instantly, a hybrid approach proves far more resilient. Keep frequently used intelligence local, either on the device or in a nearby gateway, while using the cloud for heavier or less frequent tasks. And crucially, when models are small enough to operate within 1-2 GB of memory, that hybrid approach becomes far easier to implement using memory configurations that are still readily sourced.

Small models change the equation

Until recently, generative AI required the memory and compute scale of the cloud. A new class of small language models (SLMs) and compact vision language models (VLMs) now deliver strong instruction following, reliable tool use, and competitive benchmark performance at a fraction of the parameters.

Releases like IBM’s Granite 4.0 Nano line demonstrate how far efficient architectures have come. These models show that some generative AI tasks and applications no longer need massive, expensive system memory—they need well-defined domains, optimized inference paths, and efficient pre- and post-processing.

For hardware teams, this evolution has many practical benefits. Smaller models reduce the “memory tax” that has been baked into AI design for years. When an entire intelligence pipeline can operate in 1-2 GB of DRAM, several constraints loosen simultaneously:

  • Costs fall as systems avoid the inflated pricing of high-capacity DRAM.
  • Supply-chain risk drops as lower-capacity memory chips remain easier to procure.
  • Power consumption improves because smaller models with hardware-assisted offload (NPU or AI accelerator) run cooler and more efficiently.
  • System reliability increases as local inference keeps essential features online even during network outages.

An AI architecture designed for efficiency rather than abundance fits squarely within the ethos of edge computing. Many high-value agentic AI tasks—summarizing a conversation, describing an image, or translating speech—do not require massive models. In narrow domains, compact models can deliver faster, more private and consistent results because they operate with fewer unknowns.

The path forward

If the DRAM shortage proves anything, it’s that the most resilient AI systems are the ones designed around constraints, not excess. Teams are re-evaluating assumptions about model size, memory baselines, and what “good enough” looks like for common tasks. They’re recognizing that domain-specific intelligence often performs better than brute-force scale—especially in environments that demand consistency, privacy, and low power draw.

Edge AI fits naturally within this moment. Its memory profile lines up with the DRAM capacities that remain accessible, and its deployment model brings stability to the tasks users rely on most. As supply tightness continues, organizations that invest in leaner model design and hybrid deployment strategies will be better positioned to deliver stable, responsive AI without absorbing high memory costs.

Avi Baum is chief technology officer (CTO) and co-founder of Hailo.

Special Section: AI Design

The post Edge AI in a DRAM shortage: Doing more with less appeared first on EDN.

Self-oscillating sawtooth generator spans 5 decades of frequencies

Wed, 02/11/2026 - 15:00

There are many ways of generating analog sawtooth waveforms with oscillating circuits. Here’s a method that employs a single supply voltage rail to produce a buffered signal whose frequency can be varied over a range from 10Hz to 1MHz (Figure 1).

Figure 1 The sawtooth output waveform is the signal “saw” available at the output of op amp U1a. Its frequency is set by the value of resistor R6 which can vary from 120 Ω to 12 MΩ.

Wow the engineering world with your unique design: Design Ideas Submission Guide

U3, powered through R5, uses Q2 and R6 to create a constant current source. U3 enforces a constant voltage Vref of 1.2 V between its V+ and FB pins. Q2 is a high-beta NPN transistor that passes virtually all of R6’s current, Vref/R6, through its collector to charge C3 with a constant current, producing the linear ramp portion of this ground-referenced sawtooth.

Op-amp U1 buffers this signal and applies it to an input of comparator U2a. The voltage at the comparator’s other input causes its output to transition low when the sawtooth rises to 1 volt. U2a, R1, Q1, R8, C1, and U2b produce a 100 ns one-shot signal at the output of U2b, which drives the gate of M1 high to rapidly discharge C3 to ground.

The frequency of the waveform is 1.2 / ( R6 ×  C3 ) Hz. With the availability of U3’s Vref tolerances as low as 0.2% and a 0.1% tolerance for R6, the circuit’s overall tolerances are generally limited by an at best 1% C3 combined with the parasitic capacitances of M1.
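Working backward from the formula and the operating points in the figures below (120 Ω gives 1 MHz, 12 MΩ gives 10 Hz), C3 evaluates to 10 nF; note that this is an inference from the stated frequencies, not a value read off the schematic. A quick check:

```c
#define VREF_U3 1.2    /* U3 reference, volts */
#define C3_F    10e-9  /* farads; inferred from the stated frequencies */

/* Sawtooth frequency in Hz for a given R6 in ohms: f = Vref/(R6*C3). */
double saw_freq_hz(double r6_ohms)
{
    return VREF_U3 / (r6_ohms * C3_F);
}
```

Each decade step in R6 (120 Ω, 1.2 kΩ, … 12 MΩ) then lands exactly one decade apart in frequency, which is how the circuit covers five decades with one resistor value change.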

Waveforms at several different frequencies are seen in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, and Figure 7.

Figure 2 10 Hz sawtooth for an R6 of 12 MΩ.

Figure 3 100 Hz sawtooth for an R6 of 1.2 MΩ.

Figure 4 1 kHz sawtooth for an R6 of 120 kΩ.

Figure 5 10 kHz sawtooth for an R6 of 12 kΩ.

Figure 6 100 kHz sawtooth for an R6 of 1.2 kΩ.

Figure 7 1 MHz sawtooth for an R6 of 120 Ω.

Figures 3 and 4 show near-ideal sawtooth waveforms. But Figure 2, with its 12 MΩ R6, shows that even when “off,” M1 has a non-infinite drain-source resistance which contributes to the non-linearity of the ramp. It’s also worth noting that although U3’s FB pin typically pulls less than 100 nA, that’s the current that the 12 MΩ R6 is intended to source, so waveform frequency accuracy for this value of resistor is problematic.

Figures 5, 6, and 7 show progressive increases in the effects of the 100 ns discharge time for C3 and of the finite recovery time of the op amp when its output saturates near the ground rail.

These circuits do not require any matched-value components. Accuracies are improved by the use of precision versions of R4, R6, R7, and U3, but the circuit’s operation does not necessitate these.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Self-oscillating sawtooth generator spans 5 decades of frequencies appeared first on EDN.

Full circle current loops: 4mA-20mA to 0mA-20mA

Wed, 02/11/2026 - 15:00

A topic that has recently drawn a lot of interest (!) and no fewer than four separate design articles (!!) here in Design Ideas, is the conversion of 0 to 20mA current sources into industrial standard 4mA to 20mA current loop signals. Here’s the list—so far—in reverse chronological order. Apologies if (as is quite possible) I’ve missed one—or N.

With so much energy already devoted to that one side of this well-tossed coin, it seemed only fair to pay a little attention to the flip side of the conversion function coin. Figure 1 shows the result. Its (fairly) simple circuit performs a precision conversion from 4-20mA to 0-20mA.  Here’s how it works.

Figure 1 The flip side of the current conversion coin: Iout = (IinR1 – 1.24 V)/R2 = 1.25(Iin – 4 mA).

Wow the engineering world with your unique design: Design Ideas Submission Guide

The core of the circuit is the voltage Vin = IinR1 = 1.24 V to 6.20 V developed by the 4-20mA input working into R1 and sensed by the Vref input of Z1. The principle in play is discussed in Figure 1 of “Precision programmable current sink.”

The resulting Z1 cathode current is (IinR1 – Vref)/R2 = 0 to 20 mA as Iin increases from 4 mA to 20 mA. Or it would be, if not for the phenomenon of Vref modulation by Z1 cathode voltage. The D1, Q2 cascode pair greatly attenuates this effect by holding Z1’s cathode voltage near zero and constant. It also extends Z1’s cathode voltage limit from an inadequate 7 V to the 30 V capability of Q2. Of course, a different choice for Q2 could extend it further. But if 30 V will do, the >1000 typical beta of the 5089 is good for accuracy.
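As a sanity check of the Figure 1 transfer function: for Iin·R1 to equal the 1.24-V reference at exactly 4 mA, R1 must be 310 Ω, and the 1.25 scale factor then implies R2 = 248 Ω. These values are inferred from the formula, not read from the schematic:

```c
#define VREF_Z1 1.24   /* TLV431B reference, volts */
#define R1_OHMS 310.0  /* inferred: 4 mA x R1 = 1.24 V */
#define R2_OHMS 248.0  /* inferred: R1/R2 = 1.25 */

/* Output current (amps) for a given loop input current (amps):
 * Iout = (Iin*R1 - Vref)/R2 = 1.25*(Iin - 4 mA). */
double iout_amps(double iin_amps)
{
    return (iin_amps * R1_OHMS - VREF_Z1) / R2_OHMS;
}
```

Plugging in the endpoints confirms the conversion: 4 mA in gives 0 mA out, and 20 mA in gives 20 mA out.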

Current booster Q1 extends Z1’s 15 mA max current limit while also reducing thermal effects. The net result holds Z1’s maximum power dissipation to single-digit milliwatts.

With 0.1% precision R1 and R2 and the ±0.5% tolerance TLV431B, better than 1% accuracy can be achieved with the untrimmed Figure 1 circuit. If this level of precision is still inadequate, manual post-assembly trim can be added with just two extra parts, as shown in Figure 2. Calibration is achieved with one pass.

  1. Set input current to 4.00 mA
  2. Adjust R4 for output current of ~50 µA.  Note this is only 0.25% of full-scale, so don’t worry about hitting it exactly. You probably won’t.
  3. Set input current to 20 mA
  4. Adjust R5 for an output current of 20 mA

Figure 2 R4 and R5 trims allow post-assembly precision optimization.

Input max overhead voltage is 8 V, output overhead is 9 V. Worst case (resistor limited) fault current with 24 V supply = 80 mA.

Readers may notice a capacitor labeled “Ca” in Figures 1 and 2. This is the “Ashu capacitance” that Design Idea (DI) contributor and current source circuitry expert Ashutosh Sapre discovered to be essential for frequency stability of the cascode topology. Thanks, Ashu!

And a closing note. Since the output scale factor is set by and inversely proportional to R2, if any full-scale other than 20 mA is desired, it’s easily achieved by an appropriate choice for R2.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Full circle current loops: 4mA-20mA to 0mA-20mA appeared first on EDN.

Are non-magnetic connectors in your future?

Wed, 02/11/2026 - 13:55

Many years ago, I overheard an engineer, with whom I had some project contact, make a casual remark about an RF connector situation, asking “what’s the big deal, it’s just a connector?” That statement was enough to make me wonder about his overall professional judgment.

Connectors may look simple but they are not, of course, as they must combine electrical requirements with mechanical issues and incorporate suitable materials for both body and contact. The materials and platings of their contacts are especially intricate as they blend metallurgical chemistry with other factors such as manufacturability, flexibility, resilience, and resistance objectives.

In recent years, there’s been an added demand on connectors: the need to be non-magnetic. Technically, this means the connector’s materials exhibit extremely low magnetic susceptibility, as they neither generate magnetic fields nor interact with external ones in any significant way.

Note that the term “magnetic connector” is also used for a connector/cable that relies on a magnetic force to both make and maintain a connection. In this arrangement, the plug and the socket have corresponding magnets or magnetic faces to make a self-aligning connection. They are designed for quick, easy, and, often, “break-away” disconnection to protect ports from wear and damage. But the magnetic/non-magnetic connectors here are not these.

Is it easy to visually distinguish a magnetic connector from a non-magnetic one? Maybe, maybe not. Some non-magnetic connectors have a different surface sheen or glow compared to conventional connectors, while others have a different color (Figure 1). Of course, some magnetic ones also have a different color depending on the finish, so it’s not a certainty. Fortunately, magnetism is easy enough to test.

Figure 1 These two RF connectors are non-magnetic; other than their color, they look like magnetic connectors. So, color alone is not a definitive indicator. Source: Rosenberger Group

Even minute amounts of magnetic “interference” can have significant consequences in high-frequency or magnetically sensitive systems. Therefore, the objective of non-magnetic component design is to make these parts “magnetically invisible,” so that they don’t distort the surrounding field or interfere with nearby sensors or measurement instruments.

This is especially crucial in environments where magnetic fields play an active role, such as MRI systems, particle accelerators, and quantum computers:

  • In MRI systems, magnetic components can distort the magnetic field lines, leading to degraded system performance, measurement inaccuracies, and artifacts in imaging results. In contrast, non-magnetic components minimize these disturbances by maintaining field uniformity.
  • In precision RF and microwave metrology, magnetic components can bias sensor readings or create unpredictable phase errors. For example, a magnetic connector near a current probe could influence the magnetic coupling, altering the measured waveform.
  • In systems ranging from scanning electron microscopes, where magnetic fields steer and focus the electron beam, to supercolliders, where superconducting magnets keep particles centered as they are accelerated, the magnetic field must be precisely shaped and controlled.
  • In the “hot” field of quantum computing, the qubits—the quantum bits that carry computational information—are extremely sensitive to external magnetic fields. Even minor magnetic impurities in nearby materials can cause decoherence, leading to computational errors or reduced qubit lifetime.

Non-magnetic connectors provide low-loss signal transmission and maintain stable performance across temperature cycles—without contributing to unwanted magnetic noise. In cryogenic systems such as these, even small amounts of magnetic interaction could invalidate experimental results.

A non-magnetic connector will typically have a low magnetic susceptibility of less than 10⁻⁵ (think back to Electromagnetics 101: susceptibility is a dimensionless ratio) and a magnetic field strength of less than 0.1 milligauss. That’s at least one to two orders of magnitude less than standard connectors.

Making the non-magnetic connector

It may seem that all that’s required to make a non-magnetic connector is to use non-magnetic material such as copper. If only it were that easy, as non-magnetic materials have very different mechanical and electrical attributes, which affect connector performance and consistency.

A connector has three elements: the body, usually made of nylon or an engineered plastic and not a magnetic consideration; the contact or terminal pin, usually phosphor bronze, beryllium copper, or brass; and the surface plating(s), which can be copper, nickel, gold, tin, silver, palladium, or other metal.

The plating is the largest challenge, as it’s critical to long-term performance of the contact surfaces. The magnetic metals that are the concern here are iron, cobalt, and nickel, notes the Samtec video “Exploring Non-Magnetic Interconnects” (Figure 2).

Figure 2 Trouble zone in the periodic table: these three elements are the source of most of the magnetic problems. Solid-state physics analysis explains why this is so. Source: Samtec Inc.

The simple solution would be to avoid using these metals and instead use brass or aluminum for connector bodies with silver or gold plating. However, that’s often undesirable for performance reasons.

There are other options. For example, Samtec uses a nickel-phosphorus electrodeposited coating that works as a barrier layer between the copper-alloy base metal and subsequent outer layers. This barrier is needed to prevent migration of the copper to the surface-layer gold or tin of the connector pins, which would degrade the performance of that layer.

But wait—isn’t nickel one of the troublesome metals? Yes, but that’s where metallurgists bring some technical “magic” to the story. By adding phosphorus to the nickel, the ferromagnetism associated with high-purity nickel is reduced. This is because the added phosphorus interrupts the nickel’s atomic dipoles, causing the metal to become non-magnetic.

This is not the only option for going non-magnetic. Palladium provides a non-magnetic layer but is a costly alternative to nickel. Associated fasteners can be made of austenitic stainless steel (grades 304 or 316), which is non-magnetic due to its unique crystalline structure.

Other possibilities include eliminating the nickel completely, though this requires thicker copper and gold layers to slow the migration; using a copper/tin/zinc alloy (Cu/Sn/Zn) called Tri-M3 as a barrier layer; or using nickel-tungsten (Ni/W, tradename Xtalics). The goal is to reduce the grain size to the nanoscale and thereby disrupt alignment of the magnetic domains.

There are several ways to devise and fabricate non-magnetic connectors. Doing so requires pure materials, deep physics insight, metallurgical expertise, and precise control of the production process. Assessing the non-magnetic characteristics involves sophisticated instrumentation to measure the magnetic permeability of the materials and connectors.

Each vendor has its own approach and a set of trade-offs regarding connector performance. Designers have many connector parameters to consider with respect to performance, solderability, number of mating cycles, supply-chain risk, and more.

The good news is that the increasing need for such connectors means they are no longer items available only from one or two specialty suppliers. Nearly every manufacturer of RF connectors also offers non-magnetic versions, so users have many options for their connector needs and bill of materials.

What’s the price difference between magnetic and non-magnetic connectors? A quick, unscientific sampling showed that the non-magnetic ones were two to three times the price of their magnetic counterparts. It may sound trite to say that cost is a secondary concern in the applications where they are needed, but that is likely true.

Have you ever used non-magnetic connectors? Was the need for them identified in advance, or was it recognized after regular connectors were used, with problems identified and then linked to the magnetic connectors?

Certainly, the next time someone says, “it’s just a connector,” you can offer them firm evidence that’s not the case at all.

Related Content

The post Are non-magnetic connectors in your future? appeared first on EDN.

555 VCO revisited

Tue, 02/10/2026 - 16:40

It is well known that a 555 timer in astable mode can be frequency modulated by applying a control voltage (CV) to pin 5. The schematic on the left of Figure 1 shows this classic 555 VCO. 

Figure 1 Classic VCO (left) and new 555 VCO variant (right), where Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

Modulating pin 5 has some severe drawbacks: The control voltage (CV) must be significantly > 0 V and < V+, otherwise the oscillation stops.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In contrast to a typical VCO, which outputs 0 Hz or Fmin @ CV=0 and reaches Fmax @ CVmax, the CV behavior of the classic 555 VCO is inverted and nonlinear. This is due to the modulation of the upper and lower Schmitt trigger thresholds, and pulse width changes with frequency. The useful tuning range Fmax/Fmin is limited to about 3.

Stephen Woodward’s “Can a free-running LMC555 VCO discharge its timing cap to zero?” shows some clever improvements: linear-in-pitch CV behavior and an extended 3-octave range, but it still suffers from other “pin 5” drawbacks.

The schematic on the right of Figure 1 shows a new variant of the 555 VCO. Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

A rising CV results in a higher frequency. CV=0 is allowed and generates Fmin.

The useful tuning range is >10 and can reach ≥100, with some caveats noted below.

Although it uses only two resistors and one capacitor, like the classic 555 astable configuration, it is a bit harder to understand. The basic mechanism, adding a fraction of the square-wave output voltage to the triangle voltage across C to raise the frequency, is described in my recent Design Idea (DI), “Wide-range tunable RC Schmitt trigger oscillator.”

There, I use a potentiometer to add a fraction of the output to the capacitor voltage.

In the new 555 VCO variant, the potentiometer voltage is replaced by an external CV, which is chopped by the 555 discharge output (pin 7).

When CV is 0, the voltage on the right side of C3 is also 0, and the VCO outputs Fmin. With rising CV, a square wave voltage between 0 V (pin 7 discharging) and CV (pin 7 open) appears on the right side of C3. Similar to my above-mentioned DI, this square wave voltage must always be smaller than the hysteresis voltage (555: Vh = V+/3), otherwise Fmax goes towards infinity. That is why you must watch your CVmax if you want to reach high Fmax/Fmin ratios.
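That constraint is easy to check numerically. In this sketch, `cv_max_ok` is my own helper, and `coupling` is a placeholder for the fraction of CV that actually reaches the timing node, which depends on the Figure 1 component values not reproduced here:

```python
# Sanity check for the CVmax constraint: the square wave injected through
# C3 must stay below the 555 hysteresis Vh = V+/3, or Fmax runs away.

def cv_max_ok(v_plus: float, cv: float, coupling: float = 1.0) -> bool:
    """True if the injected square wave stays inside the hysteresis window."""
    hysteresis = v_plus / 3.0
    return coupling * cv < hysteresis

print(cv_max_ok(12.0, 3.9))   # the prototype's 0-3.9 V sweep stays below Vh = 4 V
print(cv_max_ok(12.0, 5.0))   # this CV would exceed the hysteresis window
```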

Figure 2 shows a QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

Figure 2 QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

A prototype with component values from Figure 1 and V+ = 12 V has been breadboarded, and a rough frequency-versus-CV curve was measured and marked with a red dot in the QSPICE simulation in Figure 2.

Figure 3 shows a scope screenshot for Fmin. 

Figure 3 A scope screenshot for Fmin, CH1 (yellow) output voltage, CH2 (magenta) CV=0.

In conclusion, the new 555 VCO circuit overcomes some drawbacks of the classic version, like limited CV range, inverted CV/Hz behavior, and changing pulse width, without using more components. Unfortunately, it still shows nonlinear CV/Hz behavior. Maybe using a closed loop, with an opamp and a simple charge pump, can tame it by raising the chip count to 2.

Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.

Related Content

The post 555 VCO revisited appeared first on EDN.

Simplifying inductive wireless charging

Tue, 02/10/2026 - 15:00

What do e-bikes and laptops have in common? Both can be wirelessly charged by induction.

E-bikes and laptops both use lithium-ion batteries for power, chosen for their light weight, high energy density, and long lifespan. Both systems can be wirelessly recharged via the wireless power transfer (WPT) method that uses electromagnetic induction to transfer energy to the battery without cables.

For e-bikes, there is a wireless charging pad or inductive tile that e-bikes park on to transfer power. For induction charging, one coil is integrated into the static pad or tile (transmitter coil) and the other (the receiver coil) is situated on the bike, often in the kickstand. The charging pad’s coil, fed by AC, creates a magnetic field, which in turn produces current in the bike’s coil. This AC is then converted to DC, to power the bike’s battery.
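As a back-of-envelope illustration of the induction principle (not Microchip's design math; the mutual inductance, current, and frequency below are invented values), the receiver's open-circuit voltage follows Faraday's law, e(t) = −M·dI/dt:

```python
import math

# For a sinusoidal transmitter current I(t) = Ip * sin(2*pi*f*t) and
# mutual inductance M between the coils, the peak induced EMF is
# M * Ip * 2*pi*f. All values below are illustrative only.

def peak_emf(mutual_inductance_h, i_peak_a, freq_hz):
    """Peak open-circuit EMF in the receiver coil, in volts."""
    return mutual_inductance_h * i_peak_a * 2 * math.pi * freq_hz

# 5 uH mutual inductance, 2 A peak coil current, 120 kHz drive:
print(f"{peak_emf(5e-6, 2.0, 120e3):.1f} V peak")
```

Real designs regulate this through the resonant tank and rectifier stages, but the sketch shows why coil coupling (and hence Z-distance and alignment) dominates how much power can be transferred.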

The principle is the same for laptops, as well as a broad range of consumer and industrial devices, including small robots, drones, power tools, robotic vacuum cleaners, wireless routers, and lawnmowers.

Microchip provides a 300-W electromagnetic inductive wireless electric power transmission reference design that can be incorporated into any type of low-power consumer or industrial system for wireless charging (see block diagram in Figure 1). It consists of a Microchip WP300TX01 power transmitter (PTx) and Microchip WP300RX01 power receiver (PRx). The design operates with efficiency of over 90% at 300-W power and a Z-distance (the distance between pairing coils) of 5−10 mm.

Figure 1: Block diagram of the 300-W inductive power transfer reference design (Source: Microchip Technology Inc.)

The transmitter (Figure 2) is nominally powered from a 24-V rail and the receiver regulates the output voltage to nominal 24 V.

Figure 2: Block diagram of the power transmitter (Source: Microchip Technology Inc.)

The design’s operating DC input voltage range is 11 V to 37 V, with input overvoltage and undervoltage protection, as well as overcurrent and thermal protection via a PCB/coil temperature-monitoring functionality. Maximum receiver output current is 8.5 A, and the receiver output voltage is adjustable from 12 V to 36 V.

The design implements a Microchip proprietary protocol, developed after years of research and development and covered by patents granted in the U.S., that ensures reliable power transfer with high efficiency. The system also implements foreign object detection (FOD), a safety measure that avoids hazardous situations should a metallic object find its way into the vicinity of the charging field. Once FOD detects a metallic object near the charging zone, where the magnetic field is generated, it stops the power transfer.

The reference design incorporates this functionality on the main coil, ceasing power from the transmitter until the object is removed. FOD is performed by stopping four PWM drive signals, with four being the maximum to avoid stopping the charging entirely.

This reference design also detects some NFC/RFID cards and tags.

Transmitter and receiver

The WP300TX01 is a fixed-function device designed for wireless power transfer, as is the WP300RX01 chip, designed for receiving wireless power. The two are paired together for a maximum power transfer of 300 W.

The user can configure the input’s under- and overvoltage, as well as the input’s overcurrent and overpower. There are three outputs for general-purpose LEDs and multiple OLED screens, as well as five inputs for interface switches. The design enables OLED display pages to allow viewing and monitoring of live system parameters, and as with the input parameters, the OLED panel’s settings can be configured by the user.

The WP300RX01 device operates from 4.8 V to 5.2 V, in an ambient temperature between −40°C and 85°C. Like with the transmitter controller, this device provides overvoltage, undervoltage, overcurrent, overpower, and overtemperature protection, with added qualification of AEC-Q100 REVG Grade 3 (−40°C to 85°C), which refers to a device’s ability to function reliably within this ambient temperature range.

The reference design simplifies and accelerates WPT system design and eliminates the need to go through the certification process, as it has already been accredited with the CE certification, which signifies that a product meets all the necessary requirements of applicable EU directives and regulations.

Types of wireless charging

There are different types of wireless charging, including resonant, inductive, electric field coupling, and RF. Inductive charging for smartphones and other lower-power electronic devices is guided by the Qi open standard, introduced by the Wireless Power Consortium in 2010, to create a universal, interoperable charging concept for electronic devices.

The Qi open standard promotes interoperability, thus avoiding multiple chargers and cables, as well as market fragmentation into different proprietary solutions. Many manufacturers have adopted this standard in their products, including tech giants like Apple and Samsung.

Introduced in 2023, Qi 2.0 raises charging power for mobile devices to 15 W and is certified for interoperability and safety. Qi 2.0 devices feature magnetic attachment technology, which aligns devices and chargers perfectly for improved energy efficiency, faster and safer charging, and ease of use. Qi 2.x includes the Magnetic Power Profile (MPP) with an added operating frequency of 360 kHz. With MPP, a magnetic ring ensures the receiver’s coil aligns perfectly with the charger’s coil, thus improving power transfer and reducing heat.

Qi 2.2, released in June 2025, enables 25-W charging, building on the convenience and energy efficiency of Qi while improving the wireless charging time.

Simultaneous charging of two 15-W Qi receivers

In addition to its 300-W electromagnetic inductive wireless electric power transmission reference design reviewed earlier in this article, Microchip also offers the Qi2 dual-pad wireless power transmitter reference design. This dual-pad, multi-coil wireless power transmitter reference design enables simultaneous charging of two 15-W Qi receivers (see Figure 3).

At the heart of the design is a Microchip dsPIC33 digital-signal controller (DSC) that simultaneously controls both charging pads. The dual-pad design is compatible with the Qi 1.3 and Qi 2.x standards, as well as MPP and Extended Power Profile.

The hardware is reconfigurable and supports most transmitter topologies. In addition to MPP, it supports Baseline Power Profile for receivers to 5 W.

Figure 3: Block diagram of the Qi 2.0 dual-pad wireless power transmitter reference design (Source: Microchip Technology Inc.)

The MPP charging pad initiates charge with a 12-kHz inverter switching frequency but will shift to 360 kHz when connected to an MPP PRx. The dsPIC33CK DSC executes two charger instances. To facilitate support for different protocols, real-time decisions based on charging pad and receiver type are required.

The software-based design provides a high level of flexibility to optimize key features of the wireless power system, such as efficiency, charging area, Z-distance, and FOD. To support applications with a wide input voltage range, each PTx includes a front-end four-switch buck-boost (4SWBB) converter for power regulation. The 4SWBB connects to a full-bridge inverter for driving the resonant tank. On the MPP charger, additional resonant capacitor switch networks enable higher resonant frequency. An MP-A13 charger implements a similar coil select circuitry for energizing the coil with the strongest signal possible, enabling a wider area of placement.

This reference design is automotive-grade and includes CryptoAuthentication, hardware-based (on-chip) secure storage for cryptographic keys, to protect communication and data handling. In addition, the design includes a Trust Anchor TA100/TA010 secure storage subsystem. The dsPIC33CK device architecture also allows the integration of additional software stacks, such as automotive CAN stack or NFC stacks for tag detection.

It’s worth noting that the variable-input voltage, fixed-frequency power control topology implemented in the transmitter is ideal for systems that must meet stringent electromagnetic-interference and electromagnetic-compatibility requirements.

In addition to all these features, including FOD through calibrated power loss, the dual-charging reference design also provides measured quality factor/resonant frequency and ping open-air object detection; multiple fast-charge implementations, including for Apple and Samsung; and several receiver modulation types, such as AC capacitive and AC/DC resistive. For added safety, the design includes thermal power foldback and shutdown and overpower protection.

A UART-USB communication interface enables reporting and debugging of data packets, and LEDs indicate system status and coil selection. There is a reset switch and temp sensor inputs for added functionalities.

With the continuously evolving standards for Qi and unique new applications requiring higher-wattage wireless charging, there is plenty of opportunity for innovation and growth in the wireless charging space. Microchip experts can provide you with the right guidance for seamlessly bringing your wireless charging solution to market.

The post Simplifying inductive wireless charging appeared first on EDN.

TP-Link’s Kasa HS103: A smart plug with solid network connectivity

Mon, 02/09/2026 - 23:07

With Amazon’s smart plug teardown “in the books”, our engineer turns his attention to some TP-Link counterparts, this first one the best behaved of the bunch per hands-on testing results.

Two months back, I introduced you to several members of TP-Link’s Kasa and Tapo smart home product lines as successors to Belkin’s then-soon and now (at least as you read these words, a few weeks after I wrote them) defunct Wemo smart plug devices. I mentioned at the time that I’d had particularly good luck, from both initial setup and ongoing connectivity standpoints, with the Kasa HS103:

An example of which, I mentioned at the time, I’d shortly be tearing down both for standalone inspection purposes and subsequent comparison to the smaller but seemingly also functionally flakier Tapo EP10:

Today, I’ll be actualizing my HS103 teardown aspiration, with the EP10 analysis to follow in short order, hopefully sometime next month. What’s inside this inexpensive device, and is it any easier to disassemble than was Amazon’s Smart Plug, which I dissected last month?

Plain is appealing

Let’s find out. As usual, I’ll begin with some outer box shots of the four-pack containing today’s patient. You may call the packaging “boring”. I call it refreshingly simple. As well as recyclable.

Sorry, I couldn’t resist including that last one 😀.

Now for the device inside the box, beginning with a conceptual block diagram. Interestingly, although I’d mentioned back in December that TP-Link now specs the HS103 to handle a current draw of up to 15A, the four-pack (HS103P4) graphic on Amazon’s website still lists 12A max:

Its three-pack (HS103P3) graphic counterpart eliminates the current spec entirely, replacing it with the shadowy outline of an AC outlet set, which I suppose is one way to fix the issue!

And now for some real-life shots, as usual (and as with subsequent images) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

See that seam? I ‘spect that’ll be a key piece for solving the puzzle of the pathway to the insides:

And, last but not least, all those specs that the engineers out there reading this know and love, including the FCC certification ID (2AXJ4KP115):

Cracking (open) the case

Now to get inside. Although I earlier highlighted the topside seam, I decided to focus my spudger attention on the right side to start, specifically the already visible gap between the main chassis and the rubberized gasket ring:

Quickly realizing that I was indirectly just pushing the side plate (containing the multi-function switch) out of its normal place, I redirected my attention to it more directly:

Success, at least as a first step!

Now for that gasket…

At this point, however, we only have a visual tease at the insides:

Time for another Amazon-supplied conceptual diagram:

And now for the real thing. This junction overlap gave me a clue of how to start:

It wouldn’t be a proper teardown without at least a bit of collateral damage, yes?

Onward, I endure it all for you, dear readers:

Voilà:

Boring half first:

PCB constituent pieces

Now for the half we all really care about:

As with its Amazon smart plug predecessor, the analog and power portions are “vanilla”:

The off-white relay at far right on the main PCB, for example, is the HF32FV-16 from Hongfa. Perhaps the most interesting aspect of the analog-and-power subsystem, at least to me, is the sizeable fuse below the pass-through ground connection, which I hadn’t noticed in the Amazon-equivalent design (although perhaps I just overlooked it?). The digital mini-PCB abutting the relay, on the other hand, is where all the connectivity and control magic take place…

In the upper left corner is the multicolor LED whose glow (amber and/or blue, and either steady or blinking, depending on the operating mode of the moment) shines through the aforementioned translucent gasket when the switch is powered up (and not switched off):

Those two unpopulated eight-lead IC sites below it are…a titillating tease of what might be in a more advanced product variant? In the bottom left corner is the embedded 2.4 GHz Wi-Fi 1T1R antenna. And to its right is the “brains” of the operation at the other end of the antenna connection, Realtek’s RTL8710, which supports a complete TCP/IP “stack” and integrates a 166-MHz Arm Cortex-M3 processor core, 512 Kbytes of RAM, and 1 Mbyte of flash memory.

Stubborn solder

Speaking of power pass-throughs…what about the other side of the main PCB? The obvious first step is to remove the screw whose head you might have already noticed in the earlier shot:

But that wasn’t enough to get the PCB to budge out of the chassis, at least meaningfully:

Recall that in the Amazon smart plug design, not only the back panel’s ground pin but also its neutral blade pass through intact to the front panel slots, albeit with the latter also split off at the source to power the PCB via a separate wire. The line blade is the only one that only goes directly to the PCB, where it’s presumably switched prior to routing to the front panel load slot.

In this design, that same switching scheme may very well be the case. But this time the back panel neutral connection also routes solely to the PCB. Note the two beefy solder points on the main PCB, one directly above the screw location and the other to the right of its solder sibling. I was unable to get either (far from both) of them successfully unsoldered from above or snipped from below. And all I could discern on the underside of the PCB from peering through the gap were a few scattered additional passive components, anyway.

So, sorry, folks, I threw in the towel and gave up. I’m assuming that those two particular solder points, befitting the necessary stability not only electrically but also mechanically, i.e., physically, leveraged higher-temperature solid or silver solder that my iron just wasn’t up for. Or maybe I just wasn’t sufficiently patient to wait long enough for the solder to melt (hey, it’s happened before). Regardless, and as usual, I welcome your thoughts on what I was able to show you, or anything else related to this product and my teardown of it, for that matter, in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post TP-Link’s Kasa HS103: A smart plug with solid network connectivity appeared first on EDN.

Thumbwheel switches: Turning numbers into control

Sun, 02/08/2026 - 19:00

Thumbwheel switches may evoke early digital design, yet their compact precision and tactile feedback keep them indispensable. From setting circuit-board addresses to configuring embedded parameters, they translate simple rotations into reliable numeric codes.

Whether selecting device IDs, adjusting ranges, or defining system values, thumbwheel switches deliver a straightforward interface that endures across industrial, consumer, and embedded applications.

Thumbwheel switches (often abbreviated as TWS) offer a straightforward, tactile method for setting numerical values in electronic instruments and control systems. Each wheel is marked with digits, allowing users to rotate and lock in precise entries without complex circuitry or software.

Their mechanical reliability, clear visual indication, and ease of use have made them a staple in applications ranging from laboratory test equipment to industrial control panels. By combining compact design with intuitive operation, thumbwheel switches continue to serve as a practical solution where accuracy and simplicity are paramount.

Rolling vs. clicking: Choosing your digital dial

While both convert a physical turn into a digital signal, the choice between a thumbwheel and a push-wheel switch comes down to how you prefer to drive your data. The rotary thumbwheel is the high-speed option, featuring a serrated edge that you roll with your thumb to flick through numbers in a single, fluid motion—ideal for quick adjustments across a broad range.

In contrast, the push-wheel is the precision specialist; it keeps the wheel protected behind a window and uses dedicated ‘+’ and ‘−’ buttons to advance the value one crisp click at a time. While the thumbwheel offers intuitive speed, the push-wheel provides tactile certainty and protection against accidental bumps, making it the go-to for industrial settings where every digit counts.

Figure 1 Rotary thumbwheel and push-button thumbwheel switches adjust numerical inputs by rotation or precision clicks. Source: Author

Sidenote: Although rotary thumbwheel and push‑button thumbwheel (push-wheel) switches differ in operation—one using a rotating wheel, the other plus/minus buttons—the term thumbwheel is widely applied as an umbrella designation for both types of digital input switches in industry.

Switch communication mechanisms

Beneath the surface, these switches speak a specific digital language through their pin configurations, typically utilizing binary coded decimal (BCD) or hexadecimal (Hex) outputs to communicate with your controller.

A BCD switch is the standard for human-readable interfaces, cycling strictly from 0 to 9; it’s the perfect fit for decimal-based inputs like a kitchen timer or a thermostat setpoint. However, if your project requires more density, a hexadecimal switch utilizes the same four output pins to provide 16 distinct positions (0–9 and A–F).

Figure 2 Example maps TWS positions to BCD code chart using 8421 pin logic. Source: Author

While both rely on the same 8-4-2-1 weighted logic—where internal contacts bridge a common pin to specific data lines to represent a value—BCD keeps things simple for the end-user, whereas hexadecimal is the preferred choice for technical tasks like setting device addresses or selecting complex software modes in a space-saving format.

As a quick aside, the 8-4-2-1 weighted logic is the most common form of BCD representation. Each decimal digit (0–9) is encoded into a 4-bit binary number, where the bit positions carry weights of 8, 4, 2, and 1 from left to right (MSB to LSB).
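The 8-4-2-1 weighting takes only a few lines to express. A minimal sketch, with the function name my own:

```python
def thumbwheel_code(position: int) -> dict:
    """8-4-2-1 weighted code for one thumbwheel digit.
    BCD wheels use positions 0-9; hex wheels extend the same
    four output lines to 0-15 (displayed 0-9, A-F)."""
    if not 0 <= position <= 15:
        raise ValueError("one wheel encodes 0-15 at most")
    # Each weight line is HIGH when its bit is set in the position:
    return {w: bool(position & w) for w in (8, 4, 2, 1)}

print(thumbwheel_code(3))    # {8: False, 4: False, 2: True, 1: True}
print(thumbwheel_code(15))   # hex 'F': all four lines HIGH
```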

Thumbwheel switch output code variants

In practice, thumbwheel switches provide designers with multiple output code formats to match diverse digital system needs. The most common is BCD, where each decimal digit is encoded into a 4-bit binary value for straightforward interfacing with counters and microcontrollers.

Some switches offer decimal output, directly representing the digit without binary conversion. More specialized variants include BCD + Complement, which supplies both the normal BCD code and its inverted form for redundancy or error checking, and BCD Complement, which outputs only the inverted binary representation.

Certain models also support BCH hexadecimal coding, enabling representation of values 0–F in compact 4-bit hexadecimal form, useful in applications requiring extended coding beyond decimal digits. These output options give engineers flexibility to align switch signals with the encoding schemes of displays, logic circuits, or embedded systems, ensuring compatibility and efficient signal processing.

Thumbwheel switches: Key practical notes

In practice, each push-wheel/thumbwheel switch forms a single vertical segment, and multiple segments can be combined to build assemblies of varying sizes. The wheel or buttons enable digit selection from 0 through 9.

In a BCD thumbwheel switch, the common terminal (C) lies on one side, followed by weighted contacts for 8, 4, 2, and 1. Applying a small voltage, for instance 5 VDC, to the common allows the output value to be read by summing the weights of the contacts driven HIGH. For example, selecting digit 3 energizes contacts 1 and 2, both appearing at the common voltage.

Notably, diodes are incorporated into thumbwheel switches to isolate each weighted contact and prevent back-feeding between lines. This ensures that only the intended logic states contribute to the BCD output, protecting the switch and downstream logic from false readings or short circuits.

Figure 3 A practical example illustrates a BCD TWS with diodes installed. Source: Author

Equally important, pull-up and pull-down resistors establish defined default states for the contacts. A pull-up resistor ties an inactive line to logic HIGH, while a pull-down resistor ties it to logic LOW. Without these resistors, floating inputs could drift unpredictably, resulting in noisy or unstable outputs. Together, diodes and pull-up/pull-down resistors guarantee that BCD thumbwheel switches deliver clean, reliable, and unambiguous digital signals to the system.
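Reading the switch from a controller then reduces to summing the weights of the lines that read HIGH. A sketch, assuming the common is driven HIGH and pull-downs hold open lines LOW as described above (the pin mapping is illustrative, not tied to any particular microcontroller API):

```python
# Decode one BCD thumbwheel digit from its four weighted lines.
WEIGHTS = (8, 4, 2, 1)

def read_digit(lines: dict) -> int:
    """lines maps weight -> logic level (True = HIGH).
    Open (pulled-down) lines may simply be absent from the dict."""
    return sum(w for w in WEIGHTS if lines.get(w))

# Digit 3 selected: the 2- and 1-weighted contacts are closed.
print(read_digit({8: False, 4: False, 2: True, 1: True}))   # 3
```

With pull-ups instead of pull-downs (active-low wiring), the same routine applies after inverting each line.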

Keep note at this point that datasheets for thumbwheel switches consistently caution against exceeding their specified voltage and current limits. These devices are usually intended for logic interfacing, with ratings of only a few volts and currents in the milliampere range. Operating them beyond these limits can lead to contact wear, unstable outputs, or permanent failure. As emphasized in manufacturer specifications, designers should strictly adhere to the stated ratings and apply recommended best practices to ensure reliable performance.

Also, it’s critical to distinguish between the Switch Rating and the Carry Rating when selecting a thumbwheel switch. The Switch Rating defines the maximum current allowed while the dial is in motion; exceeding this causes electrical arcing that can erode the gold plating on the contacts. In contrast, the Carry Rating is significantly higher because it applies only when the dial is stationary and the contacts are firmly seated, eliminating the risk of arcs.

Figure 4 Datasheet snippet highlights the key specifications of a thumbwheel switch. Source: C&K Switches

So, to maximize component life when interfacing with PLC inputs, many engineers employ cold switching. This involves adjusting the thumbwheel only when the circuit is de-energized, allowing the switch to operate within its higher carry capacity rather than its lower switching capacity. This practice eliminates the risk of electrical arcing across the contacts during transitions, thereby preventing signal noise and extending the operational life of the switch.

The click that counts

That marks the end of this quick take on thumbwheel switches. While we have covered a flake of theory and some essential practical pointers, there is always more to explore—from advanced BCD logic to creative modern retrofits. These switches may be a “classic” technology, but their reliability and tactile feedback still offer unique value in a touchscreen world.

What is your take? Are you planning to use thumbwheels in your next project, or do you have a favorite “old-school” component that still outperforms modern alternatives? Leave a comment below and share your experience; I would love to hear how you are putting these switches to work.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

The post Thumbwheel switches: Turning numbers into control appeared first on EDN.

Pages