EDN Network

Voice of the Engineer

Full circle current loops: 4mA-20mA to 0mA-20mA

Wed, 02/11/2026 - 15:00

A topic that has recently drawn a lot of interest (!) and no fewer than four separate design articles (!!) here in Design Ideas, is the conversion of 0 to 20mA current sources into industrial standard 4mA to 20mA current loop signals. Here’s the list—so far—in reverse chronological order. Apologies if (as is quite possible) I’ve missed one—or N.

With so much energy already devoted to that one side of this well-tossed coin, it seemed only fair to pay a little attention to the flip side of the conversion coin. Figure 1 shows the result. Its (fairly) simple circuit performs a precision conversion from 4-20 mA to 0-20 mA. Here’s how it works.

Figure 1 The flip side of the current conversion coin: Iout = (IinR1 – 1.24 V)/R2 = 1.25(Iin – 4 mA).

Wow the engineering world with your unique design: Design Ideas Submission Guide

The core of the circuit is the voltage Vin = IinR1 = 1.24 V to 7.20 V developed by the 4-20 mA input working into R1 and sensed by the Vref input of Z1. The principle in play is discussed in Figure 1 of “Precision programmable current sink.”

The resulting Z1 cathode current is (IinR1 – Vref)/R2 = 0 to 20 mA as Iin increases from 4 mA to 20 mA. Or it would be, if not for the phenomenon of Vref modulation by Z1 cathode voltage. The D1, Q2 cascode pair greatly attenuates this effect by holding Z1’s cathode voltage near zero and constant. It also extends Z1’s cathode voltage limit from an inadequate 7 V to the 30 V capability of Q2. Of course, a different choice for Q2 could extend it further. But if 30 V will do, the >1000 typical beta of the 5089 is good for accuracy.
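As a quick numerical sanity check of that transfer function, the sketch below evaluates Iout = (IinR1 − Vref)/R2 at a few input currents. The resistor values are assumptions back-solved from the caption's gain equation (4 mA × R1 = 1.24 V gives R1 = 310 Ω, and R2 = R1/1.25 = 248 Ω); treat them as illustrative, not as schematic values.

```python
# Ideal transfer function of the 4-20 mA to 0-20 mA converter.
# R1 and R2 are ASSUMED values implied by Vref = 1.24 V and the
# stated 1.25x gain; check against the actual Figure 1 schematic.
VREF = 1.24        # TLV431 reference voltage, volts
R1 = 310.0         # ohms: 4 mA * 310 ohms = 1.24 V
R2 = 248.0         # ohms: R1 / 1.25

def i_out(i_in):
    """Iout = (Iin*R1 - Vref) / R2, all currents in amps."""
    return (i_in * R1 - VREF) / R2

for i_in_ma in (4, 12, 20):
    print(f"Iin = {i_in_ma:2d} mA -> Iout = {i_out(i_in_ma / 1000) * 1000:.2f} mA")
```

With these values, 4 mA maps to 0 mA and 20 mA maps to 20 mA, matching the stated 1.25× gain.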

Current booster Q1 extends Z1’s 15 mA max current limit while also reducing thermal effects. The net result holds Z1’s maximum power dissipation to single-digit milliwatts.

With 0.1% precision R1 and R2 and the ±0.5% tolerance TLV431B, better than 1% accuracy can be achieved with the untrimmed Figure 1 circuit. If this level of precision is still inadequate, manual post-assembly trim can be added with just two extra parts, as shown in Figure 2. Calibration is achieved with one pass.

  1. Set input current to 4.00 mA
  2. Adjust R4 for output current of ~50 µA.  Note this is only 0.25% of full-scale, so don’t worry about hitting it exactly. You probably won’t.
  3. Set input current to 20 mA
  4. Adjust R5 for an output current of 20 mA
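As a rough plausibility check of the "better than 1%" untrimmed-accuracy claim, a Monte Carlo sweep over the quoted tolerances (0.1% resistors, ±0.5% reference) can be sketched. The nominal R1/R2 values are the same illustrative assumptions as above, not schematic values.

```python
import random

# Monte Carlo of the UNTRIMMED converter's full-scale accuracy.
# R1/R2 nominals are assumed; tolerances are those quoted in the
# article: 0.1% resistors and the +/-0.5% TLV431B reference.
random.seed(1)
R1_NOM, R2_NOM, VREF_NOM = 310.0, 248.0, 1.24

def i_out(i_in, r1, r2, vref):
    return (i_in * r1 - vref) / r2

worst_pct = 0.0
for _ in range(20000):
    r1 = R1_NOM * (1 + random.uniform(-1e-3, 1e-3))
    r2 = R2_NOM * (1 + random.uniform(-1e-3, 1e-3))
    vref = VREF_NOM * (1 + random.uniform(-5e-3, 5e-3))
    err = i_out(0.020, r1, r2, vref) - 0.020   # full-scale error, amps
    worst_pct = max(worst_pct, abs(err) / 0.020 * 100)

print(f"worst full-scale error seen: {worst_pct:.2f}%")
```

The worst error stays comfortably under 1% of full scale, consistent with the claim.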

Figure 2 R4 and R5 trims allow post-assembly precision optimization.

Input max overhead voltage is 8 V, output overhead is 9 V. Worst case (resistor limited) fault current with 24 V supply = 80 mA.

Readers may notice a capacitor labeled “Ca” in Figures 1 and 2. This is the “Ashu capacitance” that Design Idea (DI) contributor and current source circuitry expert Ashutosh Sapre discovered to be essential for frequency stability of the cascode topology. Thanks, Ashu!

And a closing note. Since the output scale factor is set by and inversely proportional to R2, if any full-scale other than 20 mA is desired, it’s easily achieved by an appropriate choice for R2.
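That closing observation can be turned into a small helper: solve Iout(full-scale) = (Iin(full-scale)·R1 − Vref)/R2 for R2. R1 = 310 Ω is again an assumed, illustrative value.

```python
# Solve R2 for a desired full-scale output, per Iout = (Iin*R1 - Vref)/R2.
# R1 = 310 ohms is an ASSUMED value (4 mA * R1 = Vref = 1.24 V).
VREF, R1 = 1.24, 310.0

def r2_for_full_scale(i_out_fs, i_in_fs=0.020):
    """Return R2 (ohms) so that i_in_fs (amps) maps to i_out_fs (amps)."""
    return (i_in_fs * R1 - VREF) / i_out_fs

print(f"R2 for 20 mA full scale: {r2_for_full_scale(0.020):.0f} ohms")
print(f"R2 for 10 mA full scale: {r2_for_full_scale(0.010):.0f} ohms")
```

Halving the full-scale output doubles R2, as the inverse proportionality predicts.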

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Full circle current loops: 4mA-20mA to 0mA-20mA appeared first on EDN.

Are non-magnetic connectors in your future?

Wed, 02/11/2026 - 13:55

Many years ago, I overheard an engineer, with whom I had some project contact, make a casual remark about an RF connector situation, asking “what’s the big deal, it’s just a connector?” That statement was enough to make me wonder about his overall professional judgment.

Connectors may look simple but they are not, of course, as they must combine electrical requirements with mechanical issues and incorporate suitable materials for both body and contact. The materials and platings of their contacts are especially intricate as they blend metallurgical chemistry with other factors such as manufacturability, flexibility, resilience, and resistance objectives.

In recent years, there’s been an added demand on connectors: the need to be non-magnetic. Technically, this means the connector’s materials exhibit extremely low magnetic susceptibility, as they neither generate magnetic fields nor interact with external ones in any significant way.

Note that the term “magnetic connector” is also used for a connector/cable that relies on a magnetic force to both make and maintain a connection. In this arrangement, the plug and the socket have corresponding magnets or magnetic faces to make a self-aligning connection. They are designed for quick, easy, and, often, “break-away” disconnection to protect ports from wear and damage. But the magnetic/non-magnetic connectors here are not these.

Is it easy to visually distinguish a magnetic connector from a non-magnetic one? Maybe, maybe not. Some non-magnetic connectors have a different surface sheen or glow compared to conventional connectors, while others have different color (Figure 1). Of course, some magnetic ones also have a different color depending on the finish, so it’s not a certainty. Fortunately, magnetism is easy enough to test.

Figure 1 These two RF connectors are non-magnetic; other than their color, they look like magnetic connectors. So, color alone is not a definitive indicator. Source: Rosenberger Group

Even minute amounts of magnetic “interference” can have significant consequences in high-frequency or magnetically sensitive systems. Therefore, the objective of non-magnetic component design is to make these parts “magnetically invisible,” so they don’t distort the surrounding field or interfere with nearby sensors or measurement instruments.

This is especially crucial in environments where magnetic fields play an active role, such as MRI systems, particle accelerators, and quantum computers:

  • In MRI systems, magnetic components can distort the magnetic field lines, leading to degraded system performance, measurement inaccuracies, and artifacts in imaging results. In contrast, non-magnetic components minimize these disturbances by maintaining field uniformity.
  • In precision RF and microwave metrology, magnetic components can bias sensor readings or create unpredictable phase errors. For example, a magnetic connector near a current probe could influence the magnetic coupling, altering the measured waveform.
  • In systems ranging from scanning electron microscopes, where magnetic fields steer and focus the electron beam, to supercolliders, where superconducting magnets keep the particles centered as they are accelerated, the magnetic field must be precisely shaped and controlled.
  • In the “hot” field of quantum computing, the qubits—the quantum bits that carry computational information—are extremely sensitive to external magnetic fields. Even minor magnetic impurities in nearby materials can cause decoherence, leading to computational errors or reduced qubit lifetime.

Non-magnetic connectors provide low-loss signal transmission and maintain stable performance across temperature cycles, without contributing to unwanted magnetic noise. In these cryogenic systems, even small amounts of magnetic interaction could invalidate experimental results.

A non-magnetic connector will typically have a low magnetic susceptibility of less than 10⁻⁵ (think back to Electromagnetics 101: susceptibility is a dimensionless ratio) and a magnetic field strength of less than 0.1 milligauss. That’s at least one to two orders of magnitude less than standard connectors.
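For readers dusting off that Electromagnetics 101, the relationship is simply μr = 1 + χ. The snippet below applies the quoted 10⁻⁵ limit to a few approximate textbook susceptibility values (illustrative numbers, not vendor data).

```python
# Relative permeability from volume magnetic susceptibility: mu_r = 1 + chi.
# The 1e-5 limit is the figure quoted in the text; the chi values below
# are approximate textbook numbers, shown only for illustration.
NONMAG_CHI_LIMIT = 1e-5

def mu_r(chi):
    """Relative permeability for a material of susceptibility chi."""
    return 1.0 + chi

materials = {"copper": -9.7e-6, "aluminum": 2.2e-5, "pure nickel": 600.0}
for name, chi in materials.items():
    verdict = "meets" if abs(chi) < NONMAG_CHI_LIMIT else "exceeds"
    print(f"{name:12s} chi={chi:+.1e}  mu_r={mu_r(chi):.6f}  {verdict} the limit")
```

Note how even weakly paramagnetic aluminum misses the strict 10⁻⁵ limit, which hints at why material selection is harder than it looks.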

Making the non-magnetic connector

It may seem that all that’s required to make a non-magnetic connector is to use non-magnetic material such as copper. If only it were that easy, as non-magnetic materials have very different mechanical and electrical attributes, which affect connector performance and consistency.

A connector has three elements: the body, usually made of nylon or an engineered plastic and not a magnetic consideration; the contact or terminal pin, usually phosphor bronze, beryllium copper, or brass; and the surface plating(s), which can be copper, nickel, gold, tin, silver, palladium, or other metal.

The plating is the largest challenge, as it’s critical to long-term performance of the contact surfaces. The magnetic metals that are the concern here are iron, cobalt, and nickel, notes the Samtec video “Exploring Non-Magnetic Interconnects” (Figure 2).

Figure 2 Trouble zone in the periodic table: these three elements are the source of most of the magnetic problems. Solid-state physics analysis explains why this is so. Source: Samtec Inc.

The simple solution would be to avoid using these metals and instead use brass or aluminum for connector bodies with silver or gold plating. However, that’s often undesirable for performance reasons.

There are other options. For example, Samtec uses a nickel-phosphorus electrodeposited coating that works as a barrier layer between the copper-alloy base metal and subsequent outer layers. This barrier is needed to prevent migration of the copper to the surface-layer gold or tin of the connector pins, which would degrade the performance of that layer.

But wait—isn’t nickel one of the troublesome metals? Yes, but that’s where metallurgists bring some technical “magic” to the story. By adding phosphorus to the nickel, the ferromagnetism associated with high-purity nickel is reduced. This is because the added phosphorus interrupts the nickel’s atomic dipoles, causing the metal to become non-magnetic.

This is not the only option for going non-magnetic. Palladium provides a non-magnetic layer but is a costly alternative to nickel. Associated fasteners can be made of austenitic stainless steel (grades 304 or 316), which is non-magnetic due to its unique crystalline structure.

Other possibilities include eliminating the nickel completely, though this requires thicker copper and gold layers to slow the migration; using a copper/tin/zinc alloy (Cu/Sn/Zn) called Tri-M3 as a barrier layer; or using nickel-tungsten (Ni/W, tradename Xtalics). The goal is to reduce the grain size to nanoparticles and so disrupt the possibilities for alignment of the magnetic domains.

There are several ways to devise and fabricate non-magnetic connectors. Doing so requires pure materials, deep-physics insight, metallurgical expertise, and precise control of the production process. Assessing the non-magnetic characteristics involves sophisticated instrumentation to measure the magnetic permeability of the materials and connectors.

Each vendor has its own approach and a set of trade-offs regarding connector performance. Designers have many connector parameters to consider with respect to performance, solderability, number of mating cycles, supply-chain risk, and more.

The good news is that the increasing need for such connectors means they are no longer items available only from one or two specialty suppliers. Nearly every manufacturer of RF connectors also offers non-magnetic versions, so users have many options for their connector needs and bill of materials.

What’s the price difference between magnetic and non-magnetic connectors? A quick, unscientific sampling showed that the non-magnetic ones were two to three times the price of their magnetic counterparts. It may be trite to say that cost is a secondary concern in the applications where they are needed, but that is likely true.

Have you ever used non-magnetic connectors? Was the need for them identified in advance, or was it recognized after regular connectors were used, with problems identified and then linked to the magnetic connectors?

Certainly, the next time someone says, “it’s just a connector,” you can offer them firm evidence that’s not the case at all.

Related Content

The post Are non-magnetic connectors in your future? appeared first on EDN.

555 VCO revisited

Tue, 02/10/2026 - 16:40

It is well known that a 555 timer in astable mode can be frequency modulated by applying a control voltage (CV) to pin 5. The schematic on the left of Figure 1 shows this classic 555 VCO. 

Figure 1 Classic VCO (left) and new 555 VCO variant (right), where Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

Modulating pin 5 has some severe drawbacks: The control voltage (CV) must be significantly > 0 V and < V+, otherwise the oscillation stops.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In contrast to a typical VCO, which outputs 0 Hz or Fmin @ CV=0 and reaches Fmax @ CVmax, the CV behavior of the classic 555 VCO is inverted and nonlinear. This is due to the modulation of the upper and lower Schmitt trigger thresholds, and pulse width changes with frequency. The useful tuning range Fmax/Fmin is limited to about 3.
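For reference, the free-running (no CV) frequency of the classic 555 astable follows the familiar textbook relation, sketched here with arbitrary illustrative component values.

```python
# Classic 555 astable (no control voltage applied):
#   f ~= 1.44 / ((Ra + 2*Rb) * C), thresholds at V+/3 and 2V+/3,
#   duty = (Ra + Rb) / (Ra + 2*Rb)  (always > 50% in this basic form).
# Component values below are illustrative, not from the article.
def astable_freq(ra, rb, c):
    return 1.44 / ((ra + 2 * rb) * c)

def duty_cycle(ra, rb):
    return (ra + rb) / (ra + 2 * rb)

ra, rb, c = 10e3, 10e3, 10e-9   # 10 kOhm, 10 kOhm, 10 nF
print(f"f ~= {astable_freq(ra, rb, c):.0f} Hz, duty ~= {duty_cycle(ra, rb):.0%}")
```

This baseline makes the new variant's constant 50% duty cycle easier to appreciate.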

Stephen Woodward’s “Can a free-running LMC555 VCO discharge its timing cap to zero?” shows some clever improvements: linear-in-pitch CV behavior and an extended 3 octave range, but still suffers from other “pin 5” drawbacks.

The schematic on the right of Figure 1 shows a new variant of the 555 VCO. Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

A rising CV results in a higher frequency. CV=0 is allowed and generates Fmin.

The useful tuning range is >10 and can reach ≥100, with some caveats noted below.

Although it uses only two resistors and one capacitor, like the classic 555 astable configuration, it is a bit harder to understand. The basic function, adding a fraction of the square-wave output voltage to the triangle voltage across C to raise the frequency, is described in my recent Design Idea (DI), “Wide-range tunable RC Schmitt trigger oscillator.”

There, I use a potentiometer to add a fraction of the output to the capacitor voltage.

In the new 555 VCO variant, the potentiometer voltage is replaced by an external CV, which is chopped by the 555 discharge output (pin 7).

When CV is 0, the voltage on the right side of C3 is also 0, and the VCO outputs Fmin. With rising CV, a square wave voltage between 0 V (pin 7 discharging) and CV (pin 7 open) appears on the right side of C3. Similar to my above-mentioned DI, this square wave voltage must always be smaller than the hysteresis voltage (555: Vh = V+/3), otherwise Fmax goes towards infinity. That is why you must watch your CVmax if you want to reach high Fmax/Fmin ratios.
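To see why Fmax runs away, consider a deliberately simplified toy model (my construction, not the author's math): the injected square wave of amplitude k·CV effectively narrows the hysteresis window Vh, so frequency grows roughly as Vh/(Vh − k·CV) and diverges as k·CV approaches Vh. F_MIN and the coupling fraction k below are arbitrary assumptions.

```python
# Toy model (NOT from the article) of the Fmax divergence: the injected
# square wave of amplitude k*CV eats into the hysteresis window Vh = V+/3,
# leaving the cap less voltage to traverse, so frequency rises roughly as
#   f(CV) ~= F_MIN * Vh / (Vh - k*CV),   diverging as k*CV -> Vh.
VPLUS = 12.0
VH = VPLUS / 3          # 555 hysteresis window, volts
K = 0.8                 # assumed coupling fraction of CV reaching the cap node
F_MIN = 100.0           # assumed frequency at CV = 0, Hz

def f_of_cv(cv):
    if K * cv >= VH:
        raise ValueError("k*CV must stay below the hysteresis window")
    return F_MIN * VH / (VH - K * cv)

for cv in (0.0, 2.0, 4.0, 4.9):
    print(f"CV = {cv:3.1f} V -> f ~= {f_of_cv(cv):7.1f} Hz")
```

The model also reproduces the CV/Hz nonlinearity the conclusion mentions.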

Figure 2 shows a QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

Figure 2 QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

A prototype with component values from Figure 1 and V+ = 12 V has been breadboarded, and a rough frequency-versus-CV curve was measured and marked with a red dot in the QSPICE simulation in Figure 2.

Figure 3 shows a scope screenshot for Fmin. 

Figure 3 A scope screenshot for Fmin, CH1 (yellow) output voltage, CH2 (magenta) CV=0.

In conclusion, the new 555 VCO circuit overcomes some drawbacks of the classic version, like limited CV range, inverted CV/Hz behavior, and changing pulse width, without using more components. Unfortunately, it still shows nonlinear CV/Hz behavior. Maybe using a closed loop, with an opamp and a simple charge pump, can tame it by raising the chip count to 2.

Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.

Related Content

The post 555 VCO revisited appeared first on EDN.

Simplifying inductive wireless charging

Tue, 02/10/2026 - 15:00

What do e-bikes and laptops have in common? Both can be wirelessly charged by induction.

E-bikes and laptops both use lithium-ion batteries for power, chosen for their light weight, high energy density, and long lifespan. Both systems can be wirelessly recharged via the wireless power transfer (WPT) method that uses electromagnetic induction to transfer energy to the battery without cables.

For e-bikes, there is a wireless charging pad or inductive tile that e-bikes park on to transfer power. For induction charging, one coil is integrated into the static pad or tile (transmitter coil) and the other (the receiver coil) is situated on the bike, often in the kickstand. The charging pad’s coil, fed by AC, creates a magnetic field, which in turn produces current in the bike’s coil. This AC is then converted to DC, to power the bike’s battery.

The principle is the same for laptops, as well as a broad range of consumer and industrial devices, including small robots, drones, power tools, robotic vacuum cleaners, wireless routers, and lawnmowers.

Microchip provides a 300-W electromagnetic inductive wireless electric power transmission reference design that can be incorporated into any type of low-power consumer or industrial system for wireless charging (see block diagram in Figure 1). It consists of a Microchip WP300TX01 power transmitter (PTx) and Microchip WP300RX01 power receiver (PRx). The design operates with efficiency of over 90% at 300-W power and a Z-distance (the distance between pairing coils) of 5−10 mm.

Figure 1: Block diagram of the 300-W inductive power transfer reference design (Source: Microchip Technology Inc.)

The transmitter (Figure 2) is nominally powered from a 24-V rail and the receiver regulates the output voltage to nominal 24 V.

Figure 2: Block diagram of the power transmitter (Source: Microchip Technology Inc.)

The design’s operating DC input voltage range is 11 V to 37 V, with input overvoltage and undervoltage protection, as well as overcurrent and thermal protection via a PCB/coil temperature-monitoring functionality. Maximum receiver output current is 8.5 A, and the receiver output voltage is adjustable from 12 V to 36 V.
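Those limits lend themselves to a quick design-rule check. The sketch below simply encodes the numbers quoted above; the function and parameter names are illustrative, not part of any Microchip API.

```python
# Sanity-check a proposed operating point against the limits quoted for
# the reference design: 11-37 V input, 12-36 V output, 8.5 A max output,
# and the 300-W rating. Names here are illustrative only.
LIMITS = {"vin": (11.0, 37.0), "vout": (12.0, 36.0),
          "iout_max": 8.5, "pout_max": 300.0}

def check_operating_point(vin, vout, iout):
    """Return a list of limit violations (empty list = all within limits)."""
    errors = []
    if not LIMITS["vin"][0] <= vin <= LIMITS["vin"][1]:
        errors.append(f"Vin {vin} V outside {LIMITS['vin']}")
    if not LIMITS["vout"][0] <= vout <= LIMITS["vout"][1]:
        errors.append(f"Vout {vout} V outside {LIMITS['vout']}")
    if iout > LIMITS["iout_max"]:
        errors.append(f"Iout {iout} A exceeds {LIMITS['iout_max']} A")
    if vout * iout > LIMITS["pout_max"]:
        errors.append(f"Pout {vout * iout:.0f} W exceeds {LIMITS['pout_max']} W")
    return errors

print(check_operating_point(24.0, 24.0, 8.0))   # nominal 24 V point, 192 W
print(check_operating_point(24.0, 36.0, 8.5))   # 306 W: trips overpower
```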

The design implements a Microchip proprietary protocol, developed after years of research and development and, with patents granted in the U.S., ensuring reliable power transfer with high efficiency. The system also implements foreign object detection (FOD), a safety measure that avoids hazardous situations should a metallic object find its way in the vicinity of the charging field. Once the FOD detects a metallic object near the charging zone, where the magnetic field is generated, it stops the power transfer.

The reference design incorporates this functionality on the main coil, ceasing power from the transmitter until the object is removed. FOD is performed by stopping four PWM drive signals, with four being the maximum to avoid stopping the charging entirely.

This reference design also detects some NFC/RFID cards and tags.

Transmitter and receiver

The WP300TX01 is a fixed-function device designed for wireless power transfer, as is the WP300RX01 chip, designed for receiving wireless power. The two are paired together for a maximum power transfer of 300 W.

The user can configure the input’s under- and overvoltage, as well as the input’s overcurrent and overpower. There are three outputs for general-purpose LEDs and multiple OLED screens, as well as five inputs for interface switches. The design enables OLED display pages to allow viewing and monitoring of live system parameters, and as with the input parameters, the OLED panel’s settings can be configured by the user.

The WP300RX01 device operates from 4.8 V to 5.2 V, in an ambient temperature between −40°C and 85°C. Like with the transmitter controller, this device provides overvoltage, undervoltage, overcurrent, overpower, and overtemperature protection, with added qualification of AEC-Q100 REVG Grade 3 (−40°C to 85°C), which refers to a device’s ability to function reliably within this ambient temperature range.

The reference design simplifies and accelerates WPT system design and eliminates the need to go through the certification process, as it has already been accredited with the CE certification, which signifies that a product meets all the necessary requirements of applicable EU directives and regulations.

Types of wireless charging

There are different types of wireless charging, including resonant, inductive, electric field coupling, and RF. Inductive charging for smartphones and other lower-power electronic devices is guided by the Qi open standard, introduced by the Wireless Power Consortium in 2010, to create a universal, interoperable charging concept for electronic devices.

The Qi open standard promotes interoperability, thus avoiding multiple chargers and cables, as well as market fragmentation into different proprietary solutions. Many manufacturers have adopted this standard in their products, including tech giants like Apple and Samsung.

Introduced in 2023, Qi 2.0 raises charging for mobile devices to 15 W, certified for interoperability and safety. Qi 2.0 devices feature magnetic attachment technology, which aligns devices and chargers perfectly for improved energy efficiency, faster and safer charging, and ease of use. Qi 2.X includes the Magnetic Power Profile (MPP) with an added operating frequency of 360 kHz. With MPP, a magnetic ring ensures the receiver’s coil aligns perfectly with the charger’s coil, thus improving power transfer and reducing heat.

Qi 2.2, released in June 2025, enables 25-W charging, building on the convenience and energy efficiency of Qi while improving the wireless charging time.

Simultaneous charging of two 15-W Qi receivers

In addition to its 300-W electromagnetic inductive wireless electric power transmission reference design reviewed earlier in this article, Microchip also offers the Qi2 dual-pad wireless power transmitter reference design. This dual-pad, multi-coil wireless power transmitter reference design enables simultaneous charging of two 15-W Qi receivers (see Figure 3).

At the heart of the design is a Microchip dsPIC33 digital-signal controller (DSC) that simultaneously controls both charging pads. The dual-pad design is compatible with the Qi 1.3 and Qi 2.x standards, as well as MPP and Extended Power Profile.

The hardware is reconfigurable and supports most transmitter topologies. In addition to MPP, it supports Baseline Power Profile for receivers to 5 W.

Figure 3: Block diagram of the Qi 2.0 dual-pad wireless power transmitter reference design (Source: Microchip Technology Inc.)

The MPP charging pad initiates charge with a 12-kHz inverter switching frequency but will shift to 360 kHz when connected to an MPP PRx. The dsPIC33CK DSC executes two charger instances. To facilitate support for different protocols, real-time decisions based on charging pad and receiver type are required.

The software-based design provides a high level of flexibility to optimize key features of the wireless power system, such as efficiency, charging area, Z-distance, and FOD. To support applications with a wide input voltage range, each PTx includes a front-end four-switch buck-boost (4SWBB) converter for power regulation. The 4SWBB connects to a full-bridge inverter for driving the resonant tank. On the MPP charger, additional resonant capacitor switch networks enable higher resonant frequency. An MP-A13 charger implements a similar coil select circuitry for energizing the coil with the strongest signal possible, enabling a wider area of placement.
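The reason a change in operating frequency implies switching resonant capacitors falls out of the series-tank relation f0 = 1/(2π√(LC)). The coil inductance below is an assumed, illustrative value, not a Microchip spec.

```python
import math

# Series-resonant tank: f0 = 1 / (2*pi*sqrt(L*C)), so the capacitance
# needed for a target frequency is C = 1 / ((2*pi*f0)^2 * L).
# L = 10 uH is an ASSUMED, illustrative coil inductance.
L = 10e-6

def tank_cap(f0):
    """Capacitance (farads) that resonates L at frequency f0 (hertz)."""
    return 1.0 / ((2 * math.pi * f0) ** 2 * L)

for f0 in (128e3, 360e3):
    print(f"f0 = {f0/1e3:.0f} kHz -> C ~= {tank_cap(f0)*1e9:.0f} nF")
```

Moving from the low-100s of kHz up to MPP's 360 kHz shrinks the required capacitance by roughly an order of magnitude, hence the switched capacitor networks.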

This reference design is automotive-grade and includes CryptoAuthentication, hardware-based (on-chip) secure storage for cryptographic keys, to protect communication and data handling. In addition, the design includes a Trust Anchor TA100/TA010 secure storage subsystem. The dsPIC33CK device architecture also allows the integration of additional software stacks, such as automotive CAN stack or NFC stacks for tag detection.

It’s worth noting that the variable-input voltage, fixed-frequency power control topology implemented in the transmitter is ideal for systems that must meet stringent electromagnetic-interference and electromagnetic-compatibility requirements.

In addition to all these features, including FOD through calibrated power loss, the dual-charging reference design also provides measured quality factor/resonant frequency and ping open-air object detection; multiple fast-charge implementations, including for Apple and Samsung; and several receiver modulation types, such as AC capacitive and AC/DC resistive. For added safety, the design includes thermal power foldback and shutdown and overpower protection.

A UART-USB communication interface enables reporting and debugging of data packets, and LEDs indicate system status and coil selection. There is a reset switch and temp sensor inputs for added functionalities.

With the continuously evolving standards for Qi and unique new applications requiring higher-wattage wireless charging, there is plenty of opportunity for innovation and growth in the wireless charging space. Microchip experts can provide you with the right guidance for seamlessly bringing your wireless charging solution to market.


The post Simplifying inductive wireless charging appeared first on EDN.

TP-Link’s Kasa HS103: A smart plug with solid network connectivity

Mon, 02/09/2026 - 23:07

With Amazon’s smart plug teardown “in the books”, our engineer turns his attention to some TP-Link counterparts, this first one the best behaved of the bunch per hands-on testing results.

Two months back, I introduced you to several members of TP-Link’s Kasa and Tapo smart home product lines as successors to Belkin’s then-soon and now (at least as you read these words, a few weeks after I wrote them) defunct Wemo smart plug devices. I mentioned at the time that I’d had particularly good luck, from both initial setup and ongoing connectivity standpoints, with the Kasa HS103:

An example of which, I mentioned at the time, I’d shortly be tearing down, both for standalone inspection purposes and for subsequent comparison to the smaller but seemingly also functionally flakier Tapo EP10:

Today, I’ll be actualizing my HS103 teardown aspiration, with the EP10 analysis to follow in short order, hopefully sometime next month. What’s inside this inexpensive device, and is it any easier to disassemble than was Amazon’s Smart Plug, which I dissected last month?

Plain is appealing

Let’s find out. As usual, I’ll begin with some outer box shots of the four-pack containing today’s patient. You may call the packaging “boring”. I call it refreshingly simple. As well as recyclable.

Sorry, I couldn’t resist including that last one 😀.

Now for the device inside the box, beginning with a conceptual block diagram. Interestingly, although I’d mentioned back in December that TP-Link now specs the HS103 to handle a current draw of up to 15A, the four-pack (HS103P4) graphic on Amazon’s website still lists 12A max:

Its three-pack (HS103P3) graphic counterpart eliminates the current spec entirely, replacing it with the shadowy outline of an AC outlet set, which I suppose is one way to fix the issue!

And now for some real-life shots, as usual (and as with subsequent images) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

See that seam? I ‘spect that’ll be a key piece for solving the puzzle of the pathway to the insides:

And, last but not least, all those specs that the engineers out there reading this know and love, including the FCC certification ID (2AXJ4KP115):

Cracking (open) the case

Now to get inside. Although I earlier highlighted the topside seam, I decided to focus my spudger attention on the right side to start, specifically the already visible gap between the main chassis and the rubberized gasket ring:

Quickly realizing that I was indirectly just pushing the side plate (containing the multi-function switch) out of its normal place, I redirected my attention to it more directly:

Success, at least as a first step!

Now for that gasket…

At this point, however, we only have a visual tease at the insides:

Time for another Amazon-supplied conceptual diagram:

And now for the real thing. This junction overlap gave me a clue of how to start:

It wouldn’t be a proper teardown without at least a bit of collateral damage, yes?

Onward, I endure it all for you, dear readers:

Voilà:

Boring half first:

PCB constituent pieces

Now for the half we all really care about:

As with its Amazon smart plug predecessor, the analog and power portions are “vanilla”:

The off-white relay at far right on the main PCB, for example, is the HF32FV-16 from Hongfa. Perhaps the most interesting aspect of the analog-and-power subsystem, at least to me, is the sizeable fuse below the pass-through ground connection, which I hadn’t noticed in the Amazon-equivalent design (although perhaps I just overlooked it?). The digital mini-PCB abutting the relay, on the other hand, is where all the connectivity and control magic takes place…

In the upper left corner is the multicolor LED whose glow (amber and/or blue, and either steady or blinking, depending on the operating mode of the moment) shines through the aforementioned translucent gasket when the switch is powered up (and not switched off):

Those two unpopulated eight-lead IC sites below it are…a titillating tease of what might be in a more advanced product variant? In the bottom left corner is the embedded 2.4 GHz Wi-Fi 1T1R antenna. And to its right is the “brains” of the operation at the other end of the antenna connection, Realtek’s RTL8710, which supports a complete TCP/IP “stack” and integrates a 166 MHz Arm Cortex M3 processor core, 512 Kbytes of RAM and 1 Mbyte of flash memory.

Stubborn solder

Speaking of power pass-throughs…what about the other side of the main PCB? The obvious first step is to remove the screw whose head you might have already noticed in the earlier shot:

But that wasn’t enough to get the PCB to budge out of the chassis, at least meaningfully:

Recall that in the Amazon smart plug design, not only the back panel’s ground pin but also its neutral blade pass through intact to the front panel slots, albeit with the latter also split off at the source to power the PCB via a separate wire. The line blade is the only one that only goes directly to the PCB, where it’s presumably switched prior to routing to the front panel load slot.

In this design, that same switching scheme may very well be the case. But this time the back panel neutral connection also routes solely to the PCB. Note the two beefy solder points on the main PCB, one directly above the screw location and the other to the right of its solder sibling. I was unable to get either of them (let alone both) successfully unsoldered from above or snipped from below. And all I could discern on the underside of the PCB from peering through the gap were a few scattered additional passive components, anyway.

So, sorry, folks, I threw in the towel and gave up. I’m assuming that those two particular solder points, befitting the necessary stability not only electrically but also mechanically, i.e., physically, leveraged higher-temperature solid or silver solder that my iron just wasn’t up for. Or maybe I just wasn’t sufficiently patient to wait long enough for the solder to melt (hey, it’s happened before). Regardless, and as usual, I welcome your thoughts on what I was able to show you, or anything else related to this product and my teardown of it, for that matter, in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post TP-Link’s Kasa HS103: A smart plug with solid network connectivity appeared first on EDN.

Thumbwheel switches: Turning numbers into control

Sun, 02/08/2026 - 19:00

Thumbwheel switches may evoke early digital design, yet their compact precision and tactile feedback keep them indispensable. From setting circuit-board addresses to configuring embedded parameters, they translate simple rotations into reliable numeric codes.

Whether selecting device IDs, adjusting ranges, or defining system values, thumbwheel switches deliver a straightforward interface that endures across industrial, consumer, and embedded applications.

Thumbwheel switches (often abbreviated as TWS) offer a straightforward, tactile method for setting numerical values in electronic instruments and control systems. Each wheel is marked with digits, allowing users to rotate and lock in precise entries without complex circuitry or software.

Their mechanical reliability, clear visual indication, and ease of use have made them a staple in applications ranging from laboratory test equipment to industrial control panels. By combining compact design with intuitive operation, thumbwheel switches continue to serve as a practical solution where accuracy and simplicity are paramount.

Rolling vs. clicking: Choosing your digital dial

While both convert a physical turn into a digital signal, the choice between a thumbwheel and a push-wheel switch comes down to how you prefer to drive your data. The rotary thumbwheel is the high-speed option, featuring a serrated edge that you roll with your thumb to flick through numbers in a single, fluid motion—ideal for quick adjustments across a broad range.

In contrast, the push-wheel is the precision specialist; it keeps the wheel protected behind a window and uses dedicated ‘+’ and ‘−’ buttons to advance the value one crisp click at a time. While the thumbwheel offers intuitive speed, the push-wheel provides tactile certainty and protection against accidental bumps, making it the go-to for industrial settings where every digit counts.

Figure 1 Rotary thumbwheel and push-button thumbwheel switches adjust numerical inputs by rotation or precision clicks. Source: Author

Sidenote: Although rotary thumbwheel and push‑button thumbwheel (push-wheel) switches differ in operation—one using a rotating wheel, the other plus/minus buttons—the term thumbwheel is widely applied as an umbrella designation for both types of digital input switches in industry.

Switch communication mechanisms

Beneath the surface, these switches speak a specific digital language through their pin configurations, typically utilizing binary coded decimal (BCD) or hexadecimal (Hex) outputs to communicate with your controller.

A BCD switch is the standard for human-readable interfaces, cycling strictly from 0 to 9; it’s the perfect fit for decimal-based inputs like a kitchen timer or a thermostat setpoint. However, if your project requires more density, a hexadecimal switch utilizes the same four output pins to provide 16 distinct positions (0–9 and A–F).

Figure 2 An example mapping of TWS positions to a BCD code chart using 8-4-2-1 pin logic. Source: Author

While both rely on the same 8-4-2-1 weighted logic—where internal contacts bridge a common pin to specific data lines to represent a value—BCD keeps things simple for the end-user, whereas hexadecimal is the preferred choice for technical tasks like setting device addresses or selecting complex software modes in a space-saving format.

As a quick aside, the 8-4-2-1 weighted logic is the most common form of BCD representation. Each decimal digit (0–9) is encoded into a 4-bit binary number, where the bit positions carry weights of 8, 4, 2, and 1 from left to right (MSB to LSB).
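
The 8-4-2-1 weighting described above can be sketched in a few lines of Python (the function name is mine, purely for illustration):

```python
def bcd_encode(digit):
    """Encode a decimal digit (0-9) as 8-4-2-1 weighted bits, MSB first."""
    if not 0 <= digit <= 9:
        raise ValueError("BCD encodes single decimal digits 0-9 only")
    # Extract bits with weights 8, 4, 2, 1 (i.e., bit positions 3..0)
    return [(digit >> shift) & 1 for shift in (3, 2, 1, 0)]

# Digit 3 -> 0011: only the 2- and 1-weighted lines are active
print(bcd_encode(3))  # [0, 0, 1, 1]
print(bcd_encode(9))  # [1, 0, 0, 1]
```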

Thumbwheel switch output code variants

In practice, thumbwheel switches provide designers with multiple output code formats to match diverse digital system needs. The most common is BCD, where each decimal digit is encoded into a 4-bit binary value for straightforward interfacing with counters and microcontrollers.

Some switches offer decimal output, directly representing the digit without binary conversion. More specialized variants include BCD + Complement, which supplies both the normal BCD code and its inverted form for redundancy or error checking, and BCD Complement, which outputs only the inverted binary representation.

Certain models also support binary-coded hexadecimal (BCH) coding, enabling representation of values 0–F in compact 4-bit form, useful in applications requiring extended coding beyond decimal digits. These output options give engineers the flexibility to align switch signals with the encoding schemes of displays, logic circuits, or embedded systems, ensuring compatibility and efficient signal processing.
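
A quick sketch of how the same 4-bit nibble yields several of these variants (the function name and range checks are mine, for illustration only):

```python
def code_variants(position, hex_mode=False):
    """Return the normal and complemented 4-bit codes for a switch position.

    BCD positions run 0-9; hexadecimal (BCH) positions run 0-15 (0-F).
    "BCD + Complement" switches bring out both values on separate pins.
    """
    limit = 15 if hex_mode else 9
    if not 0 <= position <= limit:
        raise ValueError("position out of range for this coding")
    normal = position & 0b1111
    complement = ~position & 0b1111  # the inverted (complement) output
    return normal, complement

print(code_variants(5))                   # (5, 10): 0101 and its inverse 1010
print(code_variants(0xF, hex_mode=True))  # (15, 0)
```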

Thumbwheel switches: Key practical notes

In practice, each push-wheel/thumbwheel switch forms a single vertical segment, and multiple segments can be combined to build assemblies of varying sizes. The wheel or buttons enable digit selection from 0 through 9.

In a BCD thumbwheel switch, the common terminal (C) lies on one side, followed by weighted contacts for 8, 4, 2, and 1. Applying a small voltage, for instance 5 VDC, to the common allows the output value to be read by summing the weights of the contacts driven HIGH. For example, selecting digit 3 energizes contacts 1 and 2, both appearing at the common voltage.
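
The read-back described above amounts to summing the weights of the contacts found HIGH; a minimal sketch (a toy model, not firmware for any particular controller):

```python
WEIGHTS = {"8": 8, "4": 4, "2": 2, "1": 1}

def decode_bcd_contacts(high_contacts):
    """Recover the selected digit by summing the weights of HIGH contacts."""
    digit = sum(WEIGHTS[c] for c in high_contacts)
    if digit > 9:
        raise ValueError("invalid BCD combination")
    return digit

# Selecting digit 3 energizes the 1 and 2 contacts:
print(decode_bcd_contacts({"1", "2"}))  # 3
```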

Notably, diodes are incorporated into thumbwheel switches to isolate each weighted contact and prevent back-feeding between lines. This ensures that only the intended logic states contribute to the BCD output, protecting the switch and downstream logic from false readings or short circuits.

Figure 3 A practical example illustrates a BCD TWS with diodes installed. Source: Author

Equally important, pull-up and pull-down resistors establish defined default states for the contacts. A pull-up resistor ties an inactive line to logic HIGH, while a pull-down resistor ties it to logic LOW. Without these resistors, floating inputs could drift unpredictably, resulting in noisy or unstable outputs. Together, diodes and pull-up/pull-down resistors guarantee that BCD thumbwheel switches deliver clean, reliable, and unambiguous digital signals to the system.
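
One common wiring variation (my example, not drawn from any specific datasheet) grounds the common terminal and pulls each data line up: a closed contact then reads LOW, so the controller inverts the raw nibble before applying the weights.

```python
def decode_with_pullups(raw_bits):
    """Decode a BCD switch wired with pull-ups and its common tied to ground.

    With pull-up resistors, idle lines read 1 and closed contacts pull their
    line to 0, so the raw reading is inverted before summing the weights.
    raw_bits is (b8, b4, b2, b1) as read at the controller inputs.
    """
    weights = (8, 4, 2, 1)
    digit = sum(w for w, bit in zip(weights, raw_bits) if bit == 0)
    if digit > 9:
        raise ValueError("invalid BCD reading")
    return digit

# Digit 3 selected: the 2 and 1 lines are pulled LOW, 8 and 4 stay HIGH
print(decode_with_pullups((1, 1, 0, 0)))  # 3
```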

Note at this point that datasheets for thumbwheel switches consistently caution against exceeding their specified voltage and current limits. These devices are usually intended for logic interfacing, with ratings of only a few volts and currents in the milliampere range. Operating them beyond these limits can lead to contact wear, unstable outputs, or permanent failure. As emphasized in manufacturer specifications, designers should strictly adhere to the stated ratings and apply recommended best practices to ensure reliable performance.

Also, it’s critical to distinguish between the Switch Rating and the Carry Rating when selecting a thumbwheel switch. The Switch Rating defines the maximum current allowed while the dial is in motion; exceeding this causes electrical arcing that can erode the gold plating on the contacts. In contrast, the Carry Rating is significantly higher because it applies only when the dial is stationary and the contacts are firmly seated, eliminating the risk of arcs.

Figure 4 Datasheet snippet highlights the key specifications of a thumbwheel switch. Source: C&K Switches

So, to maximize component life when interfacing with PLC inputs, many engineers employ cold switching. This involves adjusting the thumbwheel only when the circuit is de-energized, allowing the switch to operate within its higher carry capacity rather than its lower switching capacity. This practice eliminates the risk of electrical arcing across the contacts during transitions, thereby preventing signal noise and extending the operational life of the switch.

The click that counts

That marks the end of this quick take on thumbwheel switches. While we have covered a sliver of theory and some essential practical pointers, there is always more to explore—from advanced BCD logic to creative modern retrofits. These switches may be a “classic” technology, but their reliability and tactile feedback still offer unique value in a touchscreen world.

What is your take? Are you planning to use thumbwheels in your next project, or do you have a favorite “old-school” component that still outperforms modern alternatives? Leave a comment below and share your experience; I would love to hear how you are putting these switches to work.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

The post Thumbwheel switches: Turning numbers into control appeared first on EDN.

Powering AI at scale: How HVDC and GaN are transforming hyperscale data centers

Fri, 02/06/2026 - 15:00

As AI workloads and hyperscale data centers drive unprecedented power demand, operators face mounting pressure to improve efficiency and reduce grid strain. High-voltage direct current (HVDC) distribution is emerging as a critical solution, and GlobalFoundries is enabling this transition with advanced GaN technology that enables high-density, high-efficiency power conversion. This perspective explores how GF’s semiconductor innovations will power the next generation of sustainable, large-scale data centers.

The rapid adoption of AI across consumer and commercial markets is driving unprecedented investment in high-performance computing and networking. As AI models scale and proliferate across diverse applications, demand for compute power keeps rising. To meet this need, the power consumption of heterogeneous processing units (XPUs) is projected to climb from today’s 1–1.5 kW to more than 5 kW by 2030 [1]. This surge in power requirements is driving demand for denser, more efficient power conversion solutions from the grid to the core. 

Emerging power distribution architecture

Distribution of 415-480 VAC within data centers entails a patchwork of electrical conversions: AC power must be converted to DC to support battery backup, then back to AC for further distribution. Each conversion stage wastes energy, and as AI systems scale up, that loss becomes too costly to absorb. A key focus area for the industry is high-voltage direct current (HVDC) distribution, which reduces conduction losses and the number of conversion stages in large clusters.

The main proposed solutions are either ±400 V (Mt. Diablo) or 800 V (Kyber) DC power delivery.  The first phase of HVDC solutions will still rely on 415-480 VAC distribution with a sidecar power rack, thereby reducing some power conversion losses.  This step has fewer power conversion stages than existing systems and reduces conduction losses by delivering HVDC to the adjacent compute rack.  However, to further eliminate power conversion stages, data centers will distribute HVDC throughout the cluster.  Additional energy savings will be achieved by implementing the 800V DC-DC conversion within the system trays in compute racks, reducing busbar conduction losses. This deployment will require a significant step up in density and efficiency. The past few months have seen hyperscalers specifying their general needs [2] of higher rack-power capacity, power efficiency, density, and scalability, as well as vendors responding with proposed converter topologies and considerations to meet those needs [3].  

This marks real progress, and it’s already clear that the key performance goals of the solutions are within reach. The benefits of these next-generation power delivery architectures include:

  1. High conversion ratio – Conversion from HVDC distribution to very low XPU core voltage with as few stages as possible requires a large step-down ratio (>1000:1).  Solutions based on wide bandgap semiconductors such as gallium nitride (GaN) achieve higher conversion ratios due to higher breakdown voltages and reduced conduction and switching losses compared to silicon-based solutions.
  2. Significant density increase compared to current power supply unit (PSU) designs – The increase in XPU power consumption does not come with a corresponding increase in available volume for power electronics. Computer and network architectures impose constraints on physical distance, necessitating more compact power components. Thanks to their excellent switching characteristics, GaN power semiconductors enable higher-frequency operation, allowing smaller energy-storage components such as capacitors, inductors, and transformers.
  3. Extremely high efficiency at scale – The extraordinary growth in data center power consumption means that power losses in every stage translate directly to energy costs. Thus, the conversion ratio and high density must be achieved without sacrificing efficiency. GaN devices offer the best figures of merit—including lower specific on-resistance, minimal switching charge, and better high-frequency FOM—which result in the highest efficiency for a given ratio and density.
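
As a back-of-envelope check on the >1000:1 figure (the 800-V bus and the 16x/64x first-stage ratios come from the industry discussion above; the 0.7-V core rail is my own assumed value):

```python
bus_voltage = 800.0   # HVDC distribution voltage (Kyber-style), volts
core_voltage = 0.7    # assumed XPU core rail, volts

total_ratio = bus_voltage / core_voltage
print(f"Overall step-down: {total_ratio:.0f}:1")  # well above 1000:1

# With a fixed-ratio first stage at the ratios under discussion:
for first_stage in (16, 64):
    intermediate = bus_voltage / first_stage
    remaining = intermediate / core_voltage
    print(f"{first_stage}x stage -> {intermediate:.1f} V intermediate bus, "
          f"{remaining:.0f}:1 left for the point-of-load stage")
```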

How GaN is driving data center innovation

The data center market demands not only advanced performance but also exceptional quality and reliability. Increasingly, industry consensus points to Power GaN as the key enabling technology for HVDC solutions in data centers. 

GlobalFoundries is developing GaN platforms to support this transition, including HV (650 V) and MV (200 V and below) devices. These platforms will offer industry-leading figures of merit (FoM) with the reliability and ruggedness that hyperscalers require to deploy AI at scale.

Opportunities for scaling HVDC architectures

Looking ahead to broad solution deployment, there are several major opportunities that remain, each offering room to drive the next wave of innovation on topology selection and device optimization:

  • Establishing clear safety and isolation requirements: To date, safety and isolation have been discussed only in broad terms, but HVDC architectures will require isolation. Achieving safety and isolation compliance through spacing (creepage and clearance) can come at a high cost to density, while achieving compliance mechanically via conformal coating or potting can degrade thermal performance—both of which complicate serviceability of systems in the field. Defining the right balance represents a major opportunity for innovation in materials, mechanical structures, and system architecture.
  • Defining EMI/EMC requirements for scaling next-generation data centers: With data centers subject to strict electromagnetic interference (EMI) and electromagnetic compatibility (EMC) standards, the industry must determine how topologies can meet them. If bulky filter components are required to scale HVDC solutions, this may prevent density targets from being met, potentially forcing alternate topology selection. It is crucial that these requirements scale to multi-GW data centers, allowing clusters to interoperate; otherwise, compatibility and performance are at risk.
  • Converging on optimal step-down ratios and system-level power conversion strategy: Will the industry converge to a 16x or 64x step-down, or, as the HVDC converter moves into the system tray, will system designers optimize the power conversion stages around different voltage levels?  If solutions are customized based on system-level optimization, this will likely lead to a need for regulated HVDC converters as well as unregulated fixed-ratio converters, with the two types having distinct transient requirements. These tradeoffs will affect overall system design in the future, from rack input to XPU.

Enabling scalable, efficient, and sustainable data centers

As these solutions evolve and mature, GF will collaborate with our customers to optimize device development, integrate driver and sensor functionality with power devices, and heterogeneously integrate power devices with additional components.  

It is encouraging that, along with the activity around converter feasibility, industry participants are also extremely active in pursuing open standards, such as the Open Compute Project’s Power Distribution sub-project, which will provide a roadmap for scalable, interoperable HVDC architectures. 

Adoption of HVDC architectures allows operators and OEMs to convert efficiency gains directly into XPU and network-cluster performance—delivering more usable floating-point operations per second (FLOPs) from the same energy footprint while reducing energy losses, lowering operational costs, improving rack-level density, and advancing sustainability goals through more efficient power delivery. Meeting these stringent demands at massive scale requires solutions that keep interoperability and long-term ecosystem value a top priority.

Notes:

  1. Future AI processors said to consume up to 15,360 watts of power — massive power draw will demand exotic immersion and embedded cooling tech | Tom’s Hardware
  2. Asset Share – NVDAM 
  3. Swing Aboard the 800-V Bus: NVIDIA’s AI Power Architecture and the Chips to Drive It | Electronic Design

Related Content

The post Powering AI at scale: How HVDC and GaN are transforming hyperscale data centers appeared first on EDN.

Redriver boosts automotive camera link reliability

Thu, 02/05/2026 - 23:29

Diodes’ PI2MEQX2505Q MIPI D-PHY ReDriver supports data rates up to 2.5 Gbps, making it well suited for ADAS and automotive camera monitoring systems. It provides one clock lane and four differential data lanes. Each data lane features programmable receiver equalization, output swing, and pre-emphasis, configurable via I²C or pin-strap. This helps optimize performance and reduce intersymbol interference across different physical media.

Compliant with MIPI D-PHY 1.2, the device regenerates D-PHY signals for CSI-2 and DSI interfaces over PCB traces, connectors, and cables. This extends trace lengths while minimizing power consumption and maintaining low latency. Activity-detection circuitry allows the redriver to enter a lower-power mode during Ultra-Low Power State (ULPS) and low-power (LP) states.

The PI2MEQX2505Q is AEC-Q100, Grade 2 qualified and operates from a 1.8 V supply over a temperature range of –40 °C to +105 °C. It comes in a compact 3.5 × 5.5 mm W-QFN3555-28/SWP package, supporting high-density channel routing.

Available now, the PI2MEQX2505Q is priced at $0.88 each in lots of 3500 units.

PI2MEQX2505Q product page 

Diodes

The post Redriver boosts automotive camera link reliability appeared first on EDN.

R&S expands mid-range spectrum analysis to 44 GHz

Thu, 02/05/2026 - 23:29

R&S has launched the 44-GHz FPL1044 spectrum analyzer along with a 40-MHz real-time spectrum analysis (RTSA) option for the entire FPL family. With the RTSA option, the FPL1044 can perform real-time measurements across its full frequency range from 10 Hz to 44 GHz. According to R&S, the FPL1044 is the first mid-range spectrum analyzer capable of reaching 44 GHz, making high-frequency testing more accessible.

The FPL1044 is the only model in the FPL family to offer a DC coupling option, enabling analysis of very low-frequency signals starting at 10 Hz. This capability extends measurement coverage from near-DC through the Ka-band. Compact and lightweight, the analyzer occupies minimal bench space, while an optional battery pack allows for portable operation.

The 26.5-GHz to 44-GHz frequency range is particularly important for aerospace and defense applications, including satellite communications, radar, and radio navigation. In these environments, the FPL1044 supports system verification, production quality control, and on-site repair and maintenance of high-frequency components such as filters, amplifiers, and traveling-wave tubes.

Configure and request a quote for any FPL spectrum analyzer, including the FPL1044, using the product page link below.

FPL series product page

Rohde & Schwarz 

The post R&S expands mid-range spectrum analysis to 44 GHz appeared first on EDN.

GaN transistor cuts losses and heat

Thu, 02/05/2026 - 23:28

EPC’s first Gen 7 eGaN power transistor, the 40-V EPC2366, delivers up to 3× better performance than equivalent silicon MOSFETs. Now entering mass production, the device features a typical RDS(ON) of 0.84 mΩ and an optimized RDS(ON) × QG figure of merit of 12.6 mΩ·nC. This enables the EPC2366 to reduce conduction and switching losses while improving thermal performance.

Designed for high-efficiency, high-density power systems, the EPC2366 is suitable for synchronous rectifiers, DC/DC converters, AI server power supplies, and motor drives. It is rated for a drain-to-source voltage (VDS) up to 40 V, transient voltages up to 48 V, and a continuous drain current (ID) of 88 A, with pulsed currents reaching 360 A.

To assist design-in and evaluation, the EPC90167 half-bridge development board integrates two EPC2366 transistors in a low-parasitic layout, with PWM drive signals and flexible input modes.

The EPC2366 comes in a compact 3.3×2.6-mm PQFN package and is priced at $1.56 each in quantities of 3000 units. The EPC90167 development board is available for $211.65 each.

EPC2366 product page 

Efficient Power Conversion 

The post GaN transistor cuts losses and heat appeared first on EDN.

High-density power module fits compact AI servers

Thu, 02/05/2026 - 23:28

Enabling higher power delivery within the same rack space, Microchip’s MCPF1525 power module delivers up to 25 A per device and can be stacked to 200 A. The module integrates a 16-VIN buck converter with programmable PMBus and I²C control, making it well suited for powering PCIe switches and high-compute MPU applications used in AI deployments.

With dimensions of approximately 6.8×7.65×3.82 mm, the MCPF1525’s vertical construction maximizes board space, providing up to a 40% reduction in board area compared to alternative solutions. For improved reliability, the device incorporates multiple diagnostic functions reported over PMBus, including overtemperature, overcurrent, and overvoltage protection to help prevent undetected faults.

Housed in a thermally enhanced package, the MCPF1525 supports a junction temperature range from −40°C to +125°C. An embedded EEPROM enables users to program the default power-up configuration.

The MCPF1525 is available now, priced at $12 each in 1000-unit quantities.

MCPF1525 product page 

Microchip Technology 

The post High-density power module fits compact AI servers appeared first on EDN.

Vishay shrinks inductors, keeps full performance

Thu, 02/05/2026 - 23:28

Four power inductors in 0806 and 1210 case sizes from Vishay offer improved performance for commercial and automotive applications. Compared to competing inductors with similar performance, the devices use considerably less board space—up to 64% smaller in 0806 and 11% smaller in 1210 packages. They also support higher operating temperatures, a wider range of inductance values, and lower DC resistance to enhance efficiency.

The commercial IHLL-0806AZ-1Z and IHLL-1210AB-1Z have terminals plated only on the bottom, enabling smaller land patterns for more compact board spacing. The automotive-grade IHLP-0806AB-5A and IHLP-1210ABEZ-5A feature terminals plated on the bottom and sides, allowing a solder fillet that strengthens the mount against mechanical shock and simplifies joint inspection. These automotive devices are AEC-Q200 qualified for high reliability and elevated operating temperatures.

Samples and production quantities of the IHLL-0806AZ-1Z, IHLL-1210AB-1Z, IHLP-0806AB-5A, and IHLP-1210ABEZ-5A inductors are available now, with lead times of 10 weeks.

Vishay Intertechnology 

The post Vishay shrinks inductors, keeps full performance appeared first on EDN.

Added-conductor and directional audio interconnects: Real-life benefits?

Thu, 02/05/2026 - 15:00

Does vendor-claimed audio cable directionality make theoretical sense, let alone deliver real-life perceptible benefit? And what about the number and organization of in-cable conductors?

Within my recently published two-part series on the equipment comprising my newly upgraded home office audio setup, I intentionally left out one key piece of the puzzle: the cables that interconnect the various pieces of gear in each “stack”. Come to think of it, I also didn’t mention the speaker wire that mates each monoblock power amplifier to its companion speaker:

but that’s a hype-vs-reality quagmire all its own! Maybe someday…for now, I’ll tease you with the brief revelation that it’s a 2m (6.6 foot) GearIT 14 AWG banana-plug-based set purchased in like-new condition from Amazon’s Resale (Warehouse) section for $17.18:

Conventional recommendations

Back to today’s quagmire 😉 When spanning the equipment placed on consecutive shelves of each audio “stack”, the 6” cable length is ideal. For the balanced interconnect-based setup located to my left on my desk:

wherein all of the connectors are XLR in form factor, I’ve found Coluber’s cables, available in a variety of connection-differentiating colors as well as longer lengths as needed, to be excellent:

This particular setup, now based on a Drop + Grace Design SDAC Balanced DAC:

initially instead used Topping’s D10 Balanced DAC:

whose analog line-out connections were ¼” TRSs, not XLRs:

In that earlier gear configuration, I’d relied on a set of WJSTN Suanqi TRS-to-XLR cables to tether the DAC to the headphone amp (the Schiit Lokius equalizer wasn’t yet in the picture, either):

What about the unbalanced (i.e., single-ended) interconnection-based setup to my right?

In this case, I’ve mixed-and-matched RCA-to-RCA cables from WJSTN:

and equally highly-rated CNCESS:

depending on whose were lower-priced at any particular purchase point in time.

A pricier (albeit discounted) experiment

Speaking of economic factors, as regular readers may recall from past case studies (not to mention my allusion by example earlier in this writeup), I regularly troll Amazon’s Resale (formerly Warehouse) site for bargains. Last summer, I came across a set of “acceptable” condition (i.e., packaging-deficient) 0.5-foot-long RCA cables from a company called (believe it or not) “World’s Best Cables”:

and titled as follows:

0.5 Foot RCA Cable Pair – WBC-PRO-Quad Ultra-Silent, Ultra-Flexible, Star-Quad Audiophile & Pro-Grade Audio Interconnect Cable with Amphenol ACPR Gold RCA Plugs – Gray & Red Jacket – Directional

Say that ten times real fast, and without pausing to catch a breath midway through!

They normally sell for $30.99 a pair on the company’s Amazon storefront, which is pretty “salty” considering that the CNCESS and WJSTN alternatives are a third that amount ($10.99 for two). That said, these were discounted to $18.82, nearly half off the original price tag. I took the bait.

Like I said earlier, “packaging-deficient”.

How’d they sound? Fine. But no different, at least in my setup and to my ears, than the brand new but still notably less expensive CNCESS and WJSTN ones. This was the case in spite of the fact that among other things they were claimed to be “directional”, the concluding word in the voluminous product title and the one that had caught my ever-curious eye in the first place.

Directional details

As I’ve groused about plenty of times in the past, the audio world is rife with “snake oil” claims of products and techniques that supposedly improve sound quality but in actuality only succeed in extracting excess cash from naïve enthusiasts’ wallets and bank accounts. My longstanding favorite snake-oil theory, albeit one that mostly only wasted adoptees’ time, was that applying a green magic marker to the edges of an optical audio disc would improve its sound by reducing laser reflections.

Further magnifying this madness, at greater expense to devotees in both damaged discs and drained wallets, was the practice of beveling (i.e., shaving down) those same edges:

I’ve also come across plenty of cables, both signal and power, and in various shapes and sizes, that claim to benefit from directionality induced by their implementations. Such directionality is, of course, forced on the implementation by USB cables, for example, which (for example, redux) have a Type A connector on one end for tethering to a computer and a Type B connector on the other end for mating with, say, a printer. Both types are shown at right in the following photo:

Conceptually, the same thing occurs with power cords, of course, such as this one:

But that’s not what I’m referring to. I’m talking about claimed directionality introduced within the cable itself—by the materials used to construct it, the conductors within it, etc. For cables that carry digital signals, this is pure hogwash as far as I can tell. But for analog cables like the one I’m showcasing today? There may, it turns out, be some reality behind the hype, depending on what kind of signal the cable’s carrying and for what span length, along with the ambient EMI characteristics of the operating environment. Quoting from the Amazon product page:

Each cable is configured as a “Directional” cable and as such the shield of the cable is connected to the ground only at the signal emitting end. This allows the shield of the cable to work as a Faraday’s cage which rejects external noise that could degrade the signal. The cable will work even if you plug it the opposite direction, but this will diminish the noise rejection capabilities of the directional design. This enhances the noise rejection capabilities of our cables over our competition.

To clarify: when I said earlier that I discerned no difference in the sound between the “World’s Best Cables” interconnect and its more cost-effective alternatives, I was referring to:

  • Short cable spans (6”) transporting
  • Reasonably high-level innate signals (specifically line level, 0.3V to 1.2V)

Would an alternative RCA cable set carrying, for example, the lower magnitude output signal of a turntable cartridge—moving magnet (3-7 mV) and especially moving coil (0.2-0.6 mV)—to a phono preamp be more prone to the corrupting effects of environmentally induced noise, especially in high EMI (with an overlapping spectral profile, to be precise) environments and across long cable runs? Low-level microphone outputs are another example. And would shielding—especially if directional in its nature—be of benefit in such scenarios?
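
Rough arithmetic (using the midpoints of the levels quoted above and a nominal 1-V line level) shows just how much less headroom those cartridge signals have over any induced noise:

```python
import math

def db_below(v_ref, v_sig):
    """Voltage ratio in dB: how far v_sig sits below v_ref."""
    return 20 * math.log10(v_ref / v_sig)

line_level = 1.0        # volts, within the quoted 0.3-1.2 V line-level range
mm_cartridge = 0.005    # 5 mV, mid 3-7 mV moving-magnet range
mc_cartridge = 0.0004   # 0.4 mV, mid 0.2-0.6 mV moving-coil range

print(f"MM output sits {db_below(line_level, mm_cartridge):.0f} dB below line level")
print(f"MC output sits {db_below(line_level, mc_cartridge):.0f} dB below line level")
```

A fixed amount of induced noise that would be inaudible 46 dB or 68 dB down at line level can dominate a cartridge-level signal, which is why shielding matters far more there.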

Twist, double up and fan out

Truth be told, I’d originally planned to stop at this point and turn those questions over to you for your thoughts (both on them specifically and on the topic more generally) in the comments. But in looking again at the conceptual cable construction diagram this morning while prepping to dive into writing:

I noticed not only the shielding, which I’d seen before, but that there were four conductors within it. Each RCA connector is normally associated with only two wires, corresponding to the positive and negative per-channel connections to the audio source and destination devices.

Four total wires might make sense, for example, if we were looking at the middle of a unified cable, with both channels’ dual conductors combined within a common shield. And it might also make sense (albeit seemingly still with one spare wire) if the per-channel cable connections were balanced. But these are RCA cables: unbalanced, i.e. single-ended, and only one cable per channel. So why four conductors inside, instead of just two?

My first clue as to the answer came when I then looked at the top of this graphic (table, to be precise):

Followed by my noticing the words “WBC-PRO-Quad” and “Star-Quad” in the aforementioned wordy product title. My subsequent research suggests that the term “Star Quad” isn’t unique to “World’s Best Cables”, although it typically refers to mic and other balanced interconnect applications:

The star quad design is a unique configuration of wires used in microphone cables. Unlike traditional cables that consist of two conductors, the star quad design incorporates four conductors. These conductors are twisted together in a specific pattern, resembling a star shape, hence the name. The layout of the conductors in a star quad cable significantly reduces electromagnetic interference (EMI), resulting in cleaner and more reliable audio transmission.

And how do two connections at each cable end translate into four conductors within the cable?

Star-quad microphone cables are specially designed to provide immunity to magnetic fields. These microphone cables have 4 conductors arranged in a precise geometry that provides immunity to the magnetic fields which easily pass through the outer RF shield. Four conductors are arranged in a four pointed star configuration and the wires at opposite points of the star are connected together at each end of the cable.

When the cables are wired in this manner, the + and – legs of the balanced connection each receive equal induced voltages from any magnetic field. This configuration balances the interference to the + and – legs of the balanced connection. The key to the success of star-quad cable is the fact that the magnetically-induced interference is exactly the same on the + and – legs of the balanced connection. The star-quad geometry of the cable keeps the interference signal identical on both legs no matter what direction the magnetic interference is coming from.
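The geometric cancellation described in the quoted passage can be sketched with a toy first-order model (purely illustrative, not an electromagnetic cable simulation): treat each conductor's induced voltage as proportional to its offset from the cable axis, projected onto the direction of the interfering field gradient, then compare star-quad legs (opposite conductors tied together) against a conventional twin pair.

```python
import math

# Toy first-order model: the voltage induced in a conductor by a magnetic
# field gradient is proportional to that conductor's offset from the cable
# axis, projected onto the gradient direction. Units are arbitrary.
def induced(offset, grad_angle):
    x, y = offset
    return x * math.cos(grad_angle) + y * math.sin(grad_angle)

# Star-quad geometry: conductors at the four points of a square, with
# opposite points tied together to form the + and - legs.
star_plus = [(1, 0), (-1, 0)]
star_minus = [(0, 1), (0, -1)]

# Conventional twin-conductor geometry for comparison.
twin_plus, twin_minus = (1, 0), (-1, 0)

for k in range(8):  # sweep the interference direction around the cable
    a = k * math.pi / 4
    quad_diff = (sum(induced(p, a) for p in star_plus) / 2
                 - sum(induced(p, a) for p in star_minus) / 2)
    twin_diff = induced(twin_plus, a) - induced(twin_minus, a)
    # Tying opposite conductors together nulls the first-order pickup on
    # each leg, so the star-quad differential is zero for every field
    # direction; the twin pair's differential (2*cos(a)) generally is not.
    assert abs(quad_diff) < 1e-12
print("star-quad differential pickup cancels to first order")
```

This only models the first-order (gradient) component of the field; real cables also rely on twisting and shielding for higher-order and electric-field rejection.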

In the “a picture paints a thousand words” spirit, this additional graphic might be of assistance:

Along with this lab equipment- and measurement-flush video:

But again, we’re still talking about long-length, low-level balanced cables and connections used in high-EMI operating environments. How, if at all, do these results translate to the few-inch, comparatively high-level and low-EMI applications that my “World’s Best Cables” target, especially considering that they also include heavily hyped directional shielding? Even audiophiles have mixed opinions on the topic.

And so, at this point, after twice as long a write-up as originally planned, I will now stop and turn these and my prior questions over to you for your thoughts (both on them specifically and on the topic more generally) in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Added-conductor and directional audio interconnects: Real-life benefits? appeared first on EDN.

Designing energy-efficient AI chips: Why power must be an early consideration

Thu, 02/05/2026 - 09:54

AI’s demand for compute is rapidly outpacing current power infrastructure. According to Goldman Sachs Global Institute, upcoming server designs will push this even further, requiring enough electricity to power over 1,000 homes in a space the size of a filing cabinet.

As workloads continue to scale, energy efficiency is now as critical as raw performance. For engineers developing AI silicon, the central challenge is no longer just about accelerating models, but maximizing performance for every watt consumed.

A shift in design philosophy

The escalation of AI workloads is forcing a paradigm shift in chip development. Energy optimization must be addressed from the earliest design phases, influencing decisions throughout concept, architecture, and production. Considering thermal behavior, memory traffic, architectural tradeoffs, and workload characteristics as part of a single power-aware design flow enables the development of systems that scale efficiently without breaching data center or edge-device energy limits.

Traditionally, design teams have primarily focused on timing and performance, only addressing energy consumption at the end of the process. Today, that strategy is outdated.

Synopsys customer surveys across numerous design projects show that addressing power at the architectural stage can yield 30-50% savings, whereas waiting until implementation typically achieves only marginal improvements. Early exploration enables decisions about architecture, memory hierarchy, and workload mapping before they become fixed, allowing trade-offs that balance throughput, area, and efficiency.

Architecture analysis as a power tool

Before RTL is finalized, a comprehensive power analysis flow helps reveal where energy is being spent and what trade-offs exist between voltage, frequency, and performance. Architectural modeling enables rapid evaluation of techniques—such as dynamic voltage and frequency scaling (DVFS), power gating to shut down inactive circuits, and optimizing data flow within the network-on-chip (NoC)—and supports smarter, more energy-efficient design choices.
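To see why voltage scaling is such a powerful lever, consider the textbook first-order CMOS dynamic-power relation P ≈ C·V²·f. This is a generic back-of-the-envelope sketch, not any particular vendor's analysis flow; the capacitance, voltage, and frequency values are arbitrary placeholders.

```python
# Illustrative first-order DVFS model: dynamic CMOS power scales roughly as
# C * V^2 * f, so lowering voltage along with frequency trades performance
# for super-linear (roughly cubic) power savings.
def dynamic_power(c_eff, v_dd, freq):
    return c_eff * v_dd**2 * freq

nominal = dynamic_power(c_eff=1.0, v_dd=0.9, freq=2.0e9)

# Hypothetical DVFS operating point: 20% lower clock, with supply voltage
# scaled proportionally (an idealization of real voltage/frequency curves).
scaled = dynamic_power(c_eff=1.0, v_dd=0.9 * 0.8, freq=2.0e9 * 0.8)

savings = 1 - scaled / nominal
print(f"power saved: {savings:.0%}")  # ~49% for a 20% performance cut
```

Real silicon adds leakage, minimum-voltage floors, and non-linear V/f curves, but the cubic first-order behavior is why DVFS decisions made at the architecture stage dominate late-stage gate-level tweaks.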

Transaction-level simulation allows teams to measure expected workloads and predict the impact of configuration changes. This early insight informs hardware-software partitioning, interface sizing, and memory placement, all critical factors in the chip’s overall efficiency.

Data movement: The hidden power sink

Computation isn’t the only factor driving energy use. In many AI chips, data movement consumes more power than the arithmetic itself. Each transfer between memory hierarchies or across chiplets adds significant overhead. This is the essence of the so-called memory wall: compute capability has outpaced memory bandwidth.

To close that gap, designers can reduce unnecessary transfers by introducing compute-in-memory or analog approaches, choosing high-bandwidth memory (HBM) interfaces, or adopting sparse algorithms that minimize data flow. The earlier the data paths are analyzed, the greater the potential savings, because late-stage fixes rarely recover wasted energy caused by poor partitioning.
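A back-of-the-envelope energy tally illustrates the memory-wall point. The per-event energies below are hypothetical order-of-magnitude placeholders (in line with the commonly cited rule of thumb that an off-chip DRAM access costs orders of magnitude more than a multiply-accumulate); only the ratios matter.

```python
# Rough, illustrative energy accounting for one inference layer. The per-event
# energies are hypothetical ballpark figures (order of magnitude only).
E_MAC_PJ = 1.0        # one multiply-accumulate
E_SRAM_PJ = 10.0      # one on-chip SRAM access
E_DRAM_PJ = 1000.0    # one off-chip DRAM access

def layer_energy_uj(macs, sram_accesses, dram_accesses):
    pj = macs * E_MAC_PJ + sram_accesses * E_SRAM_PJ + dram_accesses * E_DRAM_PJ
    return pj / 1e6

# A layer that streams every weight from DRAM on each use...
no_reuse = layer_energy_uj(macs=1e6, sram_accesses=0, dram_accesses=1e6)

# ...versus one that caches weights on-chip and reuses each 100 times.
with_reuse = layer_energy_uj(macs=1e6, sram_accesses=1e6, dram_accesses=1e4)

# The arithmetic energy is identical in both cases; data movement dominates.
print(f"{no_reuse:.0f} uJ vs {with_reuse:.0f} uJ")
```

Even with these crude numbers, the compute cost is a rounding error next to the transfer cost, which is why partitioning and data-path decisions made before RTL lock-in recover far more energy than late-stage fixes.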

The growing thermal challenge

As designs move toward multi-die and chiplet architectures, thermal density has become a first-order constraint. Packing several dies into one package creates concentrated heat zones that are difficult to manage later in the flow. Effective thermal planning, therefore, starts with system partitioning: examining how compute blocks are distributed and how heat will flow through the stack or interposer.

By modeling various configurations early, before layout or floor planning, engineers can avoid thermally stressed regions and plan for cooling strategies that support consistent performance under load.

Optimizing the real workload

Unlike traditional semiconductors, AI chips are rarely general-purpose. Whether a device runs edge inference, data center training, or specialized analytics, its efficiency depends on how closely the hardware matches the target workload. Simulation, emulation, and prototyping before tapeout make it possible to test representative use cases and fine-tune hardware parameters accordingly.

Profiling multiple operating modes, from idle to sustained training, exposes inefficiencies that might otherwise remain hidden until silicon returns from the fab. And it helps ensure the design can maintain high utilization and consistent energy performance across all conditions.

Extending efficiency beyond tapeout

Energy monitoring and management must persist even after chips are manufactured. Variability, aging, and environmental factors can shift operating characteristics over time. Integrating on-chip telemetry and control using silicon lifecycle management (SLM) solutions allows engineers to track power behavior in the field and apply adjustments to sustain optimal performance per watt throughout the product’s lifecycle.

The next breakthroughs in AI hardware will come not just from faster chips, but from smarter engineering that treats power as a foundational design dimension, not an afterthought. For today’s AI hardware, efficiency is performance.

Godwin Maben is a Synopsys Fellow.

Special Section: AI Design

The post Designing energy-efficient AI chips: Why power must be an early consideration appeared first on EDN.

Classic constant current cascode

Wed, 02/04/2026 - 15:00

An important figure of merit for all precision constant current sources is their active impedance.  Which is to say, just how “constant” is their output held against changes in applied voltage?  Frequent and expert Design Idea (DI) commentator Ashutosh Sapre (Ashu) was kind enough to measure this parameter for a design of mine and share his results. The circuit, applied as a 4 to 20mA current mirror, is shown in Figure 1 and discussed in “Combine two TL431 regulators to make versatile current mirror.”

Figure 1 A 4 to 20mA current mirror with poor active impedance.

Said Ashutosh: “I tried the fig.2 circuit for 4-20mA mirroring, with R1 and R2 of 100E, and using a TL431 (2.5V). It worked quite well. One issue I found was that the output impedance (dv/di) was quite low; there was a change of 40uA over a supply swing of 20V (if I remember correctly), not linear with supply voltage change. It is possibly due to the 2.5V reference voltage modulation with cathode voltage swing.

It could be compensated for, but some error will remain due to the non-linearity.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

His observation and analysis were both absolutely correct. Table 6.6 in the TL431 datasheet reveals a reference-voltage error of up to 2 mV per volt of cathode-to-anode voltage swing, consistent with the mediocre 20 V/40 µA = 500 kΩ active impedance he observed.

Fortunately, a simple and effective remedy is available and waiting in the pages of the common cookbook of current mirror circuits: the cascode. Figure 2 shows how it can be added (as D1 + Q2) to Figure 1.

Figure 2 D1/Q2 cascode reduces reference modulation error, improving active impedance by orders of magnitude.

The effect of the added parts is to isolate Z1’s cathode/anode voltage from voltage variation at the I2 node, thus holding the cathode/reference differential near zero and constant to within millivolts.

The resultant orders of magnitude reduction of reference modulation should produce a proportional increase in active impedance.

Thanks, Ashu!  Another example of the magic of editor Aalyia Shaukat’s DI kitchen collaboration in action!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Classic constant current cascode appeared first on EDN.

Silicon coupled with open development platforms drives context-aware edge AI

Wed, 02/04/2026 - 10:12

Edge AI reached an inflection point in 2025. What had long been demonstrated in controlled pilots—local inference, reduced latency, and improved system autonomy—began to transition into scalable, production-ready deployments across industrial and embedded markets. This shift has exposed a deeper architectural reality: many existing silicon platforms and development environments are poorly matched to the demands of modern, context-aware edge AI.

As AI workloads move from centralized cloud infrastructure to distributed edge devices, design priorities have fundamentally changed. Edge systems must execute increasingly complex models under strict constraints on power, thermal envelope, cost, and real-time determinism. Addressing these requirements demands both a new class of AI-native silicon and a development platform that is open, extensible, and aligned with modern machine learning workflows.

Why legacy architectures are no longer sufficient

Conventional microprocessors and application processors were not designed for sustained AI workloads at the edge. While they can support inference through software or add-on accelerators, their architectures typically lack three essential characteristics required for modern edge AI:

  1. Dedicated AI acceleration capable of efficiently executing convolutional, transformer-based, and multimodal workloads.
  2. Deterministic real-time processing for latency-sensitive industrial and embedded applications.
  3. Energy efficiency at scale, enabling always-on intelligence without excessive thermal or power budgets.

As edge AI applications expand beyond simple classification toward sensor fusion, contextual reasoning, and on-device generative inference, these limitations become more pronounced. The result is a growing gap between what software frameworks can express and what deployed hardware can efficiently execute.

Edge AI design as a full value chain

Successful edge AI deployment requires a system-level view spanning the entire design value chain:

Data collection and preprocessing

Industrial edge systems, for example, operate in noisy, variable environments. Training data must reflect real-world conditions such as lighting changes, mechanical vibration, sensor drift, and interference.

Hardware-accelerated execution

Today’s edge designs rely on heterogeneous compute architectures: AI-native NPUs handle dense matrix and tensor operations, while CPUs, GPUs, DSPs, and real-time cores manage control logic, signal processing, and exception handling.

Model training, adaptation, and optimization

Although training is often performed off-device, edge deployment constraints must be considered early. Transfer learning and hybrid model architectures are commonly used to balance accuracy, explainability, and compute efficiency. Hardware-aware compilation enables models to be transformed to match accelerator capabilities while maintaining deterministic performance characteristics.

The role of an open development platform

Historically, edge AI development has been fragmented across proprietary toolchains, closed runtimes, and framework-specific optimizations. This fragmentation has slowed adoption and increased development risk, particularly as model architectures evolve rapidly.

An open development platform addresses fragmentation challenges with:

  • Framework diversity: Edge developers increasingly rely on PyTorch, ONNX, JAX, TensorFlow, and emerging toolchains. Supporting this diversity requires compiler infrastructures that are framework-agnostic.
  • Rapid model evolution: The rise of transformers and large language models (LLMs) has introduced new operator patterns that closed toolchains struggle to support efficiently.
  • Long product lifecycles: Industrial and embedded devices often remain in service for a decade or more, requiring platforms that can adapt to new models without hardware redesign.

Additionally, open compiler and runtime infrastructures based on standards such as MLIR and RISC-V enable a separation between model expression and hardware execution. This decoupling allows silicon to evolve while preserving software investment.

Figure 1 Synaptics’ open edge AI development platform features Astra SoCs, the Torq compiler, and the industry’s first deployment of Google’s Coral NPU. Source: Synaptics

Context-aware AI and the move toward multimodal inference

A defining trend of edge AI in 2025 was the transition from single-sensor inference toward context-aware, multimodal systems. Rather than processing isolated data streams, edge devices increasingly combine vision, audio, motion, and environmental inputs to build a richer understanding of their surroundings.

This shift places new demands on edge platforms, which must now support:

  • Heterogeneous data types and operators
  • Efficient execution of attention mechanisms and transformer-based models
  • Low-latency fusion of multiple sensor streams

Figure 2 The Grinn OneBox AI-enabled industrial single-board computer (SBC), designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. Source: Grinn Global

Designing for scalability and future workloads

One of the key architectural challenges in edge AI is scalability—not only across product tiers, but across time. AI-native silicon must scale from low-power endpoints to higher-performance systems while maintaining software compatibility.

This is typically achieved through:

  • Modular accelerator architectures that scale performance without changing programming models.
  • Heterogeneous compute integration, allowing workloads to migrate between NPUs, CPUs, and GPUs as needed.
  • Standardized toolchains that preserve model portability across devices.

For designers, this approach reduces risk by allowing a single software stack to span multiple products and generations.

Testing, validation, and long-term reliability

Edge AI systems operate continuously and often autonomously. Validation must extend beyond functional correctness to include:

  • Worst-case latency and power analysis
  • Thermal stability under sustained workloads
  • Behavior under degraded or unexpected inputs

Monitoring and logging capabilities at the edge enable post-deployment diagnostics and iterative model improvement. As models become more complex, explainability and auditability will become increasingly important, particularly in regulated environments.

Looking ahead

In 2026, AI is expected to move further into mainstream embedded system design. The focus is shifting from proving feasibility to optimizing performance, reliability, and lifecycle cost. This transition highlights the importance of aligning silicon architecture, software openness, and system-level design practices.

A new class of AI-native silicon, coupled with an open and extensible development platform, provides a foundation for this next phase. For system designers, the challenge—and opportunity—is to treat edge AI not as an add-on feature, but as a core architectural element spanning the entire design value chain.

Neeta Shenoy is VP of marketing at Synaptics.

Special Section: AI Design

The post Silicon coupled with open development platforms drives context-aware edge AI appeared first on EDN.

EDN announces Product of the Year Awards

Tue, 02/03/2026 - 20:30

EDN has announced the winners of the annual Electronic Products Product of the Year Awards in the January/February digital magazine. For the awards’ 50th year, EDN editors looked at over 100 products across 13 component categories to select the best new components. These categories include analog/mixed-signal ICs, development kits, digital ICs, electromechanical devices, interconnects, IoT platforms, modules, optoelectronics, passives, power, RF/microwave, sensors, and test and measurement.

These award-winning products demonstrate a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, and/or the potential for new product designs and opportunities. This year, the awards have two ties, in the categories of power and sensors.

Also in the January/February issue, we look at some of the most advanced electronic components launched at the Consumer Electronics Show (CES). This year’s show highlighted the rise of AI across applications from automotive to smart glasses. Chipmakers are placing big bets on edge AI as a key growth area along with robotics, IoT, and automotive.

A few new AI chip advances announced at CES include Ambarella Inc.’s CV7 edge AI vision system-on-chip, optimized for a wide range of AI perception applications, and Ambiq Micro’s industry-first ultra-low-power neural processing unit built on its Subthreshold Power Optimized Technology platform and designed for real-time, always-on AI at the edge.

Though chiplets hold big promises in delivering more compute capacity and I/O bandwidth, design complexity has been a challenge. Cadence Design Systems Inc. and its IP partners may have made this a bit easier with pre-validated chiplets, targeting physical AI, data center, and high-performance-computing applications. At CES, Cadence announced a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets while reducing risk.

We also spotlight the top 10 edge AI chips with an updated ranking, curated by AspenCore’s resident AI expert, EE Times senior reporter Sally Ward-Foxton. As highlighted by several CES product launches, more and more AI chips are being designed for every application niche as edge devices become AI-enabled. These devices range from handling multimodal large language models in edge devices to those designed for vision processing and minimizing power consumption for always-on applications.

Giordana Francesca Brescia, contributing writer for Embedded.com, looks at microcontrollers with on-chip AI and how they are transforming embedded hardware into intelligent nodes capable of analyzing and generating information. In addition to hardware innovations, she also covers software development and key areas of application such as biomedical and industrial automation.

We also spotlight several emerging trends in 2026, from 800-VDC power architectures in AI factories and battery energy storage systems (BESSes) to advances in autonomous farming and power devices for satellites.

The wide adoption of AI models has led to a redesign of data center infrastructure, according to contributing writer Stefano Lovati. Traditional data centers are being replaced with AI factories to meet the computational capacity and power requirements needed by today’s machine-learning and generative AI workloads.

However, a single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range, Lovati said. This has led to the design of an 800-VDC power architecture to support the multi-megawatt power demand of the compute racks in next-generation AI factories.

Lovati also discusses how wide-bandgap semiconductors such as silicon carbide and gallium nitride can deliver performance and efficiency benefits when implementing an 800-VDC architecture.

The adoption of BESSes is primarily being driven by the need to improve efficiency and stability in power distribution networks. BESSes can balance supply and demand by storing energy from both renewable sources and the conventional power grid, Lovati said. This helps stabilize power grids and optimize power use.

Lovati covers emerging trends in BESSes, including advances in battery technologies, hybrid energy storage systems—integrating batteries with alternative energy storage technologies such as supercapacitors or flywheels—and AI-based solutions for optimization. Some of the alternatives to lithium-ion discussed include flow batteries and sodium-ion and aluminum-ion batteries.

We also look at the challenges of selecting the right power supply components for satellites. Not only do they need to be rugged and small, but they must also be configurable for customization.

The configurability of power supplies is an important factor for meeting a variety of space mission specifications, according to Amit Gole, marketing product manager for the high-reliability and RF business unit at Microchip Technology.

Voltage levels in the electrical power bus are generally standardized to certain values; however, the voltage of the solar array is not always standardized, Gole said, which calls for a redesign of all of the converters in the power subsystems, depending on the nature of the mission.

Because this redesign can result in cost and development time increases, it is important to provide DC/DC converters and low-dropout regulators across the power architecture that have standard specifications while providing the flexibility for customization depending on the system and load voltages, he said.

Gole said functions such as paralleling, synchronization, and series connection are of key importance for power supplies when considering the specifications of different space missions.

We also look at the latest advances in smart farming. Driven by the need to improve agricultural productivity and meet growing global food demand, smart farming has emerged to support farming operations through the latest advances in robotics, sensor technology, and communication technology, according to Liam Critchley, contributing writer for EE Times.

One of the key trends in smart farming is the use of drones, which help optimize a variety of farming operations. These include monitoring crop and soil health, communicating updates to the farmer, and active operations such as planting seeds and field spraying. Drones leverage technologies such as advanced sensors, communication and IoT technologies and, in some cases, AI.

Critchley said one of the biggest developing areas is the integration of AI and machine learning. While some drones have these features, many smart drones will soon use AI to identify various pests and diseases autonomously, eliminating the need for human intervention.

Cover image: Adobe Stock

The post EDN announces Product of the Year Awards appeared first on EDN.

EDN announces winners of the 2025 Product of the Year Awards

Tue, 02/03/2026 - 15:05

The annual awards program, now in its 50th year, recognizes outstanding products that represent any of the following qualities: a significant advancement in a technology or its application, an exceptionally innovative design, a substantial achievement in price/performance, improvements in design performance, and the potential for new product designs/opportunities. EDN editors evaluated 100+ products across 13 categories. There are two ties, in the power and sensors categories. Here are this year’s winners:

  • Allegro MicroSystems Inc. and SensiBel (Sensors)
  • Ambiq (Development Kits)
  • Cree LED (Optoelectronics)
  • Circuits Integrated Hellas (Modules)
  • Empower Semiconductor and Ferric Corp. (Power)
  • Littelfuse Inc. (Passives)
  • Marvell Technology Inc. (Interconnects)
  • Morse Micro Ltd. (IoT Platforms)
  • Renesas Electronics Corp. (Digital ICs)
  • Rohde & Schwarz (Test & Measurement)
  • Semtech Corp. (RF/Microwave)
  • Sensata Technologies (Electromechanical)
  • Stathera Inc. (Analog/Mixed-Signal ICs)
Allegro MicroSystems Inc. Sensors: ACS37100 magnetic current sensor

Allegro MicroSystems’ ACS37100 is a fully integrated tunneling magnetoresistive (TMR) current sensor that delivers high accuracy and low noise for demanding control loop applications. Marking a critical inflection point for magnetic sensors, it is the industry’s first commercially available magnetic current sensor to achieve 10-MHz bandwidth and 50-ns response time, the company said.

The ACS37100 magnetic current sensor, based on Allegro’s proprietary XtremeSense TMR technology, is 10× faster and generates 4× lower noise than alternative Hall-based sensors. This performance solves challenges in high-voltage power conversion, especially related to gallium nitride (GaN) and silicon carbide (SiC) solutions. The ACS37100 helps power system designers leverage the full potential of fast-switching GaN and SiC FETs by providing precise current measurement and integrated overcurrent fault detection.

The current sensor delivers low noise of 26 mA (rms) across the full 10-MHz bandwidth, enabling precise, high-speed current measurements for more accurate and responsive system performance.
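For comparison with other current sensors, the two headline specs can be folded into an equivalent input-referred noise density (a simplification that assumes roughly flat noise across the band):

```python
import math

# Fold the quoted integrated noise and bandwidth into a noise density,
# assuming (as a simplification) a flat noise spectrum over the band.
i_noise_rms = 26e-3   # A rms, integrated over the full bandwidth
bandwidth_hz = 10e6   # Hz

density = i_noise_rms / math.sqrt(bandwidth_hz)  # A/sqrt(Hz)
print(f"noise density: {density * 1e6:.1f} uA/sqrt(Hz)")  # ~8.2 uA/sqrt(Hz)
```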

While GaN and SiC promise greater power density and efficiency, the faster switching speeds of wide-bandgap semiconductors create significant control challenges. With bandwidths limited to sub-megahertz frequencies, conventional magnetic current sensors lack the speed and precision to provide the high-fidelity, real-time data required for stable control and protection loops, Allegro MicroSystems said.

Target applications include electric vehicles, clean-energy power conversion systems, and AI data center power supplies, in which the 10-MHz bandwidth and 50-ns response time provide the high-fidelity data needed. The operating temperature range is –40°C to 150°C.

Allegro MicroSystems’ ACS37100 TMR magnetic current sensor (Source: Allegro MicroSystems Inc.)

Ambiq Development Kits: neuralSPOT AI development kit

Ambiq’s neuralSPOT software development kit (SDK) is designed specifically for embedded AI on the company’s ultra-low-power Apollo system-on-chips (SoCs). It helps AI developers handle the complex process of model integration with a streamlined and scalable workflow.

The SDK provides a comprehensive toolkit comprising Ambiq-optimized libraries, feature extractors, device drivers, and pre-trained AI models, making it easier for developers to quickly prototype, test, and deploy models using real-world sensor data while integrating optimized static libraries into production applications. This reduces both development effort and energy consumption.

The neuralSPOT SDK and Toolkit bridge the gap between AI model creation, deployment, and optimization, Ambiq said, enabling developers to move from concept to prototype in minutes, not days. This is thanks in part to its intuitive workflow, pre-validated model templates, and seamless hardware integration.

The latest neuralSPOT V1.2.0 Beta release includes ready-to-use example implementations of popular AI applications, such as human activity recognition for wearable and fitness analytics, ECG monitoring, keyword spotting, speech enhancement, and speaker identification.

Key challenges that the neuralSPOT SDK addresses include high power consumption, energy limits, limited development tools, and complex setup. This is particularly important when enabling AI on compact, battery-powered edge devices in which manufacturers must balance performance, power efficiency, and usability.

The SDK provides a unified, developer-friendly toolkit with Ambiq-optimized libraries, drivers, and ready-to-deploy AI models, which reduces setup and integration time from days to hours. It also simplifies model validation for consistent results and quicker debugging and provides real-time insights into energy performance, helping developers meet efficiency goals early in the design process.

Ambiq’s neuralSPOT for the Apollo5 SoCs (Source: Ambiq)

Circuits Integrated Hellas Modules: Kythrion antenna-in-package

The Kythrion chipset from Circuits Integrated Hellas (CIH) is called a game-changer for satellite communications. It is the first chipset to integrate transmit, receive, and antenna functions into a proprietary 3D antenna-in-package and system-in-package architecture.

By vertically stacking III-V semiconductors (such as gallium arsenide and GaN) with silicon, Kythrion achieves more than 60% reductions in size, weight, power, and cost compared with traditional flat-panel antenna modules, according to the company. This integration eliminates unnecessary printed-circuit-board (PCB) layers by consolidating RF, logic, and antenna elements into a dense 3D chip for miniaturization and optimized thermal management within the package. This also simplifies system complexity by combining RF and logic control on-chip.

CIH said this leap in miniaturization allows satellites to carry more advanced payloads without increasing mass or launch costs, while its 20× bandwidth improvement delivers real-time, high-throughput connectivity. These features deliver benefits to aerospace, defense, and commercial networks, with applications in satellite broadband, 5G infrastructure, IoT networks, wireless power, and defense and aviation systems.

Compared with traditional commercial off-the-shelf phased-array antennas, which typically require hundreds of separate chips (e.g., 250 transmit and 250 receive chips) and a larger, roughly 4U footprint, Kythrion reduces the module count to just 50 integrated modules, fitting into a compact 1U form factor. This cuts weight from 3-4 kg down to approximately 1.5 kg, while power consumption is lowered by 15%. Cost per unit is also significantly reduced, CIH said.

The company also considered sustainability when designing the Kythrion antenna-in-package. It uses existing semiconductor processes to eliminate capital-intensive retooling, which lowers carbon impact. In addition, by reducing satellite mass, each kilogram saved in satellite payload can reduce up to 300 kg of CO2 emissions per launch, according to CIH.

CIH’s Kythrion antenna-in-package (Source: Circuits Integrated Hellas)

Cree LED, a Penguin Solutions brand Optoelectronics: XLAMP XP-L Photo Red S Line LEDs

Advancing horticulture lighting, Cree LED, a Penguin Solutions brand, launched the XLAMP XP-L Photo Red S Line LEDs, optimized for large-scale growing operations, including greenhouses and vertical farms, with higher efficiency and durability.

Claiming a new standard in efficiency and durability for horticultural LED lighting, the XLAMP XP-L Photo Red S Line LEDs provide a 6% improvement in typical wall-plug efficiency over the previous generation, reaching 83.5% at 700 mA and 25°C. Horticultural customers can maintain the same output with less power consumption to reduce operating costs, or they can lower initial costs with a redesign that cuts the number of Photo Red LEDs required by up to 35%, Cree LED said.

Thanks to their advanced S Line technology, the XP-L Photo Red LEDs offer high sulfur and corrosion resistance that extends their lifespan and delivers reliable performance. These features reduce maintenance costs while enabling the devices to withstand harsh greenhouse environments, the company said.

Other key specifications include a maximum drive current of 1,500 mA, a low thermal resistance of 1.15°C/W, and a wide viewing angle of 125°. The LEDs are binned at 25°C. They are RoHS- and REACH-compliant.

These LEDs also provide seamless upgrades in existing designs with the same 3.45 × 3.45-mm XP package as the previous XP-G3 Photo Red S Line LEDs.

Cree LED’s XLamp XP-L Photo Red S Line LEDs (Source: Cree LED, a Penguin Solutions brand)

Empower Semiconductor Power: Crescendo vertical power delivery

Empower Semiconductor describes Crescendo as the industry’s first true vertical power delivery platform designed for AI and high-performance-computing processors. The Crescendo chipset sets a new industry benchmark with 20× faster response and breakthrough sustainability, enabling gigawatt-hours of energy savings per year for a typical AI data center.

The vertical architecture achieves multi-megahertz bandwidth, 5× higher power density, and over 20% lower delivery losses while minimizing voltage droop and accelerating transient response. The result is up to 15% lower xPU power consumption and a significant boost in performance per watt, claiming a new benchmark for efficiency and scalability in AI data center systems.

The Crescendo platform is powered by Empower’s patented FinFast architecture. Scalable beyond 3,000 A, Crescendo integrates the regulators, magnetics, and capacitors into a single, ultra-thin package that enables direct placement underneath the SoC. This relocates power conversion to where it’s needed most for optimum energy and performance, according to the company.

Empower said the Crescendo platform is priced to be on par with existing power delivery solutions while offering greater performance, energy savings, and lower total cost of ownership for data centers.

Empower’s Crescendo vertical power delivery (Source: Empower Semiconductor)

Ferric Corp. Power: Fe1766 DC/DC step-down power converter

Ferric’s Fe1766 160-A DC/DC step-down power converter offers industry-leading power density and performance in an ultra-compact, 35-mm2 package with just 1-mm height. The Fe1766 is a game-changer for high-performance computing, AI accelerators, and data center processors with its extremely compact form factor, high power density, and 100× faster switching speeds for precise, high-bandwidth regulation, Ferric said.

Integrating inductors, capacitors, FETs, and a controller into a single module, the Fe1766 offers 4.5-A/mm2 power density, which makes it 25× smaller than traditional alternatives, according to the company. The integrated design translates into a board area reduction of up to 83%.

The Fe1766 switches at 30 to 100 MHz, delivering extremely fast power conversion with high-bandwidth regulation, 30% better efficiency than conventional solutions, and 20% lower cost than existing designs. Other features include real-time telemetry (input voltage, output voltage, current, and temperature) and comprehensive fault protection (UVLO, OVP, UVP, OCP, OTP, etc.), providing both reliability and performance.

However, the most significant feature is its scalability, with gang operation of up to 64 devices in parallel delivering more than 10 kA directly to the processor core. This makes it suited for next-generation multi-core processors, GPUs, FPGAs, and ASICs in high-density and high-performance systems, keeping pace with growth in computing power and core counts, particularly in AI, machine learning, and data centers.
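The density and gang-operation figures are easy to sanity-check against each other using only the numbers quoted in this brief (a quick illustrative calculation, not vendor data):

```python
# Sanity check of the Fe1766 figures quoted above.
area_mm2 = 35.0           # package area, mm^2
density_a_per_mm2 = 4.5   # quoted current-per-area figure, A/mm^2
per_module_a = area_mm2 * density_a_per_mm2
print(per_module_a)       # 157.5 A, consistent with the 160-A rating

modules = 64              # maximum gang size
total_a = modules * 160   # 160 A per converter
print(total_a)            # 10240 A, i.e., "exceeding 10 kA"
```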

Ferric’s Fe1766 DC/DC step-down power converter (Source: Ferric Corp.)

Littelfuse Inc. Passives: Nano2 415 SMD fuse

The Littelfuse Nano2 415 SMD fuse is the industry’s first 277-VAC surface-mount fuse rated for a 1,500-A interrupting current. Previously, this was achievable only with larger through-hole fuses, according to the company. It allows designers to upgrade protection and transition to automated reflow processes, reducing assembly costs while improving reliability and surge-withstand capability.

The Nano2 415 SMD fuse bridges the gap between legacy cartridge and compact SMD solutions while advancing both performance and manufacturability, Littelfuse said. Its compact, 15 × 5-mm footprint and time-lag characteristic protect high-voltage, high-fault-current circuits while enabling reflow-solder assembly. It is compliant with UL/CSA/NMX 248-1/-14 and EN 60127-1/-7.

The Nano2 415 SMD Series offers high I²t performance. It is halogen-free and RoHS-compliant. Applications include industrial power supplies, inverters, and converters; appliances and HVAC systems; EV chargers and lighting control; and smart building and automation systems.

Littelfuse’s Nano2 415 SMD Fuse (Source: Littelfuse Inc.)

Marvell Technology Inc. Interconnects: 3-nm 1.6-Tbits/s PAM4 Interconnect Platform

The Marvell 3-nm 1.6-Tbits/s PAM4 Interconnect Platform claims the industry’s first 3-nm process node optical digital-signal processor (DSP) architecture, targeting bandwidth, power efficiency, and integration for AI and cloud infrastructure. The platform integrates eight 200G electrical lanes and eight 200G optical lanes in a compact, standardized module form factor.

The new platform sets a new standard in optical interconnect technology by integrating advanced laser drivers and signal processing in a single, compact device, Marvell said. This reduces power per bit and simplifies system design across the entire AI data center network stack.

The 3-nm PAM4 platform addresses the I/O bandwidth bottleneck by combining next-generation SerDes technology and laser driver integration to achieve higher bandwidth and power performance. It leverages 200-Gbits/s SerDes and integrated optical modulator drivers to reduce 1.6-Tbits/s optical module power by over 20%. The energy-efficiency improvement reduces operational costs and enables new AI server and networking architectures to meet the requirements for higher bandwidth and performance for AI workloads, within the significant power constraints of the data center, Marvell said.

The 1.6-Tbits/s PAM4 DSP enables low-power, high-speed optical interconnects that support scale-out architectures across racks, rows, and multi-site fabrics. Applications include high-bandwidth optical interconnects in AI and cloud data centers, GPU-to-GPU and server interconnects, rack-to-rack and campus-scale optical networking, and Ethernet and InfiniBand scale-out AI fabrics.

The DSP platform reduces module design complexity and power consumption for denser optical connectivity and faster deployment of AI clusters. With a modular architecture that supports 1.6 Tbits/s in both Ethernet and InfiniBand environments, this platform allows hyperscalers to future-proof their infrastructure for the transition to 200G-per-lane signaling, Marvell said.

Morse Micro Pty. Ltd. IoT Platforms: MM8108 Wi-Fi HaLow SoC

Morse Micro claims that the MM8108 Wi-Fi HaLow SoC is the smallest, fastest, lowest-power, and farthest-reaching Wi-Fi chip. The MM8108, built on the IEEE 802.11ah standard, establishes a new benchmark for performance, efficiency, and scalability in IoT connectivity. It delivers data rates up to 43.33 Mbits/s using the industry’s first sub-gigahertz, 256-QAM modulation, combining long-range operation with true broadband throughput.

The MM8108 Wi-Fi HaLow extends Wi-Fi’s reach into the sub-1-GHz spectrum, enabling multi-kilometer connectivity, deep penetration through obstacles, and support for 8,000+ devices per access point. Outperforming proprietary LPWAN and cellular alternatives while maintaining full IP compatibility and WPA3 enterprise security, the wireless platform reduces deployment cost and power consumption by up to 70%, accelerates certification, and expands Wi-Fi’s use beyond homes and offices to cities, farms, and factories, Morse Micro said.

The MM8108 SoC’s integrated 26-dBm power amplifier and low-noise amplifier achieve “outstanding” link budgets and global regulatory compliance without external SAW filters. It also simplifies system design and reduces power draw with a 5 × 5-mm BGA package, USB/SDIO/SPI interfaces, and host-offload capabilities. This allows devices to run for years on a coin-cell or solar battery, Morse Micro said.

The MM8108-RD09 USB dongle complements the SoC, enabling fast HaLow integration with existing Wi-Fi 4/5/6/7 infrastructure. It demonstrates plug-and-play Wi-Fi HaLow deployment for industrial, agricultural, smart city, and consumer applications. The dongle is fully IEEE 802.11ah–compliant and Wi-Fi CERTIFIED HaLow-ready, allowing developers to test and commercialize Wi-Fi HaLow solutions quickly.

Together, the MM8108 and RD09 combine kilometer-scale range, 100× lower power consumption, and 10× higher capacity than conventional Wi-Fi while maintaining the simplicity, interoperability, and security of the wireless standard, the company said.

Applications range from smart cities (lighting, surveillance, and environmental monitoring networks spanning kilometers) and industrial IoT (predictive maintenance, robotics, and asset tracking in factories and warehouses) to agriculture (solar-powered sensors for crop, irrigation, and livestock management), retail and logistics (smart shelves, POS terminals, and real-time inventory tracking), and healthcare (long-range, low-power connectivity for remote patient monitoring and smart appliances).

Morse Micro’s MM8108 Wi-Fi HaLow SoC (Source: Morse Micro Pty. Ltd.)

Renesas Electronics Corp. Digital ICs: RA8P1 MCUs

The RA8P1 is Renesas’s first group of 32-bit AI-accelerated microcontrollers (MCUs) powered by the high-performance Arm Cortex-M85 (CM85) with Helium MVE and the Ethos-U55 neural processing unit (NPU). With advanced AI, it enables voice, vision, and real-time-analytics AI applications on a single chip. The NPU supports commonly used networks, including DS-CNN, ResNet, MobileNet, and TinyYolo. Depending on the neural network used, the Ethos-U55 provides up to 35× more inferences per second than the Cortex-M85 processor on its own, according to the company.

The RA8P1, optimized for edge and endpoint AI applications, uses the Ethos-U55 NPU to offload compute-intensive convolutional and recurrent neural-network operations from the CPU, delivering up to 256 MACs per cycle (256 GOPS of AI performance at 500 MHz) alongside breakthrough CPU performance of over 7,300 CoreMarks, Renesas said.

The RA8P1 MCUs integrate high-performance CPU cores with large memory, multiple external memory interfaces, and a rich peripheral set optimized for AI applications.

The MCUs, built on the advanced, 22-nm ultra-low-leakage process, are available in single- and dual-core options, with a Cortex-M33 core embedded on the dual-core MCUs. Single- and dual-core devices in 224- and 289-BGA packages address diverse use cases across broad markets. This process also enables the use of embedded magnetoresistive RAM, which offers faster write speeds, in the new MCUs.

The MCUs also provide advanced security. Secure Element–like functionality, along with Arm TrustZone, is built in with advanced cryptographic security IP, immutable storage, and tamper protection to enable secure edge AI and IoT applications.

The RA8P1 MCUs are supported by Renesas’s Flexible Software Package, a comprehensive set of hardware and software development tools, and RUHMI (Renesas Unified Heterogenous Model Integration), a highly optimized AI software platform that provides all necessary tools for AI development, model optimization, and conversion. RUHMI is fully integrated with the company’s e2 studio integrated design environment.

Renesas Electronics’ RA8P1 MCU group (Source: Renesas Electronics Corp.)

Rohde & Schwarz Test & Measurement: FSWX signal and spectrum analyzer

The Rohde & Schwarz FSWX is the first signal and spectrum analyzer with multichannel spectrum analysis, cross-correlation, and I/Q preselection. It features an internal multi-path architecture and high RF performance, with an internal bandwidth of 8 GHz, allowing for comprehensive analysis even of complex waveforms and modulation schemes.

According to Rohde & Schwarz, this represents a fundamental paradigm shift in signal-analysis technology. Cross-correlation cancels the inherent noise of the analyzer and gives a clear view of the device under test, pushing the noise level down to the physical limit for higher dynamic range in noise, phase noise, and EVM measurements.

By eliminating its own noise contribution (a big challenge in measurement science), the FSWX reveals signals 20–30 dB below what was previously measurable, enabling measurements that were impossible with traditional analyzers, the company said.

Addressing critical challenges across multiple industries, the multichannel FSWX can measure two signal sources simultaneously through synchronous input ports, each with 4-GHz analysis bandwidth. This opens phase-coherent measurements of antenna arrays used in beamforming for wireless communications, as well as in radar sensors and electronic warfare systems. For 5G and 6G development, the cross-correlation feature enables accurate EVM measurements below –50 dB that traditional analyzers cannot achieve, according to Rohde & Schwarz.

In radar and electronic warfare applications, the dual channels can simultaneously measure radar signals and potential interference from 5G/Wi-Fi systems. In addition, for RF component makers, the FSWX performs traditional spectrum analyzer measurements, enabling third-order intercept measurements near the thermal noise floor without any internal or external amplification.

The FSWX uses broadband ADCs with filter banks spanning the entire operating frequency range, allowing for pre-selected signal analysis while eliminating the need for YIG filters. This solves “a 50-year-old compromise between bandwidth and selectivity in spectrum analyzer design,” according to the company, while providing improved level-measurement accuracy and much faster sweep times.

No other manufacturer offers dual synchronous RF inputs with phase coherence, cross-correlation for general signals, 8-GHz preselected bandwidth, and multi-domain triggering across channels, according to Rohde & Schwarz. This makes it an architectural innovation rather than an incremental improvement.

Rohde & Schwarz’s FSWX signal and spectrum analyzer (Source: Rohde & Schwarz)

Semtech Corp. RF/Microwave: LR2021 RF transceiver

The LR2021 is the first transceiver chip in Semtech’s LoRa Plus family, leveraging its fourth-generation LoRa IP that supports both terrestrial and SATCOM across sub-gigahertz, 2.4-GHz ISM bands, and licensed S-band. The transceiver is designed to be backward-compatible with previous LoRa devices for seamless LoRaWAN compatibility while featuring expanded physical-layer modulations for fast, long-range communication.

The LR2021 is the first transceiver to unify terrestrial (sub-gigahertz, 2.4-GHz ISM) and satellite (licensed S-band) communications on a single chip, eliminating the traditional requirement for separate radio platforms. This enables manufacturers to deploy hybrid terrestrial-satellite IoT solutions with single hardware designs, reducing development complexity and inventory costs for global deployments.

The LR2021 also delivers a high data rate of up to 2.6 Mbits/s, enabling the transmission of higher data-rate content with outstanding link budget and efficiency. The transceiver enables the use of sensor-collected data to train AI models, resulting in better control of industrial applications and support of new applications.

This represents a 13× improvement over Gen 3 LoRa transceivers (Gen 3 SX1262: maximum 200-kbits/s LoRa data rate), opening up new application categories previously impossible with LPWAN technology, including real-time audio classification, high-resolution image recognition, and edge AI model training from battery-powered sensors.
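The 13× figure follows directly from the two peak data rates quoted:

```python
# Ratio of the peak LoRa data rates quoted above.
lr2021_mbps = 2.6    # LR2021 peak data rate
sx1262_mbps = 0.2    # Gen 3 SX1262 maximum (200 kbits/s)
print(round(lr2021_mbps / sx1262_mbps))  # 13
```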

It also offers enhanced sensitivity down to –142 dBm @ SF12/125 kHz, representing a 6-dB improvement over Gen 3 devices (Gen 3 SX1262: –148-dBm maximum sensitivity at lower spreading factors, typically –133-dBm operational sensitivity). The enhanced sensitivity extends coverage range and improves deep-indoor penetration for challenging deployment environments.

Simplifying global deployment, the transceiver supports multi-region deployment via a single-SKU design. The integration reduces bill-of-material costs, PCB footprint, and power consumption compared with previous LoRa transceivers. The increased frequency-offset tolerance removes the need for a TCXO and tight thermal control, eliminating components that traditionally added cost and complexity to multi-region designs.

The device is compatible with various low-power wireless protocols, including Amazon Sidewalk, Meshtastic, W-MBUS, Wi-SUN FSK, and Z-Wave when integrated with third-party stack offerings.

Semtech’s LR2021 RF transceiver (Source: Semtech Corp.)

Sensata Technologies Inc. Electromechanical: High Efficiency Contactor

Sensata claims a breakthrough electromechanical solution with its High Efficiency Contactor (HEC), designed to accelerate the transition to next-generation EVs by enabling seamless compatibility between 400-V and 800-V battery architectures. As the automotive industry moves toward ultra-fast charging and higher efficiency, the HEC targets vehicles that can charge rapidly at both legacy and next-generation charging stations.

By enabling seamless reconfiguration between 400-V and 800-V battery systems, the HEC allows EVs to charge efficiently at both legacy 400-V charging stations and emerging 800-V ultra-fast chargers, ensuring compatibility and eliminating infrastructure barriers for OEMs and end users.

A key differentiator is its ability to dramatically reduce system complexity and cost. By integrating three high-voltage switches into a single, compact device, the HEC achieves up to a 50% reduction in component count compared with traditional battery-switching solutions, according to Sensata, simplifying system integration and lowering costs.

The HEC withstands short-circuit events up to 25 kA and mechanical shocks greater than 90 g while maintaining ultra-low contact resistance (~50 μΩ) for minimal energy loss.

The HEC features a unique mechanical synchronization that ensures safer operation by eliminating the risk of short-circuit events (a critical safety advancement for high-voltage EV systems). It also offers a bi-stable design and ultra-low contact resistance that contribute to greater energy efficiency during both charging and driving.

The bi-stable design eliminates the need for holding power, further improving energy efficiency, Sensata said.

The HEC targets automotive, truck, and bus applications including vehicle-to-grid, autonomous driving, and megawatt charging scenarios. It is rated to ASIL-D.

Sensata’s High Efficiency Contactor (Source: Sensata Technologies)

SensiBel Sensors: SBM100B MEMS microphone

SensiBel’s SBM100B optical MEMS digital output microphone delivers 80-dBA signal-to-noise ratio (SNR) and 146-dB SPL acoustic overload point (AOP). Leveraging its patented optical sensing technology, the SBM100B achieves performance significantly surpassing anything that is available on the market today, according to the company. It delivers the same audio recording quality that users experience with professional studio microphones but in a small-form-factor microphone.

The 80-dB SNR delivers cleaner audio, reducing hiss and preserving clarity in quiet recordings. It is a significant achievement in noise and dynamic range performance for MEMS microphones, and it’s a level of audio performance that capacitive and piezo MEMS microphone technologies cannot match, the company said.

The SBM100B is also distortion-proof in high-noise environments. Offering an AOP of up to 146-dB SPL, the SBM100B delivers high performance, even in very loud environments, which often have high transient peaks that easily exceed the overload point of competitive microphones, SensiBel said.

The microphone offers studio-quality performance in a compact MEMS package (6 × 3.8 × 2.5-mm, surface-mount, reflow-solderable, bottom-port). With a dynamic range of 132 dB, it prevents distortion in loud environments while still capturing subtle audio details. It supports standard PDM, I2S, and TDM digital interfaces.

The SBM100B also supports multiple operational modes, which optimizes performance and battery life. This allows designers to choose between the highest performance or optimized power while still operating with exceptional SNR. It also supports sleep mode with very low current consumption. An optional I2C interface is available for customization of built-in microphone functions, including bi-quad filters and digital gain.

Applications include general conferencing systems, industrial sound detection, microphone arrays, over-the-ear and true wireless stereo headsets and earbuds, pro audio devices, and spatial audio, including VR/AR headsets, 3D soundbars, and field recorders.

SensiBel’s SBM100B MEMS microphone (Source: SensiBel)

Stathera Inc. Analog/Mixed-Signal ICs: ST320 DualMode MEMS oscillator

Stathera’s ST320 DualMode MEMS oscillator, in a 2.5 × 2.0 × 0.95-mm package, is a timing solution that generates both kilohertz and megahertz signals from a single resonator. It is claimed to be the first DualMode MEMS timing device capable of replacing two traditional oscillators.

The DualMode capability provides both a kilohertz clock (32.768 kHz) for low-power mode and a megahertz clock (configurable 1–40 MHz) for control and communication. This simplifies embedded system design and enhances performance and robustness while extending battery life and reducing PCB footprint and system cost.

Key specifications include a frequency stability of ±20 ppm, a voltage range of 1.62 to 3.63 V, and an operating temperature of –40°C to 85°C. Other features include LVCMOS output and four configurable power modes. This device can be used in consumer, wearables, IoT, edge AI, and industrial applications.

Stathera’s ST320 DualMode MEMS oscillator (Source: Stathera Inc.)

The post EDN announces winners of the 2025 Product of the Year Awards appeared first on EDN.

Short push, long push for sequential operation of multiple power supplies

Tue, 02/03/2026 - 15:00

Industrial systems normally use both analog and digital circuits. While digital circuits include microcontrollers that operate at 5 VDC, analog circuits generally operate at either 12 or 15 VDC. In some systems, it may be necessary to switch on power supplies in sequence: first 5 VDC to the digital circuits and then 15 VDC to the analog circuits.

Wow the engineering world with your unique design: Design Ideas Submission Guide

During switch-off, the order reverses: first 15 VDC is removed, and then 5 VDC. For such requirements, Figure 1’s circuit comes in handy.

Figure 1 Single pushbutton switches on or off 5 V and 15 V supplies sequentially. LEDs D1, D2 indicate the presence of 5 V and 15 V supplies. Adequate heat sinks may be provided for Q2 and Q3, depending upon the load currents. Suitable capacitors may be added at the outputs of 5 V and 15 V.

When you push the button momentarily once, 5 VDC is applied to digital circuits, including microcontroller circuits, and then 15 VDC to analog circuits after a preset delay. When you push the button SW1 for a long time, say 2 seconds, the 15-V supply is withdrawn first, and then the 5-V supply. Hence, one push button does both (sequential) ON and OFF functions.

This Design Idea (DI) is intended for MCU-based projects. No additional components/circuitry are needed to implement this function. When you push SW1 (2-pole push button) momentarily, 5 VDC is extended to the digital circuit through the closure of the first pole of SW1. The microcontroller code should now load HIGH to the output port bit PB0. Due to this, Q1 conducts, pulling the gate of Q2 to LOW. Hence, Q2 now conducts and holds 5 VDC to the digital circuit even after releasing SW1.

Next, the code should load HIGH to the output port bit PB1 after a preset delay. This will make Q4 conduct and pull the gate of Q3 LOW. Hence, Q3 conducts, and 15 VDC is extended to the analog circuit. Now, the MCU can carry out its other intended functions.

To switch off the supplies in sequence, push SW1 for a long time, say 2 seconds. Through the second pole of SW1, input port line PB2 is pulled LOW. This 2+ second LOW must be detected by the microcontroller code, either by interrupt or by polling, which then starts the switch-off sequence by loading LOW to the port bit PB1. This switches off Q4 and hence Q3, removing the 15-V supply from the analog circuit. Next, the code should load LOW to PB0 after a preset delay. This switches off Q1 and hence Q2, so that 5 VDC is disconnected from the digital/microcontroller circuit.

Thus, a single push button switches on and switches off the 5-V and 15-V supplies in sequence. This idea can be extended to any number of circuits and sequences, as needed, and is intended for use in MCU-based projects without introducing extra components/circuitry. In this design, an ATMEGA 328P MCU and IRF4435 P-channel MOSFETs are used. For circuits without an MCU, I will offer a scheme to do this function in my next DI.
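The button-handling logic described above is simple enough to capture in a host-testable sketch. The following Python model of the sequencing (pin names follow the article; the delay values and class structure are illustrative assumptions, not the author's actual firmware):

```python
import time

LONG_PRESS_S = 2.0   # assumed long-press threshold for switch-off
SEQ_DELAY_S = 0.1    # assumed preset delay between the two rails

class RailSequencer:
    """Models the PB0/PB1 outputs driving the 5-V and 15-V rail switches."""
    def __init__(self, delay_fn=time.sleep):
        self.pb0 = 0   # HIGH -> Q1/Q2 conduct -> 5-V rail held on
        self.pb1 = 0   # HIGH -> Q4/Q3 conduct -> 15-V rail on
        self._delay = delay_fn
        self.log = []  # record of switching order, for verification

    def power_on(self):
        # Short push: latch 5 V first, then 15 V after a preset delay.
        self.pb0 = 1; self.log.append("5V on")
        self._delay(SEQ_DELAY_S)
        self.pb1 = 1; self.log.append("15V on")

    def on_button_low(self, held_s):
        # PB2 held LOW: if past the long-press threshold, power down
        # in reverse order (15 V first, then 5 V).
        if held_s >= LONG_PRESS_S:
            self.pb1 = 0; self.log.append("15V off")
            self._delay(SEQ_DELAY_S)
            self.pb0 = 0; self.log.append("5V off")

seq = RailSequencer(delay_fn=lambda s: None)  # skip real delays for a dry run
seq.power_on()
seq.on_button_low(held_s=2.5)
print(seq.log)  # ['5V on', '15V on', '15V off', '5V off']
```

A short press (held less than the threshold) leaves both rails latched on, matching the circuit's hold-on behavior through Q2.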

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Short push, long push for sequential operation of multiple power supplies appeared first on EDN.

Why power delivery is becoming the limiting factor for AI

Tue, 02/03/2026 - 11:50

The sheer amount of power needed to support the expansion of artificial intelligence (AI) is unprecedented. Goldman Sachs Research suggests that AI alone will drive a 165% increase in data center power demand by 2030. While power demands continue to escalate, delivering power to next-generation AI processors is becoming more difficult.

Today, designers are scaling AI accelerators faster than the power systems that support them. Each new processor generation increases compute density and current demand while decreasing rail voltages and tolerances.

The net result? Power delivery architectures from even five years ago are quickly becoming antiquated. Solutions that once scaled predictably with CPUs and early GPUs are now reaching their physical limits and cannot sustain the industry’s roadmap.

If the industry wants to keep up with the exploding demand for AI, the only way forward is to completely reconsider how we architect power delivery systems.

Conventional lateral power architectures break down

Most AI platforms today still rely on lateral power delivery schemes where designers place power stages at the periphery of the processor and route current across the PCB to reach the load. At modest current levels, this approach works well. At the thousands of amps characteristic of AI workloads, it does not.

As engineers push more current through longer copper traces, distribution losses rise sharply. PCB resistance does not scale down fast enough to offset the increase. Designers therefore lose power to I²R heating before energy ever reaches the die, which forces higher input power and complicates thermal management (Figure 1). As current demands continue to grow, this challenge only compounds.

Figure 1 Conventional lateral power delivery architectures are wasteful of power and area. Source: Empower Semiconductor
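To make the scaling concrete, here is a toy I²R calculation; the values are assumptions for the example, not figures from the article:

```python
# Illustrative I^2*R loss in a lateral power path; all values are
# assumptions for the example, not figures from the article.
i_load = 1000.0   # A, order of magnitude for a modern AI accelerator rail
r_path = 100e-6   # ohm, 0.1 mOhm of plane/trace resistance
p_loss = i_load**2 * r_path
print(p_loss)     # 100 W dissipated in copper before energy reaches the die

# Doubling the current quadruples the loss; shortening the path helps linearly.
print((2 * i_load)**2 * r_path)  # 400 W
```

The quadratic dependence on current is why architectures that scaled fine for earlier CPUs break down at AI-class currents.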

Switching speed exacerbates the problem. Conventional regulators operate in the hundreds of kilohertz range, which requires engineers to use large inductors and bulky power stages. While these components are necessary for reliable operation, they impose placement constraints that keep conversion circuitry far from the processor.

Then, to maintain voltage stability during fast load steps, designers must surround the die with dense capacitor networks that occupy the closest real estate to the power ingress point to the processor: the space directly underneath it on the backside of the board. These constraints lock engineers into architectures that scale inadequately in size, efficiency, and layout flexibility.

Bandwidth, not efficiency, sets the ceiling

Engineers often frame power delivery challenges around efficiency. But, in AI systems, control bandwidth is starting to define the real limit.

When a regulator cannot respond fast enough to sudden load changes, voltage droop follows. To ensure reliable performance, designers raise the rail voltage so that the anticipated droop does not cause unreliable operation. That margin preserves performance but continuously wastes power and erodes thermal headroom that could otherwise support higher compute throughput.

Capacitors act as a band-aid rather than a fix. They serve as local energy reservoirs that mitigate the slow regulator response, but at the cost of space and parasitic complexity. As AI workloads become more dynamic and burst-driven, this trade-off becomes harder to justify, because enormous amounts of capacitance (often tens of millifarads) are required.
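The capacitance magnitude follows from how long the capacitors must hold up the rail while the regulator catches up. A rough sketch with assumed, illustrative numbers:

```python
# Bulk capacitance needed to ride out a load step until the regulator
# responds: C = I * dt / dV. All numbers are illustrative assumptions.
i_step = 500.0   # A, sudden load increase from a bursty AI workload
t_resp = 5e-6    # s, response time of a ~100-kHz-bandwidth regulator
dv_max = 0.05    # V, allowed droop on a low-voltage core rail
c_req = i_step * t_resp / dv_max
print(round(c_req, 6))  # 0.05 F = 50 mF, i.e., "tens of mF"

# A regulator with 100x more control bandwidth needs ~100x less capacitance.
print(round(i_step * (t_resp / 100) / dv_max, 8))  # 0.0005 F = 0.5 mF
```

The second line is the whole argument of this section in one expression: raising control bandwidth, not efficiency, is what shrinks the capacitor problem.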

Higher control bandwidth changes the relationship and addresses the root cause. Faster switching allows designers to simultaneously shrink inductors, reduce capacitor dependence, and tighten voltage regulation. At that point, engineers can stop treating power delivery as a static energy problem and start treating it as a high-speed control problem closely tied to signal integrity.

High-frequency conversion reshapes power architecture

Once designers push switching frequencies into the tens or hundreds of megahertz, the geometry of power delivery changes.

For starters, magnetic components shrink dramatically, to the point where engineers can integrate inductors directly into the package or substrate. Power stages that used to be bulky can now fit into ultra-thin profiles only a few hundred microns (µm) thick.
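The shrinkage follows directly from the buck-converter ripple relation, L = V_out(1 − D)/(f_sw · ΔI): for the same output voltage and ripple-current target, required inductance falls in inverse proportion to switching frequency. A sketch with assumed example values (12-V input, 0.8-V output, 10-A ripple per phase are illustrative, not from the article):

```python
# Buck-converter inductor sizing: L = V_out * (1 - D) / (f_sw * dI).
# Input/output voltages and ripple target below are assumed examples.
def inductance(v_out, v_in, f_sw, di_ripple):
    d = v_out / v_in                      # duty cycle of an ideal buck
    return v_out * (1 - d) / (f_sw * di_ripple)

L_slow = inductance(v_out=0.8, v_in=12.0, f_sw=500e3, di_ripple=10.0)
L_fast = inductance(v_out=0.8, v_in=12.0, f_sw=50e6, di_ripple=10.0)
print(f"500 kHz: {L_slow * 1e9:.0f} nH   50 MHz: {L_fast * 1e9:.2f} nH "
      f"({L_slow / L_fast:.0f}x smaller)")
```

Raising the frequency by 100x cuts the inductance by 100x, taking the inductor from a discrete board-level part to something small enough to embed in the package or substrate.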

Figure 2 An ultra-high frequency IVR-based PDN results in a near elimination of traditional PCB level bulk capacitors. Source: Empower Semiconductor

At the same time, higher switching frequencies mean control loops can react orders of magnitude faster, achieving nanosecond-scale response times. With such a fast transient response, high-frequency conversion completely removes the need for external capacitor banks, freeing up a significant area on the backside of the board.
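The nanosecond claim can be sanity-checked with a common rule of thumb (an assumption on my part, not stated in the article): a regulator's control-loop crossover frequency is typically limited to about one-tenth of its switching frequency, and the small-signal response time is on the order of 1/(2π·f_c).

```python
# Rule-of-thumb loop-response estimate (assumed: crossover ~= f_sw / 10).
import math

def response_time(f_sw, crossover_fraction=0.1):
    f_c = f_sw * crossover_fraction       # achievable loop crossover, Hz
    return 1.0 / (2 * math.pi * f_c)      # first-order response time, s

t_slow = response_time(500e3)   # conventional board-level regulator
t_fast = response_time(100e6)   # ultra-high-frequency IVR
print(f"500 kHz loop: ~{t_slow * 1e6:.1f} us   "
      f"100 MHz loop: ~{t_fast * 1e9:.0f} ns")
```

Under this assumption, a 500-kHz regulator responds in microseconds while a 100-MHz converter responds in roughly 16 ns, consistent with the nanosecond-scale figure above.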

Together, these space-saving changes make entirely new architectures possible. With ultra-thin power stages and dramatically reduced peripheral circuitry, engineers no longer need to place power stages beside the processor. Instead, for the first time, they can place them directly underneath it.

Vertical power delivery and system-level impacts

By placing power stages directly beneath the processor, engineers can achieve vertical power-delivery (VPD) architectures with unprecedented technical and economic benefits.

First, VPD shortens the power path, so high current only travels millimeters to reach the load (Figure 3). As power delivery distance drops, parasitic distribution losses fall sharply, often by as much as 3-5x. Lower loss reduces waste heat, which expands the usable thermal envelope of the processor and lowers the burden placed on heatsinks, cold plates, and facility-level cooling infrastructure.
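Because distribution loss is I²R and plane resistance scales with path length, shortening the path reduces loss roughly in proportion. A sketch with assumed numbers (the 1000-A load, per-millimeter resistance, and path lengths are illustrative, chosen to land in the 3-5x range the article cites):

```python
# I^2 * R distribution-loss comparison, lateral vs. vertical delivery.
# Load current, plane resistance, and path lengths are all assumptions.
I_load = 1000.0     # current delivered to the processor, amps (assumed)
R_per_mm = 2e-6     # effective plane resistance, ohms per mm (assumed)

losses = {}
for name, length_mm in [("lateral", 12.0), ("vertical", 3.0)]:
    R_path = R_per_mm * length_mm
    losses[name] = I_load ** 2 * R_path
    print(f"{name}: {losses[name]:.0f} W lost in the power path")

print(f"reduction: {losses['lateral'] / losses['vertical']:.0f}x")
```

Shortening an assumed 12-mm lateral path to a 3-mm vertical one cuts the path loss 4x in this sketch; at kiloamp currents, that difference is tens of watts of heat removed from the board.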

Figure 3 Vertical power delivery unlocks more space and power-efficient power architecture. Source: Empower Semiconductor

At the same time, eliminating the large capacitor banks and relocating complete power stages into their place frees topside board area that designers can repurpose for memory, interconnect, or additional compute resources, thereby increasing performance.

Higher functional density lets engineers extract more compute from the same board footprint, which improves silicon utilization and system-level return on hardware investment. Meanwhile, layout density improves, routing complexity drops, and tighter voltage regulation is achievable.

These combined effects translate directly into usable performance and lower operating cost, or simply put, higher performance-per-watt. Engineers can recover headroom that lateral architectures consumed through distribution loss, voltage margining, and cooling overhead. At data-center scale, even incremental gains compound across thousands of processors to save megawatts of power and maximize compute output per rack, per watt, and per dollar.

Hope for the next generation of AI infrastructure

AI roadmaps point toward denser packaging, chiplet-based architectures, and increasing current density. To reach this future, power delivery needs to scale along the same curve as compute.

Architectures built around slow, board-level regulators will struggle to keep up as passive networks grow larger and parasitics dominate behavior. Instead, the future will depend on high-frequency, vertical power-delivery solutions.

Mukund Krishna is senior manager for product marketing at Empower Semiconductor.

Special Section: AI Design

The post Why power delivery is becoming the limiting factor for AI appeared first on EDN.
