EDN Network
Voice of the Engineer

VNAs perform production test up to 9 GHz

Thu, 03/05/2026 - 19:06

With typical measurement speeds of 25 µs/point, Copper Mountain’s three SC series VNAs enable efficient testing in both R&D and manufacturing environments. The SC0402, SC0602, and SC0902 two-port analyzers cover a common frequency start of 9 kHz, with upper ranges of 4.5 GHz, 6.5 GHz, and 9 GHz, respectively.

These instruments offer a typical dynamic range of 130 dB (10 Hz IF BW) for precise characterization of RF components and complex systems. Output power can be adjusted from -50 dBm to +5 dBm, with up to 500,001 measurement points/sweep. Measured parameters include S11, S21, S12, and S22.

Standard software capabilities, available without a paid license, include linear and logarithmic sweeps, power sweeps, and time-domain conversion with gating. Additional functions include S-parameter embedding and de-embedding, limit testing, frequency offset, and vector mixer calibration.

Automation is supported through LabVIEW, Python, MATLAB, .NET, and other programming environments, allowing up to 16 independent channels with 16 traces/channel. A manufacturing test plug-in is available as an add-on to integrate the VNA software into existing automated manufacturing and QA processes.
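As an illustration of what that automation can look like, here is a minimal Python sketch. The SCPI command names and the TCP port are assumptions based on common VNA conventions, not taken from Copper Mountain's documentation; check the instrument's programming manual before use.

```python
import socket

def sweep_commands(f_start_hz, f_stop_hz, points):
    """Build a minimal command list for a linear S21 sweep.
    The SCPI tree below follows common conventions and is an
    assumption, not Copper Mountain's documented command set."""
    return [
        f"SENS:FREQ:STAR {f_start_hz}",
        f"SENS:FREQ:STOP {f_stop_hz}",
        f"SENS:SWE:POIN {points}",
        "CALC:PAR:DEF S21",
        "TRIG:SING",
    ]

def send_all(host, port, commands, timeout=5.0):
    """Push newline-terminated commands to the VNA software's
    (assumed) TCP command socket."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        for cmd in commands:
            s.sendall((cmd + "\n").encode())
```

A full-span sweep on the SC0902 would then be something like `send_all(host, port, sweep_commands(9_000, 9_000_000_000, 501))`, with the host and port taken from the VNA software's settings.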

The SC series VNAs carry MSRPs of $13,995 (SC0402), $15,995 (SC0602), and $17,995 (SC0902).

Copper Mountain Technologies 

The post VNAs perform production test up to 9 GHz appeared first on EDN.

MCU brings USB-C power to embedded devices

Thu, 03/05/2026 - 19:05

Infineon’s EZ-PD PMG1-B2 MCU integrates a single-port USB Type-C PD controller with a 55-V buck-boost controller for charging 2- to 12-cell Li-ion battery packs. Compliant with the latest USB Type-C and PD specifications, the device accepts an input voltage range of 4.5 V to 55 V with switching frequencies programmable from 200 kHz to 700 kHz.
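A quick sanity check shows why a 55-V buck-boost stage spans the full 2- to 12-cell range. The per-cell voltage limits below are typical Li-ion figures assumed for illustration, not Infineon's specifications.

```python
CELL_MIN_V, CELL_MAX_V = 2.5, 4.2  # assumed typical Li-ion per-cell limits

def pack_voltage_range(cells):
    """Return the (min, max) voltage of a series string of `cells` cells."""
    return cells * CELL_MIN_V, cells * CELL_MAX_V

# A 2S pack spans roughly 5.0 V to 8.4 V; a 12S pack tops out near
# 50.4 V, comfortably inside the controller's 55-V rating. Because a
# 2S pack can sit below, at, or above the USB-C source voltage, a
# buck-boost (rather than buck-only) stage is required.
```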

The MCU targets USB-C-powered embedded devices in consumer, industrial, and communications markets that can take advantage of its integrated functions. Typical applications include cordless power and gardening tools, vacuum cleaners, kitchen appliances, e-bikes, drones, and robots.

The EZ-PD PMG1-B2 features a 32-bit Arm Cortex-M0 processor with 128 KB of flash and 8 KB of SRAM for customizable embedded applications. It integrates analog and digital peripherals—including ADCs, PWMs, UART/I2C/SPI interfaces, and timers—reducing PCB space and BOM. A comprehensive SDK and software suite simplify development and system design.

Production of the EZ-PD PMG1-B2 is expected to begin in the second quarter of 2026. Samples, technical documentation, and evaluation boards are available upon request.

EZ-PD PMG1-B2 product page 

Infineon Technologies 


Passive limiter shields electronics from RF threats

Thu, 03/05/2026 - 19:05

Teledyne Microwave UK’s B3LT98026 is a passive wideband limiter designed to protect sensitive receiver front ends in defense and military communication systems. It operates from 0.1 GHz to 20 GHz and withstands up to 10 W peak input power under defined pulse width and duty cycle conditions.

The device enhances the survivability of Radar Electronic Support Measures (R-ESM) and Electronic Warfare (EW) systems operating in complex threat environments. It provides continuous, always-on protection against high-power RF and emerging Directed Energy Weapons (DEWs).

Across the operating band, the limiter maintains a maximum insertion loss/noise figure of 2.0 dB and a maximum input/output VSWR of 1.5:1. A fast 40-ns recovery time enables rapid return to nominal sensitivity following high-power events. The device operates over a temperature range of −20°C to +85°C, supporting deployment in demanding environments.
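For context, a 1.5:1 VSWR corresponds to roughly 14 dB of return loss; the conversion is the standard one via the reflection coefficient:

```python
import math

def vswr_to_return_loss_db(vswr):
    """Convert VSWR to return loss in dB using the reflection
    coefficient magnitude |Gamma| = (VSWR - 1) / (VSWR + 1)."""
    gamma = (vswr - 1.0) / (vswr + 1.0)
    return -20.0 * math.log10(gamma)

# vswr_to_return_loss_db(1.5) gives about 13.98 dB, i.e., at most
# 4% of the incident power is reflected at either port.
```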

The compact SMA-based housing supports straightforward integration into existing architectures without requiring system redesign. The B3LT98026 is also compatible with Teledyne’s Phobos mast top unit and can accommodate additional RF elements, such as filters, when required.

The B3LT98026 is now available for evaluation in defense and EW systems.

B3LT98026 product page 

Teledyne Microwave UK 


Nordic debuts multiple cellular IoT products

Thu, 03/05/2026 - 19:05

Nordic Semiconductor expands its ultra-low-power cellular IoT portfolio with Cat 1 bis, satellite NTN, and advanced LTE-M/NB-IoT with edge AI. Leveraging the proven nRF91 series, the nRF92 and nRF93 deliver a scalable, secure platform for global connectivity.

The nRF92 LTE-M/NB-IoT and satellite NTN series introduces the company’s smallest, most highly integrated, and power-efficient cellular solution. It combines a high-performance application MCU with Axon neural processing units, a multi-constellation GNSS receiver, Wi-Fi positioning, and sensor coprocessing. Lead customer sampling is underway, with general availability expected in early 2027.

The nRF93M1 is an LTE Cat 1 bis cellular IoT module with integrated MCU, LTE modem, GNSS receiver, and Wi-Fi positioning. It supports up to 10 Mbps downlink and 5 Mbps uplink, offers global LTE coverage, and is designed for low-power, compact applications. The module is compatible with nRF Cloud for device management, firmware updates, and location services. Lead customers are currently developing products with the nRF93M1, with general availability starting mid-2026.

Additionally, Nordic has enhanced the nRF91 LTE-M/NB-IoT series with 3GPP-compliant GEO and LEO satellite NTN connectivity and sub-GHz fallback to maintain connectivity when public networks are unavailable. The company also introduced the nRF91M1 module, a compact Smart Modem that simplifies adding cellular connectivity to host–modem designs.

Nordic Semiconductor 


EV system design from components to modules to software

Thu, 03/05/2026 - 15:01

Electric vehicle (EV) design at the system level is a rapidly evolving landscape encompassing components, hardware modules, and software platforms. So, on the first day of Automotive Tech Forum 2026, which was dedicated to EV designs, a panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” took a deep dive into the system-level intricacies of EV designs.

Carsten Himmele, marketing manager for automotive at Allegro MicroSystems, highlighted the growing presence of silicon carbide (SiC) in traction inverters due to its ability to deliver higher bandwidth and efficiency. However, while talking about motor control for EV traction, he also mentioned challenges in operating in harsher electrical environments.

“SiC brings in higher bandwidth for motor control, but it also makes the electrical environment somewhat harsher,” he said. Himmele added that advanced phase-current sensing and inductive rotor-position sensing are essential for overcoming these challenges. “Moreover, system-grade building blocks reduce the number of external components and improve design efficiency,” he concluded.

That’s where gallium nitride (GaN) offers key advantages, said Alex Lidow, CEO and co-founder of Efficient Power Conversion (EPC). “GaN is smaller, more efficient, and more rugged compared to silicon and SiC,” he said. “It’s particularly effective in 48-V systems, which complement the emerging 800-V architectures.”

Lidow added that while EVs with 48-V systems are now leading the way, GaN devices are 5 to 7 times more efficient than their MOSFET ancestors. “GaN is powering onboard chargers, DC/DC converters, battery cooling pumps, steering systems, and infotainment.”

Rohan Samsi, VP of GaN Business Division at Renesas, also talked about the paradigm shift GaN brings to power converters, enabling simplified single-stage designs. “The bidirectional switch allows you to take out something that was a multi-stage converter and replace it with a single stage.” To achieve integration synergy, Samsi emphasized, GaN’s strengths in current sensing, temperature sensing, and gate drive enable holistic EV solutions.

Finally, Kerry Grand, marketing manager for Simulink Automotive at MathWorks, turned the discussion toward the software aspects of design. He was asked to brief the panel on the latest developments in EV traction from a system-integration standpoint, and on what hardware testing uncovers about the present and future of EV drivetrains.

Grand began with an insight into EV system-level design through simulation and model-based design. Then he identified enduring challenges in EV system design, including high-voltage isolation, battery life optimization, and thermal management. “Simulating detailed thermal systems offers automotive OEMs the ability to trade off temperature limits without compromising system performance.”

At a time when EV design building blocks like traction inverters and battery management systems (BMS) are continually adding functionality, system-level challenges are a critical area to watch. The panel discussion at Automotive Tech Forum 2026 provides a glimpse of the design challenges and viable solutions in this realm.

You can watch this session along with all sessions from the Automotive Tech Forum 2026 virtual event on demand at www.automotiveforum.eetimes.com.

Related Content


Cardiac monitors: Inconspicuous, robust data collectors

Thu, 03/05/2026 - 15:00

As follow-up to last month’s narrative of a cardiac abnormality thankfully detected by wearable devices, this engineer details the monitoring system he subsequently donned for a month.

Two-plus years ago, my contributor-colleague John Dunn described his most recent experience with a wearable cardiac monitor. And, as any of you who read one of my last-month blog posts already know, I more recently followed in his footsteps. I don’t yet know the outcome of my heart health study; my follow-up appointment with the cardiologist is a week away as I type these words. Regardless, I thought you might still find it interesting to learn about the gear I toted around, stuck to my chest (and in my pocket) for 30 days, and my experiences using it.

The system I used was Philips’ MCOT (Mobile Cardiac Telemetry), specifically its “patch” variant:

Here’s an overview video; others, plus documentation, are at the product support page:

I took several “selfies” of the sensor in place on my chest but ultimately decided to save you all the abject horror of seeing any of them. Instead, I’ll stick with these stock images:

My initial scheduled meeting with the cardiologist took place on December 12, 3+ weeks after our “introduction” at the emergency room. I’d been on both beta blockers (to regulate my heartbeat) and blood thinners (in case my prior irregular rhythm had resulted in the formation of a clot) since my initial visit to the hospital in mid-November. The cardiologist ordered the monitor, which arrived a bit more than a week later; I began wearing it the day after Christmas.

Here’s the box that the system comes in:

Open sesame:

The first thing I saw was the initial sensor patch, along with the return shipping packaging bag. Below it was the template I used for proper placement each time I stuck a patch on my chest:

The bulk of the contents were contained in two inner boxes, the first labeled “Getting Started” and the second referred to as “Monitoring”. Inside the first:

were several primary items:

along with installation and operation overview instructions:

The monitoring device, both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

whose dimensions and Android operating system foundation, along with the legacy presence of an analog headphone jack alongside the USB-C port:

and a multi-camera rear array in a specific arrangement:

suggest it to be a custom-software derivative of Samsung’s Galaxy A52 smartphone, introduced in March 2021:

It came with the translucent green case pre-installed, by the way. Here are some other overview images of the smartphone…err…monitoring device (its left side was unmemorable so I didn’t bother):

Next up was a small scrub pad used to further prepare my chest for patch application, after initial hair shaving. And, of course, there was the sensor itself:

Its edge arrived already abraded; I’m guessing that it had already been popped open, with its rechargeable battery subsequently replaced, at least once prior to its arrival at my residence:

Now for box #2:

More instructions, of course:

along with more patches, a more detailed instruction booklet, and the dual-charging unit:

The AC/DC adapter has two USB-A outputs:

which can be used in parallel:

One, connected to a red USB-A to USB-C cable, is used for daily recharge of the “monitoring device” (smartphone). The other (black, this time) cable terminates in a charging dock for the sensor, which I used every five days in conjunction with (and in-between) the patch removal and replacement steps:

Here’s how the initial “monitoring device” bootup went (since this was a custom Android-plus-app build, I wasn’t able to grab screenshots directly from the smartphone, perhaps obviously):

After initial charging of both the monitoring device and sensor, I continued the setup process:

Here’s what a patch looks like when you first take it out of the package; top:

and bottom:

Pressing down on the sensor while aligned with the patch base snaps it into place:

A briefly illuminated LED subsequently indicates that the sensor is correctly installed, at which point the monitoring device is able to “see” it (broadcasting over Bluetooth, presumably Low Energy):

At this point, you can peel away the protective clear plastic cover over the back side adhesive:

All that’s left is to press it into place on your chest…and then peel off the existing patch, pop out and recharge the sensor and redo the installation process five days later:

Lather, rinse, and repeat until the total 30-day cycle is over, which the system thoughtfully tracks on your behalf. Then ship it all back to the manufacturer.

The monitoring device, which regularly receives data transmissions from the sensor, periodically then uploads the data to the “cloud” server over an LTE or EV-DO cellular data connection.

If you forget to keep the monitoring device close by, data won’t be lost, at least for a while. There’s an unknown amount of memory onboard the sensor (yes, I searched for a teardown, alas unsuccessfully), albeit presumably not the full 2 GBytes allocated to this alternative device designed solely for local data logging. But the monitoring device will still alert you (both visually and audibly) to the lost wireless (again, presumably Bluetooth’s LE variant) connection:

You’ll also be alerted if the sensor’s integrated battery drops to a low level and recharge is necessary (I proactively did this every five days, as previously noted, since I’d received six total patches):

If you feel like something’s amiss with your “ticker” (heart pounding, fatigue, etc.) you can tap on the icon at the center of the display and the monitoring device will send an alert “flag” for subsequent correlation with the potential cardiac arrythmia data collected at that same time:

And in closing, here are some shots of other monitoring device display screens that I captured:

By the time you see this, assuming I don’t need to reschedule for some reason, I will have met with my cardiologist and gotten the (hopefully positive) results. I’ll follow up in the comments. And please also share your thoughts there! Thanks as always for reading.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 


MWC 2026: Apple, Google, Samsung and Other Contending Contestants

Wed, 03/04/2026 - 21:46

Ever imagine that memory supply (translating to system capacity and price) concerns would ever dominate multiple companies’ announcements? “And so it goes”, to quote Kurt Vonnegut.

The Mobile World Congress (MWC) show, held each year in Barcelona, Spain (one of my favorite cities in the world) and in progress as I write these words, doesn’t have quite the same cachet as it once did. Two primary reasons explain the decline: the cellphone market has notably consolidated, and it’s increasingly common for the remaining market participants to announce new products at their own events.

That said, these go-it-alone suppliers still often chronologically cluster their announcements at or near the MWC timeframe. Plus, the conference organizers have broadened the scope of the show beyond just cellphones (nowadays: smartphones) to also encompass other mobile devices such as tablets and laptop computers…although classifying a static desktop-based, AC-powered robot as “mobile” is a stretch, no matter how dynamic its joints and display may be:

Apple, Google, and Samsung were among the companies who made notable(-ish) news over the past week. I’ll cover them chronologically in the following sections.

Mountain View gets the jump on Cupertino (once again)

Last spring, Google unveiled its then-latest cost-focused phone, the Pixel 9a, a few weeks after Apple had rolled out its initial (albeit iPhone 16-numbered) “e” rebrand of prior “SE” multi-gen economical-tuned offerings. I subsequently bought a Pixel 9a for myself, replacing (and leveraging a then-lucrative trade-in value promotion for) my prior backup handset, a Pixel 6a.

That said, Google had already flip-flopped prior longstanding fast-follower precedence with the late summer 2024 launch of the mainstream Pixel 9 and high-end Pixel 9 Pro, which predated their iPhone 16 competitors by a month (versus the historical cadence of being a month belated). The same thing happened last year. And now, Google has extended its “eager beaver” behavior to the entry-level end of its smartphone product suite with the Pixel 10a, which the company sneak-peeked in early February, with a full unveil two weeks later complete with a pre-order opportunity, and shipments starting later this week.

Good news: skyrocketing DRAM and NAND flash memory prices haven’t led to handset price increases (or, alternatively, either integrated memory capacity decreases or the culling of lower-capacity product variants); the Pixel 10a price ($499) is unchanged from its Pixel 9a predecessor. Bad news (albeit good news for me, no longer FOMO-fraught): unless you’re insistent on a completely flat backside absent any camera “bumps”, the design is largely unchanged as well. Same chipset. Same memory generations and speed bins. The display is modestly enhanced—peak brightness, bezel thickness, and cover glass shock resistance—as are the wired and wireless charging power, therefore speeds, but that’s basically it. Oh…and still no Qi magnet inclusion. Hold that thought.

A higher-end attack

A week later, and a week ago, Samsung rolled out its Galaxy S26 product line, which competes against Apple’s iPhone 17 series launched last September, along with new-generation earbuds (but no new smart ring; was Oura’s legal-pressure campaign effective?):

Here again, not much has changed from the year-prior Galaxy S25 predecessors. The “adder” that seemingly got all the media attention, Privacy Display, derives from an OLED display tweak and is only available on the high-end Ultra variant. Unlike Google, Samsung is generationally raising prices, predominantly blaming memory cost increases as the root cause, and is also not offering comparable low-end storage capacity options as with S25-series predecessors. The memory blame assignment is particularly ironic in this case because the Samsung parent company also has a semiconductor (memory, specifically) division under its corporate umbrella.

That said, as my colleague Majeed recently wrote about at length and I’d also noted in my earlier 2026-forecast coverage, HBM memory is capturing the lion’s share of customer demand (and therefore supplier attention) right now, thanks to AI, versus the DDR4- and DDR5-generation DRAM technologies found in computers, smartphones, tablets, and the like. Speaking of AI, Samsung Mobile (like Google, and in partnership with Google, along with Perplexity) is betting on it as a trend-setting differentiator from Apple’s underperforming alternative, no matter that it ended up not being a broadly effective sales pitch motivator last year. That Apple has now partnered with Google, too, must have been a hard pill for Cupertino to swallow. Oh, and by the way, once again, no Qi magnets, although the argument is pretty persuasive, at least to me. Paraphrasing: “Why bother doing so, bumping up the bill-of-materials cost in the process, since most everybody also uses phone cases anyway, and they already come with magnets?”

Not a one-trick pony

All of which leads us to Apple itself, which yesterday (as I’m writing these words on Tuesday afternoon, March 3) released its latest entry-level smartphone, the iPhone 17e:

Minutia first: a year ago, I gave the company grief for busting through the $500 price barrier while, as the original MagSafe innovator, bafflingly leaving magnets off its wireless charging implementation. First World problem solved: unlike with Google and Samsung, as earlier mentioned, they’re there in the iPhone 17e. We can all now once again sleep soundly.

Now, for memory, specifically (in this case) flash memory. Like Samsung but unlike Google, Apple lopped the prior-generation 128 GByte storage capacity option off the low end of the product suite. But unlike both Samsung and Google, the capacity increase comes with no associated price increase; Apple has stuck with $599 for the now-256 GByte variant this time. The SoC is also upgraded, from the A18 to A19 (the same generation as in the iPhone 17), albeit with only 4 GPU cores (versus 5 with the iPhone 17), as is the cellular modem (the newer C1X). And a few other tweaks: a third color option (pink) and updated Ceramic Shield 2 front glass protection.

Since, as I mentioned at the beginning, MWC has expanded beyond phones into tablets (among other things), I’ll also lump into today’s coverage the latest M4 SoC-based generation of the iPad Air, which Apple also announced yesterday.

As before, it comes in both 11” and 13” variants; the N1 networking and C1X cellular chips are also on board for the ride this time. Echoing back to my earlier highlight of the iPhone 17-vs-17e A19 SoC core-count discrepancy, the version of the M4 SoC in the new iPad Air is also downbinned from the ones in the various versions of the M4 iPad Pro, albeit this time from both CPU (both performance and efficiency, in fact) and GPU core-count standpoints, with requisite benchmarking-results impacts. And once again, memory is the most notable news (IMHO, at least) with these devices. But this time, DRAM is in the spotlight. Likely with locally stored AI model sizes in mind, the low-end M4 iPad Air variants deliver a 50% capacity increase (from 8 GBytes to 12 GBytes), still with no corresponding price increase…

…which circles us back to my memory-related comments that kicked off this piece. If volatile (DRAM) and nonvolatile (flash memory) supplies are constrained, and prices are therefore skyrocketing, why is Google able to hold steady on its device pricing, and Apple to go even further, holding prices while simultaneously boosting on-device capacities? Right now, I suspect, both companies’ sizes have enabled them to negotiate favorable pricing and volume contracts with memory suppliers. And further to the “sizes” point, even after those contracts time out, I suspect that both companies will be willing (albeit not necessarily delighted) to endure short-term profit margin pain in order to squeeze smaller, less profitable competitors out of the long-term market.

More to come

When I saw yesterday that Apple had released new public beta versions of its next operating system updates for phones and tablets, but not for computers, I suspected that this delay was only temporary and related to new computers planned for announcement today. And right on schedule, they (therefore it) came this morning; updated versions of the 14” and 16” MacBook Pro, based on the new Pro and Max variants of last fall’s M5 SoC (now also inside the MacBook Air), along with a duet of new displays.

I doubt we’re done; a new low-end MacBook (likely named the Neo) based on the iPhone 16 Pro’s A18 Pro SoC is rumored to still be in the queue for Apple’s “big week ahead”, for example, and I can’t help but wonder if we’ll also get an M5-based Mac mini (last updated in November 2024). Stay tuned for more coverage to come from yours truly, hopefully later this week. And until then, let me know your so-far thoughts in the comments!

P.S. Two more MWC-related tidbits: Qualcomm has a promising next-generation SoC for smart watches and other wearables on the way. And speaking of Qualcomm, ready or not, 6G is coming.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content


Stretching a bit

Wed, 03/04/2026 - 15:00

I love Design Ideas (DIs) with a backstory.  Recently, frequent DI contributor Jayapal Ramalingam published an engaging tale of engineering ingenuity coping with a design feature requirement added unexpectedly and very (very!) late in product development: “Using a single MCU port pin to drive a multi-digit display.”

Jayapal writes, “Imagine a situation where you have only one port line left out, and you are suddenly required to add a four-digit display.”

Yikes!  Add a looming delivery deadline to build suspense, and this becomes a classic nightmare scenario. It could easily develop, from an engineering standpoint, into a horror story straight out of the pages of Stephen King. Well, okay. Almost.

Wow the engineering world with your unique design: Design Ideas Submission Guide

But in a clever plot twist, engineer Jayapal shows how a bit (no pun!) of ingenuity turns this tale of terror into an opportunity for some cool circuit design. In his DI, different durations of software-generated pulses on that lonely port line become the control signals necessary for running the newly needed decimal display.

Crisis and calamity averted.

So I wondered how the same basic plot could make a basis for a more generalized storyline. In this version, not just four digits of numerical binary-coded decimal (BCD), but N bits of arbitrary parallel binary outputs would be driven in a similar solitary serial fashion. And all this would be achieved by the same singleton GPIO port bit. Figure 1 shows how the story takes shape.

Figure 1 A lonely GPIO bit loads a lengthy serial string of parallel registers. 

Incoming pulses of variable length on GPIO are buffered by noninverting gate U1a and drive three sets of inputs. 

  1. the timing circuit around U1b (the 400-µs R1C3 discriminator that resolves the SER input as a zero or one),
  2. the timing circuit around U1c/U1d (the 2.4-ms R4C2 AC-coupled Schmitt trigger that generates the parallel-load clock RCLK), and
  3. SRCLK, the shift registers’ serial clock.

As illustrated in Figure 2, the interpulse (idle) state of the GPIO is high = 1. 

Figure 2 GPIO pulse timing.

A serial bit-transfer pulse starts when the GPIO goes low (= 0), releasing the timing RCs. Whether the pulse shifts in a 0 or a 1 depends on its duration. If it is shorter than 100 μs (T0), the R1C3 time constant will still hold SER low when the rising edge of SRCLK clocks the serial registers, causing a 0 to be shifted in. If it is longer than 400 μs (T1), the opposite occurs, and the shift register gets a 1.

In this way, a data rate between 2 kbps and 10 kbps (depending on the relative frequencies of ones and zeros) can be maintained as long as the idle period between pulses remains less than 600 μs. Completion of data transfer is signaled by allowing GPIO to remain idle for > TR = 3.5 ms.  This allows R4C2 to time out and a transfer pulse to occur on RCLK, commanding a broadside parallel data transfer from the shift registers to the parallel output bits.
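As a rough software-side sketch of this encoding, the function below turns a bit list into the (level, duration) steps a firmware loop would drive onto the GPIO. The specific durations are illustrative values chosen inside the timing windows above, not figures from the DI.

```python
# Illustrative pulse timings (microseconds), chosen within the
# windows described above; actual firmware would pick its own values.
T0_US = 50       # low pulse < 100 us shifts in a 0
T1_US = 500      # low pulse > 400 us shifts in a 1
GAP_US = 300     # inter-pulse idle high, kept under 600 us
LATCH_US = 3500  # final idle > TR fires RCLK and latches the outputs

def encode_word(bits):
    """Return the (level, duration_us) steps that shift `bits` out on
    the single GPIO line and then latch them to the parallel outputs.
    Assumes at least one bit."""
    steps = []
    for b in bits:
        steps.append((0, T1_US if b else T0_US))  # low pulse encodes the bit
        steps.append((1, GAP_US))                 # idle high between pulses
    steps[-1] = (1, LATCH_US)                     # long final idle latches
    return steps
```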

Note that, going back to the original horror story, four BCD digits = 16 bits, two 8-bit shift registers, and 12 ms would be enough logic and time. I think that makes for a pretty good ending for a yarn about a far stretch of a single bit.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

Related Content


EV design: The truth about 400-V to 800-V battery transition

Wed, 03/04/2026 - 14:45

In electric vehicle (EV) designs, the shift from 400-V to 800-V battery systems is now a pressing issue. So, the panel discussion on the first day of Automotive Tech Forum 2026 was a good venue for a reality check on the future of 800-V EV architectures.

The panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” explored the latest in battery management system (BMS) designs, what battery modeling tells us about the design challenges as we move toward 800-V systems, and how design building blocks like motor control in EV traction are coping with this transition.

The panelists discussed how 800-V EV architectures could reshape vehicle power distribution. Jerry Shi, sector general manager for EV, HEV, and Powertrain at Texas Instruments, spoke about the emerging 800-V EV design landscape, specifically from a drivetrain standpoint. He also outlined critical design challenges and viable solutions in this design arena.

Carsten Himmele, marketing manager for Automotive at Allegro MicroSystems, cautioned about the industry-wide adoption of 800-V battery systems. “The 400-V battery systems will still dominate mainstream markets due to cost and complexity trade-offs.”

Rohan Samsi, VP of GaN Business Division at Renesas, echoed similar sentiments while envisioning a deeper adoption of 800-V architectures to address range anxiety and efficiency concerns. He acknowledged the challenges such as cost, complexity, and consumer preferences. “The trade-offs between 400-V and 800-V architectures relate to component complexity and service warranty costs.”

So, in the 400-V to 800-V transition, there was a consensus that 800-V systems offer advantages in fast charging and reduced weight. However, for now, panelists expect that 400-V systems will remain dominant in mainstream markets due to their affordability.

Related Content


Custom design PWM filters easily

Tue, 03/03/2026 - 15:00

It’s well known that the main job of a pulse width modulator’s filter is to limit the maximum peak-to-peak amplitude of the ripple induced at the PWM frequency, fPWM. It attenuates this ripple to a specified fraction, Frac, of the full-scale PWM output while passing PWMavg, the average value of the PWM signal.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Although the duty cycle can change instantaneously, the filter’s response to that change takes time to settle. It’s convenient to define the settling time Tfrac to be that after which the transient response remains within ± Frac of PWMavg. (After fully settling, the response variations will be from the ripple only and will remain within ± Frac/2 of PWMavg.) And it’s generally true that the more the ripple is attenuated, the larger Tfrac is. But for a given filter with two or more poles, there is an infinite number of combinations of component values that will limit the maximum ripple to Frac. (Think of the number of poles as being the number of capacitors in an R – C filter.) And yet the value of Tfrac is typically different for each combination. So we have a filter optimization problem: find the component value combination that minimizes Tfrac while satisfying the ripple requirement Frac.
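To make the optimization target concrete, here is a rough numerical sketch of the ripple fraction for a three-pole filter. The transfer-function form and the worst-case-fundamental approximation are my assumptions for illustration, not the DI's exact method.

```python
import math

def ripple_fraction(f_pwm, f_real, f0, q):
    """Estimate the peak-to-peak ripple fraction at the output of a
    third-order low-pass with one real pole at f_real and a complex
    pair (f0, q). Worst case is 50% duty cycle, where the PWM
    fundamental has a peak-to-peak amplitude of 4/pi of full scale;
    harmonics are neglected since the filter attenuates them far more."""
    s = 1j * f_pwm  # frequencies enter only as ratios, so Hz is fine
    h = 1.0 / ((1 + s / f_real) * (1 + s / (q * f0) + (s / f0) ** 2))
    return (4.0 / math.pi) * abs(h)
```

Sweeping q while adjusting f_real and f0 to hold `ripple_fraction` at the required Frac is one way to visualize the trade-off the spreadsheet solves analytically.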

I’ve addressed this issue before in a Design Idea (DI), but that procedure was complex, perhaps off-puttingly so, and not especially flexible. I’ve since revisited the problem and found a somewhat improved, analytically optimal solution. But that improvement alone does not justify a new DI.

So, why this new DI?

What I think does justify this is a spreadsheet that offers greater flexibility in terms of filter requirements and automates all the work for you. Download the files from https://github.com/Christopherrpaul/Customizable-PWM-Filter.

If you use OneDrive or something like it, install the files outside the OneDrive folder. Safely ensconced there, they’re invisible to OneDrive, which therefore can’t interfere with the spreadsheet’s queries of the local paths where certain files are stored.

Open the spreadsheet. In the following, the yellow-highlighted parameters here and on the spreadsheet are inputs to be supplied by the user; the green-highlighted ones are spreadsheet outputs. Tell it your PWM frequency, in Hz, specify the required value of Frac, and press the “Calculate” button.

The Visual Basic for Applications (VBA)-driven spreadsheet takes that information and determines the filter’s real pole and complex pole pair (the Q and ω0 of the latter) that yield the optimal, smallest Tfrac, which it also displays.

To produce an implementable filter, it then combines this information with the (default) values of the filter’s capacitors c1, c2, and c3. (You can change these and press Calculate again.) From all of this, it determines both the exact and the closest standard E96 values for the resistors r1, r2, and r3 needed to complete the filter. The filter itself is the third-order Sallen-Key low-pass depicted in the schematic portion of the spreadsheet screenshot in Figure 1.

Figure 1 A screenshot of the spreadsheet that runs the show. See the text.
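As an aside, the “closest standard E96 value” lookup that the spreadsheet performs for r1 through r3 is easy to reproduce. Here is a generic sketch (not the spreadsheet’s VBA; Python is used purely for illustration), relying on the fact that E96 values are 10^(i/96), i = 0 to 95, rounded to three significant figures and repeated every decade:

```python
import math

def nearest_e96(r):
    """Snap a resistance (ohms) to the closest standard E96 value.
    E96 values are 10**(i/96), i = 0..95, rounded to 3 significant
    figures, repeated every decade."""
    if r <= 0:
        raise ValueError("resistance must be positive")
    decade = math.floor(math.log10(r))
    candidates = []
    # Scan the neighboring decades so boundary values are handled.
    for d in (decade - 1, decade, decade + 1):
        for i in range(96):
            candidates.append(round(10 ** (i / 96), 2) * 10 ** d)
    # Closest in ratio (log) terms, matching how tolerances work.
    return min(candidates, key=lambda v: abs(math.log(v / r)))

print(nearest_e96(12345))  # snaps to the 12.4k E96 value
```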

And since we all like graphs, two have been provided. The one on top shows how ω0 and the real portion of all poles vary with Q. More importantly, it also shows that Tfrac generally gets worse (larger) as Q is increased (not surprising, given the concomitant increase in oscillatory amplitudes).

The other graph shows the decay with time of PWMavg minus the absolute value of the transient response, with the voltage displayed on a logarithmic scale. The bumps are evidence of a damped oscillatory behavior.

But as they say in the late-night TV commercials (or at least they used to), “But wait! There’s more!”

How do I know this thing works?

You might ask how you can confirm that this filter will perform as advertised. The answer is easy if you’ve installed LTspice on your computer and you tell the spreadsheet the path starting from the root directory to the LTspice.exe file. Mine’s in C:\Users\chris\AppData\Local\Programs\ADI\LTspice\LTspice.exe.

Don’t worry if you can’t see the entire entry in the Excel cell provided. (Note: given the ongoing changes in the LTspice 26.x.y releases, these files were developed for the stable and still widely used LTspice 17.1.15, which can still be downloaded and installed from https://ltspice.analog.com/software/LTspice64.exe. I haven’t checked whether the files work with the 26.x.y versions.)

Press the “LTspice: Exact…” button. It will automatically launch a simulation using the exact resistor values derived and plot the filter’s response to the two biggest transients: a “full” one from 0 to 100% duty cycle (no PWM ripple) and a “half” one from 0 to 50% duty cycle (maximum possible ripple). See Figure 2 for a sample LTspice run.

Figure 2 An LTspice run using Exact component values for a sample filter.

The responses have been offset to reach their final values at 0 V. Tfrac appears on the plot as a vertical line, along with two horizontal lines at ±Frac. You can zoom in to see that the value of Tfrac is indeed correct: the vertical line crosses a ±Frac line exactly where the full-transition response does. (The full transition always takes a little longer to settle than the half-step one.)

But alas and alack, this assumes perfect components with 0% tolerances. So the “LTspice Standard…” button launches a simulation of 100 Monte Carlo runs using the E96 resistor values, with capacitor and resistor tolerances of 1% each. (You can change all three of these defaults and re-run the simulation. In fact, it’s worth considering the overall reduction in settling time available from suitably chosen 0.1% resistors added in series with small 1% resistors to more closely approach the exactly calculated values. Better-tolerance capacitors would also help, but they tend to be prohibitively expensive.)

As you’ll see, non-zero tolerance variations lead to settling times longer than Tfrac. But by performing an extended number of Monte Carlo runs, you’ll be able to determine the time beyond which even filters made out of real-world components will have settled to Frac.

Filter design constraints

The real portions of all poles in the filter have been constrained to be identical. The reason for this is that these values control the decay rates of the half- and full-step transients, either of which could dictate the overall settling time. Given that the total ripple attenuation is the product of the real parts of both poles, if one were smaller than the other, it would extend the overall settling time beyond that achieved with identical poles. This constraint also simplifies the optimization problem in that there is only one real and one imaginary value of poles to consider, rather than one imaginary and two real values.

Calculated resistor values 

Depending on certain inputs to the spreadsheet, the derived values of the filter resistances might be smaller or larger than you’d like. In that case, the input values of the capacitors could be multiplied by a constant K of your choosing to obtain new resistor values divided by that K.

The spreadsheet’s default capacitor values are in the “Goldilocks” range: large enough that op amp input and PCB capacitances will affect them minimally, but small enough that NPO/COG type capacitors (whose stability with temperature and DC voltage is demanded in filter designs) are not prohibitively expensive. The ratios among the default capacitor values have been shown to consistently result in realizable filters. Feel free to experiment with other values and ratios, but be aware that it might not be possible to realize filters with those changes.
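The K-scaling trick mentioned above works because each R·C product, and hence every pole (and therefore Frac and Tfrac), is unchanged when capacitors are multiplied by K and resistors divided by K; only the impedance level moves. A one-pole sanity check with arbitrary example values:

```python
import math

# Impedance scaling: C -> C*K, R -> R/K leaves the pole frequency
# 1/(2*pi*R*C) untouched. Example values are arbitrary.
R, C, K = 47e3, 1e-9, 10.0
f_before = 1 / (2 * math.pi * R * C)
f_after = 1 / (2 * math.pi * (R / K) * (C * K))
print(f_before, f_after)  # equal to within floating-point rounding
```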

Filter drivers

Do not drive the filter from a microprocessor directly. Its non-PWM functions draw currents that lead to small voltage drops across the IC-to-package-pin bonding wires. These induce errors by preventing signals from getting close to the ground and the supply rail. Instead, buffer the microprocessor with dedicated SN74AC04 logic inverters, which will swing to the rails, since they have no other currents to deal with and their outputs are minimally loaded. For a reasonably accurate reference voltage supplying the SN74AC04, consider the REF35.

SN74AC04-induced errors

It’s been pointed out that all digital drivers have different logic-high and logic-low output resistances. These differences are sources of error that are worst at a 50% duty cycle. The part’s data sheet says that with a 3-V supply, the logic-high voltage drop under a 12-mA load over the industrial temperature range could be as high as 560 mV, implying an output resistance of roughly 45 ohms.

The logic-low resistance maximum is a bit better, but there is no spec for the difference between the two. The safe but admittedly ridiculous assumption is that the logic-low resistance is 0 ohms, leaving us with a 45-ohm difference. This can be mitigated by paralleling G gates to reduce the drive resistance by that factor, producing a difference of Rdiff = 45/G ohms.

Since no DC current can flow through the filter’s passive components, the fractional full-scale error at 50% is:

0.5 · r1 / (r1 + Rdiff) – 0.5 = –0.5 · Rdiff / (r1 + Rdiff)

For a b-bit PWM, you’d probably want the error to be less than half of one LSbit, or 2^-(b+1). So you’d require that r1 > Rdiff · 2^b.

Consider G = 5. For b = 8, r1 > 2300 ohms. For b = 12, it’s 37 kohms, and for b = 16, 590 kohms. But this brings up a second point: a large b means a relatively small fPWM and therefore a large TFrac. Fortunately, there’s a way around this.
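The sizing rule is quick to tabulate. This sketch simply packages the arithmetic from the text (the 45-ohm figure is the worst-case SN74AC04 assumption discussed above):

```python
def min_r1(b_bits, g_gates, r_high_max=45.0):
    """Minimum filter input resistor r1 (ohms) keeping the duty-cycle-
    dependent driver-resistance error below 1/2 LSB of a b-bit PWM.
    The worst-case high/low drive-resistance difference is taken as the
    full 45-ohm figure, reduced by paralleling g_gates inverters."""
    r_diff = r_high_max / g_gates
    return r_diff * 2 ** b_bits

for b in (8, 12, 16):
    print(b, round(min_r1(b, g_gates=5)))
```

These reproduce the rounded 2300-ohm, 37-kohm, and 590-kohm figures quoted in the text.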

Double up

Summing the contributions of two 8-bit PWMs, one of whose signals’ amplitude is 256 times that of the other, allows both to have an fPWM 256 times larger than that of a single 16-bit PWM. This yields a TFrac reduced by the same factor. Figure 3 shows one way to employ this approach.

Figure 3 Configuration with independent most significant (MSbit) and least significant (LSbit) 8-bit PWMs, the latter contributing 1/256 of the former, to replace a single 16-bit PWM. This arrangement reduces the settling time by a factor of 256.
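On the firmware side, the split is simple byte arithmetic; the 1/256 analog weighting is assumed to come from the resistor network of Figure 3. A generic sketch (not vendor code, and the 3-V supply is an example value):

```python
def split_16bit(code):
    """Split a 16-bit PWM code into two 8-bit duty-cycle codes: the
    MSbyte drives one PWM at full weight, and the LSbyte drives a
    second PWM whose analog contribution is scaled by 1/256."""
    return code >> 8, code & 0xFF

def combined_level(msb, lsb, vdd=3.0):
    """Average output voltage of the summed pair, ideal components."""
    return vdd * (msb / 256 + lsb / 256 / 256)

msb, lsb = split_16bit(0xABCD)
print(msb, lsb, combined_level(msb, lsb))
```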

Op-amp considerations

Figures 1 and 3 lead to the question of which op amp to use. A rail-to-rail input and output unit is warranted. The OPA376 family of singles, duals, and quads is a good answer.

Its offset voltage is 25 µV at 25°C, with ±1 µV/°C drift from -40 to +85°C, and barely disturbs the accuracy of even a 16-bit PWM. Its input bias current (10 pA maximum at 25°C, and typically, with no maximum spec, less than 50 pA at 85°C) introduces errors on par with its offset voltage. Consider the op amp’s output rail-to-rail limitations, however. Either avoid PWM duty cycles at the extremes, or extend the op amp’s supply rails a few tens of millivolts (see its data sheet) beyond those of the PWM.

In approaching your design, you might find the following nomograph in Figure 4 useful.

Figure 4 The above nomograph can aid in selecting the operating point of your design.

Problems, gripes, suggestions, requests, and accolades

The spreadsheet employs VBA numerical iteration routines to find the Q, ω0 pairs and the filter resistors. Although I’ve tested these routines extensively, it’s always possible that one or the other will fail to converge with some combination of input values.

In that case, please let me know by adding a note to the “Comments” section of this DI. That will generate an automatic email alert and will let others who might be interested join the conversation. Please don’t email me unless you have a comment that is truly meant to be private (a marriage proposal?). I encourage feedback of all kinds.

A grudging acknowledgement

I’d be remiss if I did not mention the help I got from a certain widely available AI program in developing this project. This ranged from deriving Inverse Laplace transforms and Newton-Raphson iteration algorithms to VBA coding.

But working with this AI wasn’t all lollipops and rainbows. In the course of the effort, I was reminded of Ronald Reagan’s admonition to “Trust, but verify.” But as things progressed, I dropped the “trust” part.

I found I had to break tasks down into sections, understand each that was provided, test assiduously, and make corrections before proceeding to the next step. Setting a multi-step task was a recipe for disaster. Still, AI is a valuable tool, and I find it even more valuable now that I better understand how to work with it.

I’d be interested in hearing about others’ experiences.

Related Content

 

The post Custom design PWM filters easily appeared first on EDN.

Plant pulse sensors: From soil probes to tree tattoos

Tue, 03/03/2026 - 11:40

Plants do not just grow—they signal. From the subtle moisture shifts in soil to the faint electrical rhythms coursing through leaves and stems, botanical sensors are turning greenery into living data networks.

What began with rugged soil probes has evolved into delicate tree tattoos that map physiological responses in real time. This convergence of biology and electronics is redefining how engineers, agronomists, and hobbyists alike monitor plant health, optimize yields, and even explore new frontiers in bio-inspired design.

Botanical sensors: Giving plants a voice

The term botanical sensor is best understood as an umbrella category rather than a single device. In agricultural technology (AgTech) and plant biology, it encompasses a wide range of instruments designed to monitor plant health and surrounding environmental conditions in real time.

In essence, these sensors give plants a “voice,” allowing them to signal their needs before visible stress, such as wilting, occurs. Unlike conventional weather stations that measure only ambient air, botanical sensors often interface directly with plant physiology or the immediate root zone, capturing data at the source of growth.

Beyond the broad category, it’s useful to distinguish between two key subtypes. In-plant sensors (often called plant wearables) are tiny, flexible devices attached directly to leaves or stems, enabling close monitoring of physiological signals.

In contrast, soil and root micro-environment sensors operate within the rhizosphere (the soil zone surrounding the roots), capturing data on moisture, nutrients, and microbial activity. These complementary approaches provide a layered view of plant health, and we will explore them in greater depth in the next section.

Figure 1 Visualizing plant–sensor interaction: leaf-mounted and root-zone probes capture real-time physiological data. Source: Author

Plant monitoring sensors: Soil, trunk, and surface frontiers

In principle, there is a wide variety of sensors designed to monitor everything from a small succulent on your table to a massive sequoia in a forest. Among these, soil-based sensors are the ones most often found in homes and farms. Rather than measuring the plant directly, they focus on the environment around the roots, where growth truly begins.

Moisture and conductivity sensors reveal water levels and soil salinity, offering insight into nutrient and fertilizer availability. pH sensors, meanwhile, track soil acidity, ensuring that nutrients are in a form the plant can actually absorb. Taken together, these instruments provide a root-level perspective that helps growers fine-tune conditions for healthier, more resilient plants.

Figure 2 The multi-parameter root zone soil sensor measures moisture, temperature, and electrical conductivity. Source: Delta-T Devices

For trees and large-scale agriculture, researchers often turn to sensors that measure the pulse of the plant directly. Sap-flow sensors, for instance, are needle probes inserted into the trunk to track how quickly water moves upward—essentially a heart rate monitor for a tree. Dendrometers capture the subtle micro-expansions and contractions of the trunk, revealing how trees shrink slightly during the day as they consume water and swell again at night.

Infrared leaf-temperature sensors add another layer of insight, detecting whether a leaf is sweating through transpiration. When leaves overheat, it usually signals stress: the plant has closed its pores to conserve water. Together, these devices provide a dynamic picture of plant physiology, extending monitoring beyond the soil to the living tissue itself.

Figure 3 The SFM‑5 sap flow sensor enables minimally invasive, high‑precision measurements of sap flow and sapwood water content in most tree species. Source: UGT

Notably, a newer frontier in plant monitoring involves sensors that adhere directly to the plant’s surface, much like a simple patch or sticker. Graphene tattoo sensors are ultra-thin films that can be taped to a leaf, tracking water loss (transpiration) in real time without causing harm.

Biosignal monitors go further, measuring the electrical signals coursing through plant tissue—essentially listening to how a plant reacts to pests, drought, or other stressors before any visible symptoms appear. While these technologies remain largely experimental, they represent an exciting shift from soil and trunk measurements to direct, non-invasive monitoring of plant physiology, offering a glimpse of how future growers may detect stress before it becomes visible.

In essence, a wide range of sensors exists to capture a plant’s vital signs. Stomatal aperture reveals how widely the pores are open, regulating gas exchange and water loss. Sap flow tracks the speed at which water and nutrients move through the stem, a direct measure of circulation. Volatile organic compounds serve as chemical distress signals, emitted when plants face pests or disease.

Volumetric water content pinpoints the precise amount of water available in the soil, while electrical conductivity provides a proxy for salinity and nutrient levels. Together, these parameters form a concise diagnostic suite, offering a snapshot of plant health from root hydration to stress signaling.

On a related note, a chlorophyll sensor provides a direct measure of a plant’s photosynthetic capacity by gauging how much light is absorbed or reflected by leaves. Handheld meters and clip-on probes often use fluorescence or reflectance techniques to estimate chlorophyll content, which correlates strongly with nitrogen status and overall plant health.

Because chlorophyll levels drop under nutrient deficiency or stress, these sensors are widely used in precision agriculture to guide fertilization decisions and monitor crop vigor. Unlike soil or trunk sensors, chlorophyll sensors give an immediate snapshot of the leaf’s metabolic activity, making them a practical complement to water and nutrient monitoring systems.

Beyond electronic devices, there is also the emerging field of phytosensing, where plants themselves are engineered to act as living detectors. In this approach, a plant might be genetically modified to change color when it encounters a specific toxin in the soil, effectively turning its physiology into a visible alarm system. Phytosensing highlights a future where monitoring does not just rely on external instruments but on the plants’ own biology, transforming them into active participants in environmental sensing.

Connecting the sensors: Interfaces in practice

In practice, mainstream plant monitoring sensors rely on straightforward electrical connections and increasingly on wireless interfaces that tie them into larger IoT systems. Simple moisture probes output analog signals—usually variable voltage or resistance—requiring external circuitry for signal processing and interpretation.

More advanced probes, such as those for pH or electrical conductivity, typically use digital buses like I²C, SPI, or UART, which provide cleaner signals and allow multiple sensors to share the same wiring. Sap-flow sensors, by contrast, generate heat pulses and require timing circuits to measure how quickly the signal moves through the stem, while infrared leaf-temperature sensors may deliver either analog voltages or digital packets depending on design.

Once signals are captured, a microcontroller acts as a hub to convert raw data into usable readings. From there, connectivity options expand: Wi-Fi and Bluetooth are common in greenhouses or indoor setups, while LoRaWAN and Zigbee provide long-range or mesh networking for large farms.

Data is then routed to cloud platforms or local dashboards, where growers can visualize soil moisture, salinity, or canopy stress in real time. Interfaces range from simple panel displays in the field to mobile apps and web dashboards that log trends and trigger alerts.

Practical considerations remain central: sensors must be calibrated regularly, especially EC and pH probes; outdoor devices need waterproofing and corrosion resistance; and power supplies often rely on batteries supplemented by solar trickle charging. The choice of interface—analog, digital, or wireless—depends on scale and cost, but the goal is the same: to make plant vital signs accessible, reliable, and actionable for growers.
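As a flavor of the host-side processing described above, here is a minimal sketch: a two-point calibration that converts a hypothetical capacitive soil probe’s raw ADC count into an approximate volumetric water content. The endpoint counts below are invented placeholders; every probe-and-soil pairing needs its own calibration, as the text notes.

```python
def adc_to_vwc(raw, raw_dry, raw_wet):
    """Two-point linear calibration for a hypothetical capacitive soil
    moisture probe read through an ADC: map the raw count between the
    'in air' (dry) and 'in water' (wet) endpoints to an approximate
    volumetric water content fraction. Endpoint counts must be
    measured for each probe/soil combination."""
    span = raw_dry - raw_wet           # capacitive probes read lower when wet
    vwc = (raw_dry - raw) / span
    return min(1.0, max(0.0, vwc))     # clamp to the physical range [0, 1]

# Example with made-up endpoints for a 12-bit ADC:
print(adc_to_vwc(2100, raw_dry=3200, raw_wet=1400))
```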

Precision agriculture: The IC ecosystem for botanical monitoring

Modern agricultural sensing integrates a diverse set of specialized ICs to track the vital signs of plants. For soil health, the AD5941 precision analog front end provides advanced impedance spectroscopy capabilities, enabling high-precision moisture and salinity analysis. It also serves as a modern successor platform for electrochemical pH and nutrient testing when paired with suitable sensor electrodes.

Atmospheric monitoring is led by the SHT4x sensors for humidity and the SCD4x sensors for compact photoacoustic CO₂ detection, while the BME688 combines gas sensing with integrated AI to detect volatile organic compounds (VOCs) that can signal plant stress.

Light sensing remains critical: the TCS3448 spectral sensor captures multiple wavelength bands, allowing quantification of photosynthetically active radiation (PAR) and enabling growers to fine-tune light recipes for photosynthesis.

Together, these modern ICs transform plant monitoring from guesswork into data-driven precision, optimizing irrigation, nutrient management, and environmental control.

Figure 4 The BME688 module empowers makers and hobbyists to build minimally invasive, AI-driven plant stress monitors through volatile organic compound detection. Source: M5Stack Technology

Closing note

Admittedly, even a jam-packed post cannot do full justice to the fundamentals and applications of botanical sensors. Much remains to be explored before the puzzle is complete—new sensor models, evolving standards, and emerging use cases continue to reshape the field.

Stay tuned for more. Future installments will dive deeper into canopy-level sensing, chlorophyll fluorescence, microclimate monitoring, and innovative energy harvesters that power sensors autonomously. We will also explore how AI-driven analytics can transform raw sensor data into actionable insights for agriculture, forestry, and ecological research.

This overview offers a snapshot of where plant sensing technology stands today, with the promise of richer insights to follow. If you are fascinated by the evolving world of botanical sensors, follow along and join the conversation—together, we will piece the puzzle into a complete picture of plant sensing technology.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Plant pulse sensors: From soil probes to tree tattoos appeared first on EDN.

A defunct Amazon Echo: Where did its acumen go?

Mon, 03/02/2026 - 16:02

A multi-day weather-induced, utility-instigated electricity cutoff thankfully left this engineer’s residence and its contents largely unscathed…with one geriatric smart speaker exception.

Last December’s high-wind-induced extended power outage thankfully didn’t cause notable damage to our home or its contents. But to say we escaped completely unscathed would still be a (slight) overstatement. When we returned home after Xcel Energy restored power, I noticed that several of our Amazon Echo devices—two second-generation Echo Dots:

and a first-generation Echo:

exhibited perpetually rotating topside-light patterns characteristic of an imperfect bootup:

I’ve occasionally encountered this misbehavior before with various Echo-family devices, and as in the past, power-cycling the two Echo Dots got them going again. But despite multiple attempts, both power- and reset switch-based, I couldn’t convince the Echo to resurrect itself:

Oddly enough, albeit presumably indicative of an underlying distributed-processing system architecture, the top-panel microphone mute button still seemed to operate as expected, at least from an LED-illumination response standpoint:

But the Echo never came online or, more broadly, responded to voice-command attempts, not to mention the perpetually rotating topside light whose video you’ve already seen. That said, I’d been using it for a long time, and these devices are apparently prone to such misbehavior sooner or later. Its two same-generation siblings in the residence were still working fine, and the first-generation Echo doesn’t support Amazon’s latest Alexa+ enhancements anyway (the second-generation Echo I replaced it with, conversely, is Alexa+-cognizant).

At this point, I’ll reiterate something you’ve read from me in variously worded ways plenty of times before: when a device dies, it frequently then turns into a teardown candidate. I’d already disassembled the first-gen Echo before, for publication more than a decade ago:

But as with my Tile Mate teardown a few months back, trying to figure out why a device has died is often reason enough for me to entertain a dissection revisit. Plus, in re-reading my earlier coverage, I was reminded that my teardown presentations have become more verbose (whether that word choice translates into “comprehensive” or “long-winded” is up to you to decide) in the last decade. And although I’m still primarily snapping photos using a smartphone, the integrated camera has gotten a significant upgrade in the intervening years. So…here goes!

Foot first

You’ve already seen the reset switch accessible via a hole in the device’s rubberized “foot”:

Here’s another bottom-side closeup, this time of the various product markings, including the always-informative FCC ID (ZWJ-0823):

As was also the case last time, the “foot” (both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes) peeled right off:

exposing to view the circuit testing (and firmware programming, I’m also guessing) conductive pad array, along with four screw heads:

I suspect you already know what comes next:

Voilà. Our first, but definitely not our last, PCB is now exposed for visual perusal:

Its functions include internal power generation and management, along with digital-to-analog audio conversion and subsequent amplification for the main (combined midrange and tweeter) and subwoofer speakers. Not to mention the aforementioned hardware reset switch. Let’s next disconnect the associated cabling:

thereby enabling completion of the bottom assembly’s separation from the remainder of the device. To wit, keep in mind in viewing and analyzing both this and subsequent images that the Amazon Echo is currently oriented upside-down, i.e., it’s resting on its top edge:

I’ve got the power

A few more initial images of the PCB from various perspectives and proximities follow:

A bottom assembly flip-over first-time reveals the external power input connector:

And before continuing, I’ll supply a few additional power-related images and comments (power…supply…get it? Ahem). First off, here’s the “wall wart”:

also including closeups of its specs and “barrel” connector:

I’ve had a few second-generation “Dot” devices’ external power supplies fail in the past; the end result is either a flat-out refusal to start at all or a perpetual repetition of partial boots followed by abrupt restarts. In those cases, the consistent “fix” was straightforward and non-wasteful. Since the AC/DC converter with USB-A output was distinct from the USB-A-to-microUSB cable that fed the device, I could just swap in a replacement for the former and be up and running again in no time. Every time I did this, by the way, I wondered how many Echo Dots prematurely ended up in the landfill due to typical consumers’ ignorance of both the exhibited issue’s root cause and its simple resolution.

The Amazon Echo, as with subsequent-generation Echo Dots, is different: the AC/DC converter and power cable are one integrated unit. And I’d yet to encounter an Echo-family power supply failure that presented itself as a partial boot followed by a “hang”. Nevertheless, my obvious first step was still to tether a multimeter to the power connector and confirm that I was measuring the expected voltage. Since this preliminary outcome wasn’t in and of itself definitive (the power supply could still be peak-current-compromised, as in the earlier partial-boot-and-restart scenario, something that the multimeter’s light loading wouldn’t expose), I then borrowed a power supply from one of the other first-generation Echoes in the house and sadly confirmed the same “hang” behavior as before.

IC details

Back to the bottom of the device. Next, let’s remove the PCB from the assembly, an easily accomplished task:

Here, for comparison’s sake, is the comparable PCB (and vantage point view) from my initial February 2016 teardown of the Amazon Echo:

Quoting from that earlier writeup:

At top is the DC power input jack. At bottom is the ribbon cable connector. On the right are the two speaker connectors, one white and the other black. Near the center is a Texas Instruments TLV320DAC3203 stereo audio codec (PDF). On the far left is Texas Instruments’ TPS53312 step-down converter. And the PCB is otherwise covered with assorted large inductors, “can” capacitors, and other passives.

The only thing I might add on revisiting my earlier prose is mention of the multi-LED cluster at the center of the PCB, which works in conjunction with the clear plastic lightpipe you probably noticed in the earlier device-interior shot, taken after bottom-assembly removal:

to route the device’s power and connectivity status to a backside indicator located above the indent for the power cord. In retrospect, I wish I had noted not only the perpetually rotating LED pattern up top but also the information communicated (or not) by this secondary LED set, as it might have assisted in diagnosing whatever had gone awry with the Echo.

Next are some side views:

And now for the PCB’s other side:

once again accompanied by that of its decade-plus ago predecessor:

along with a requote of my prior prose:

Flip the PCB over and the comparatively sparse underside contains several notable elements, beyond even more passives. At top is, again, the dual screw-reinforced backside of the DC power input jack. At bottom is the system reset button. In between them are the previously mentioned test points. And in the middle of the left half of the PCB is the audio amplifier, again from Texas Instruments (the TPA3110D2 Class D Stereo device, to be precise).

You more recently saw mention of the TPA3110’s higher power, PFFB-supportive successor, Texas Instruments’ TPA3255, within a class D-based audio amplifier teardown I did last fall.

Symmetrical sound redirection

Onward. With the lightpipe out of the way, the conical black plastic piece underneath it (as currently oriented; above it in normal operation) slides right out:

Below it is the main full-range speaker, handling all but the lowest audio frequencies, which instead route to the subwoofer we’ll see shortly:

The plastic piece’s contours, with the cone end pointed toward the speaker (which normally points downward), uniformly redirect generated sound out the mesh sides of the device:

Locating the brains

At this point, speaking of “mesh”, let’s press “pause” on the speaker and redirect our attention to sliding that metal mostly-mesh outer chassis off instead:

Take the inner assembly:

Rotate it horizontally by 180°, and another PCB appears:

The connector at bottom in this upside-down (versus norm) orientation mates to a flex cable that also routes to the top assembly:

And the one at the other end mates to the earlier-seen flex cable that also ends up at the bottom assembly:

Let’s cut away the foam surrounding much of the insides, so we can see what’s underneath it:

That’s more like it:

There’s that RFID tag again, which I’d first showcased a decade-plus ago:

Here’s our first glimpse of the subwoofer, which, like the full-range speaker you saw earlier, points downward in the device’s normal operating orientation. The full-range speaker’s rounded rear housing, which you’ll see shortly, redirects the subwoofer’s primary output out the sides, akin to the cone-shaped piece ahead of the full-range speaker:

And here once again is what I’m calling the “digital PCB”, now free of any foam obscurant:

See those four screw heads? Buh-bye:

In removing one of them, which promptly and firmly re-attached itself to the side of the internal assembly, I was reminded that there’s a sizeable subwoofer-inclusive magnet inside:

As before, I’ll start with the PCB’s outside:

Next is its counterpart image from a decade-plus ago, with the PCB still attached and rotated 90° in comparison:

And a reprint of the prior associated prose (perhaps obviously referencing locations in the initial teardown’s photo orientation):

In the middle, toward the top is Texas Instruments’ DM3725CUS100 “digital media processor” SoC. It’s fairly diminutive in processing chops, compared to the application processors in modern smartphones and tablets, containing only a single-core 1 GHz ARM Cortex-A8 CPU. My best guess, therefore, is that it primarily handles the Echo’s speech recognition features, with “heavier lifting” redirected to Amazon’s servers via the device’s Wi-Fi connection. Speaking of which, the shiny-packaged IC below the DM3725CUS100 is a Qualcomm Atheros QCA6234X-AM2D Wi-Fi and Bluetooth module, also found in Amazon’s Fire TV and Fire HD tablet. The corresponding antennas are etched into the PCB, on either side.

Volatile and nonvolatile memory are a necessity, of course, and in this case they respectively take the form of a Samsung K4X2G323PD-8GD8 2 Gbit 200 MHz x32 mobile DDR SDRAM (in the upper left corner) and SanDisk SDIN7DP2-4G 4 GByte iNAND embedded flash memory drive (below it). A standalone power management IC is also pretty much a guarantee in a product like this, and the Echo doesn’t disappoint; on the right edge of the PCB is a Texas Instruments TPS65910A1.

Next is another set of PCB side shots; note that as with its bottom-located predecessor, this particular board is impressively “meaty” from a thickness standpoint:

Finally, what’s underneath? A decade back, I wrote, “I didn’t bother showing you the underside of this PCB, by the way, because there’s nothing really to see … unless you’re into a bunch of additional passives, that is.” As you’ll see, “a bunch of” was arguably even overstating reality:

To the summit

One more PCB to go; the seemingly still-functional one up top, starting with the removal of a side screw:

The top assembly’s now gone:

More accurately, I’d just momentarily set it aside:

Four more screws to remove (how many times have I already said that in this piece?):

And now a pictorial sequence of the steps necessary to expose the PCB to view as fully as possible:

Last time, I wrote: “I didn’t bother with a shot of the underside of the PCB; the only contents of note are switches corresponding to the top-side microphone-mute and device-setup buttons.” This time, I’m instead going the extra mile for you, dear readers:

See, like I said before; just switches:

The other side of the PCB, of which you’ve already caught several glimpses, is more interesting:

Here’s the image of that same PCB (and side of it) from last time, once again notably rotated 90 degrees as compared to the new version:

Regarding the gear structure in one corner, I previously said that “Echo contains a rotating upper ‘cuff’ which, among other things, acts as a manually operated alternative to voice command-driven volume up/down operations.” And the gear? It “provides cuff position and speed-of-rotation information.”

And what about the various visible ICs? Again, I requote (again, with location references to the original version of the photo, not the newer one):

Toward the top is a humble Texas Instruments SN74LVC74A dual positive-edge-triggered D-Type flip-flop (ironically the largest IC on this particular PCB). Toward the center are four Texas Instruments TLV320ADC3101 stereo ADCs. They surround one of the seven microphones, at center in gold. And they are surrounded by four Texas Instruments LP55231 9-output LED drivers. The other six microphones are symmetrically located along the rim of the PCB; one of them isn’t visible in the photo, obscured by the ribbon cable. And on either side of each of those edge-located microphones is an LED, twelve total in the design.

Transducers redux

Now let’s return to those two speakers—main and subwoofer—that you initially saw earlier. A decade-plus ago and regarding the foam seen surrounding the internal assembly after removing the mostly-mesh metal outer chassis, I wrote:

Underneath the thin black fabric layer surrounding the chassis is the woofer, along with its corresponding bass reflex port. To see them in detail, check out iFixit’s website; the electronics aspects of the design were my primary focus. The fabric’s purpose may be at least in part to diffuse the speakers’ outputs, thereby delivering the 360º sound that Amazon promotes. It may also dampen vibration at high volume.

This time, curiosity got the better of me, and I decided to peruse them for myself, sharing the images with you in the process. The first step, however, was to finish removing the main speaker. Hey, look. Four more screws!

The rounded rear of the main speaker’s acoustic suspension enclosure, as mentioned earlier, uniformly redirects the subwoofer’s primary sound output around the mesh sides of the device.

Speaking of the subwoofer…

And, last but not least, the curiously shaped (as you’ll soon see) bass reflex structure, intended to boost the subwoofer’s low frequency efficiency:

You’ve actually seen its associated port already, in one of the post-foam-removal internal assembly side shots. Here’s a closeup:

Remove the two screws whose heads are visible in the prior photo:

And the bass reflex structure then slips right out the Echo’s now-speakers-less bottom end (in normal operation; top as currently upside-down oriented):

Wutdunit?

(why yes, I did just create my own word)…

Unfortunately, unless you saw an old-vs-new teardown disparity I’d overlooked in any of the PCB photos I’ve shared here (a bulging capacitor, perhaps?), we’re left with the same question I posed at the start of this writeup: what caused this Amazon Echo to fail? The power subsystem in the bottom assembly seems to still be intact, given that the top assembly’s various LEDs and microphone mute switch continue to function (likely removing the top assembly from the root-cause list as well). I doubt that a failure in the bottom assembly’s audio digital-to-analog and/or amplification stages would suffice to bring the device completely to its knees, either.

That leaves what I previously referred to as the “digital PCB”. A corrupted firmware image, perhaps the result of power loss mid-update, is one possibility. While the Echo was still present in my list of activated (albeit in this case, not found) devices, before I removed it in an ultimately fruitless attempt to fully factory-reset and then revive it, its settings in the Alexa app indicated that a firmware update was pending. I initially discounted this firmware-corruption possibility because:

  • Amazon certainly wouldn’t design a device that “bricked” so easily, absent any sort of user-friendly recovery scheme…would it?(??!!!!)
  • And at the time, my other two, still-functional first-generation Echoes’ settings displayed the exact same “firmware update available” messages. I therefore assumed they were bogus remnants of the first-generation Echo’s dearth of Alexa+ support.

But in re-looking at their settings just now, those messages are now gone. There’s no user-controlled way to manually initiate a firmware update; Amazon automatically “pushes” them (presumably at a time when it senses that a given device isn’t in use, for example). So…maybe?

The other, more benign possibility is simply that some circuit on that particular (or another) board has “gone south”, taking the entire device down with it. Nearing 3,000 words in, I’m going to wrap up at this point and turn the keyboard over to you for your theories and broader thoughts on this teardown in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A defunct Amazon Echo: Where did its acumen go? appeared first on EDN.

USB-C and Power Delivery: Too much of a good thing?

Fri, 02/27/2026 - 15:00

I’ve recently been doing some detailed research and studying related to the USB Type-C connector and the associated USB Power Delivery (PD) specification. At first, both seemed like such a good idea, but now I am not so sure – especially about the USB PD part.

First, a little background. Like many people, I have a drawer full of AC/DC charger units I no longer use but can’t bear to toss, Figure 1. These units are often derisively called wall warts; many also function as power sources in addition to chargers, to be used with or without batteries in their target unit.

Figure 1 If you have used electronic devices, toys, or smartphones over the past decades, you likely have a drawer or box stuffed with chargers that are no longer needed, but you can’t bear to toss out. Source: Google

These chargers come in a wide range of voltage and current ratings, each specific to the product with which they came. They also have a wide range of frustratingly incompatible coaxial (barrel) connectors (“coaxial” in their physical structure, and unrelated to RF coaxial-cable connectors), and both polarity orientations, Figure 2.

Figure 2 Barrel connectors come in a wide range of inner and outer diameter pairings, presumably to key the connectors to their voltage and current, but actually a source of confusion and waste. Sources: Bid or Buy/South Africa; Same Sky

As a consequence, it is almost impossible to use one AC/DC unit as a replacement for a misplaced or defunct one. While I have resorted to repurposing one with the needed rating but wrong connector by swapping and soldering the correct connector from another unit, the average person can’t do this.

Now, USB-C and USB-PD

Then came smartphone charging and a drive towards more uniformity in USB-based charging, using either the Apple Lightning connector, a USB Type A connector, or others. “Hey,” I thought, “we’re making progress.”

Now, we have the USB Type-C connector, which is mandated by the European Union for all applicable products, including smartphones, and which is, by extension, driving adoption outside the EU as well, Figure 3. So it looks like barrel connectors are history, and other USB connectors are falling behind, as USB-C is the way to go. So far, so good.

Figure 3 The USB Type-C connector is poised to dominate due to its capabilities and the EU mandate to be used wherever technically feasible. Source: CNET

Then I started looking into the USB Power Delivery (PD) standard in more detail. It dramatically increases the available voltage, current, and power levels, Figure 4.

Figure 4 The progression of power-delivery capabilities offered by the various USB connectors is impressive. Source: Texas Instruments

USB-PD offers three power-delivery modes:

  • Sink: a port, most often a device, that consumes power from VBUS when attached.
  • Source: a port that provides power over VBUS when attached.
  • Dual-role power (DRP): a port that can operate as either a sink or source, and may even alternate between these two states.
It gets messy

This makes it all sound so simple and effective, but USB PD is not like peeling an onion, where every layer you peel back reveals only one other one. Instead, it’s more like nuclear fission, where each action or state change can lead to multiple new ones.

I won’t try to describe all the ins and outs of USB PD. There are many good overviews as well as detailed dives into the standard (see References). To sum it all up: it’s very complicated, starting with a back-and-forth initialization-negotiation dialogue between the two sides of the connection to decide who can do what to whom, Figure 5. An added complication is that USB PD allows for multiple loads to be charged at the same time, each with different requirements.

Figure 5 Once the USB-C connector is connected, the two cable ends begin a sophisticated negotiation about what needs to be done and what can be done. Source: Acroname Inc.

USB PD has many cases, exceptions, state diagrams, timing diagrams, conditional rules…it’s a long list. With all this comes the need for a very smart embedded controller to implement it.
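To make the negotiation concrete, here is a deliberately simplified, illustrative model of one small slice of it: a source advertises fixed-supply power data objects (PDOs), and a sink picks one that satisfies its voltage and power needs. The values and selection logic are hypothetical; the real protocol adds CRC-protected messaging, timing rules, programmable power supply modes, and many more states.

```python
# Toy model of fixed-supply PDO selection. A real USB PD stack
# exchanges Source_Capabilities and Request messages over the CC
# wire with strict timing; this sketch keeps only the matching step.

def pick_pdo(source_pdos, sink_voltage, sink_power_w):
    """source_pdos: list of (voltage_v, max_current_a) tuples.
    Returns (pdo_index, requested_current_a) for the first PDO that
    can satisfy the sink's voltage and power needs, or None."""
    for i, (v, i_max) in enumerate(source_pdos):
        if v == sink_voltage and v * i_max >= sink_power_w:
            return i, round(sink_power_w / v, 2)
    return None

# Hypothetical source capabilities: 5 V / 3 A, 9 V / 3 A, 20 V / 5 A
pdos = [(5.0, 3.0), (9.0, 3.0), (20.0, 5.0)]
print(pick_pdo(pdos, 20.0, 60.0))  # a sink wanting 20 V at 60 W
```

Even this stripped-down version hints at the complexity: add cable current ratings, multiple simultaneous loads, and renegotiation on attach/detach events, and the state space grows quickly.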

At first, I thought the entire USB-C/PD scenario was the best thing to happen. After all, what could be better than a “universal” charging setup? It promises to handle anything up to the specified maximum, with no action on the part of the user, and no incompatibilities. What’s not to like?

However, the more I looked into USB PD, the more concerned I became. In the attempt to be a solution to just about any charging situation (and let’s ignore the data-connection interface aspect), it tries to do an awful lot. Yet history shows that such overarching objectives, however laudable and well-intentioned, can become a swamp.

That’s where I started to worry. Who can actually grasp the totality and subtleties of USB PD, especially if there’s a problem? Can the controller really be tested to 100% certainty that it properly implements all the rules and cases correctly? Are there corner cases in the real world that will only show up months or years later, with frustrated users as the test subjects?

This isn’t the only example

Whatever happened to the engineering mandate to “keep it simple”? I’ll cite an automotive parallel. Volkswagen recently introduced the 2026 Tiguan SEL R-Line Turbo, which uses a list of engineering approaches to squeeze 268 horsepower and 258 lb-ft of torque out of a modest two-liter, four-cylinder engine.

To do this, they use forced-induction turbocharging, in which a turbine spinning in the engine exhaust, at temperatures around 1,000 degrees, drives a paired compressor wheel spinning at speeds above 150,000 rpm to pressurize the air-intake charge. It also employs variable inlet geometry that instantly and precisely meters boost, air charge, and bypass, reducing throttle latency and increasing efficiency. The high 10.5:1 compression ratio relies on higher pressure in the direct fuel-injection system (raised from 350 to 500 bar) as well as a forged steel fuel rail to carry it.

But why stop there? In a classic example of inevitable follow-on consequences, the higher forces require thicker piston crowns, shortened connecting rods, and thicker wrist pins. The need for cooling meant redesigning the combustion chamber itself and incorporating a new air-to-water heat exchanger. The big-turbo edition comes with oil-cooled pistons and a nitrided crankshaft. Finally, the hydraulic intake cam adjuster arrangement swaps two pairs of cam pieces with double actuators for four separate cam pieces with eight adjusters.

So I have to wonder: what will the reliability and maintenance of this engineered complexity and sophistication be in a mass-produced car?

In some ways, USB PD is the latest iteration of the belief that a universal solution is possible and that “this time, we’ll get it all right.” However, sometimes having just one more-tightly focused objective is a better idea long term, as there are fewer unexpected and unpleasant surprises.

Will I miss the cheap AC/DC charger that does one thing, with its proliferation of power ratings and barrel connectors? No, I won’t. Do I welcome the USB-C and PD standard and implementation? Let’s just say I am cautiously optimistic, as I recognize that it’s a complicated system and not merely an A-to-B power source. My personal jury is out on this question!

What are your thoughts on the complexity and ambitious reach of this power-delivery standard?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

EU and USB Type-C regulation

The post USB-C and Power Delivery: Too much of a good thing? appeared first on EDN.

Scope boosts high-speed interface validation

Thu, 02/26/2026 - 19:54

Keysight’s XR8 real-time oscilloscope accelerates high-speed interface debug and compliance validation with powerful parallel, multicore analysis. A newly designed frontend ASIC combined with an integrated 12-bit ADC and DSP engine preserves signal integrity, enhances timing accuracy, and delivers consistent, repeatable measurements across high-speed serial, memory, and mixed-signal designs.

Powered by Infiniium 2026 software, the XR8 streamlines workflows with flexible waveform windows and productivity tools including drag-and-drop functionality and an integrated SCPI recorder. Intrinsic jitter as low as 13 fs rms and noise below 130 µV at 8-GHz bandwidth maintain compliance margin for high-speed interfaces including USB4v2, DisplayPort 2.1, and DDR5. The integrated ADC/DSP engine increases acquisition, analysis, and reporting throughput by up to 3×, helping engineers complete high-speed interface validation faster and more efficiently.

The XR8’s redesigned mechanical architecture reduces power consumption, improves thermal efficiency, and minimizes acoustic noise in a compact footprint. This smaller, quieter platform can be deployed in space-constrained labs or positioned closer to the device under test for stable, low-noise operation.

For more information about the XR8 4-channel, 8-GHz to 33-GHz bandwidth oscilloscope, click the product page link below.

XR8 product page

Keysight Technologies 

The post Scope boosts high-speed interface validation appeared first on EDN.

GaN half-bridge simplifies 650-V power stages

Thu, 02/26/2026 - 19:54

MasterGaN6 from ST integrates two 650-V enhancement-mode GaN transistors with typical RDS(on) of 140 mΩ in a half-bridge configuration, delivering a compact, efficient power stage. This power system-in-package also integrates a high-voltage gate driver and linear regulators for both high-side and low-side supplies to further reduce external components.

As the second generation of the MasterGaN half-bridge family, MasterGaN6 adds dedicated fault and standby pins to enable enhanced system monitoring and power management. Integrated LDOs and a bootstrap diode ensure reliable, optimized gate driving for improved efficiency and performance in high-density power applications.

MasterGaN6 handles output currents up to 10 A, with an overall driver propagation delay of 45 ns and a minimum pulse width of 35 ns. Its 3.3-V to 15-V logic-compatible inputs feature hysteresis and an integrated pull-down for robust noise immunity. A comprehensive protection set includes cross-conduction prevention, thermal shutdown, and undervoltage lockout to ensure safe and reliable operation.

Prices for the MasterGaN6 half-bridge in a 9×9-mm QFN package start at $4.14 in lots of 1000 units.

MasterGaN6 product page 

STMicroelectronics

The post GaN half-bridge simplifies 650-V power stages appeared first on EDN.

Low-loss MLCCs deliver wideband RF performance

Thu, 02/26/2026 - 19:54

Kyocera AVX has expanded its 550/560 series of ultra-broadband MLCCs to support high-speed, high-bandwidth optical communication systems. The capacitors provide reliable, repeatable RF/microwave performance from 7 kHz to 110 GHz and exhibit low insertion loss and flat frequency response. Depending on case size and capacitance value, typical insertion loss remains below 0.5 dB through 40–70 GHz and below 1 dB through 70–110 GHz.

The four new devices are available in 0402-size cases with capacitance values of 1 nF, 10 nF, 25 nF, and 47 nF and maximum working voltage ratings from 16 V to 100 V. With these additions, the 550/560 series offers a total of 15 devices in 01005, 0201, and 0402 case sizes with capacitance values spanning 1 nF to 220 nF. All of the capacitors operate over a temperature range of -55°C to +125°C.

The 550/560 lineup features a rugged one-piece construction with tin- or gold-plated nickel barrier terminations compatible with reflow soldering. These terminations are designed to prevent base metallization from leaching into the solder and forming brittle intermetallic compounds, which could cause cracking and solderability issues.

Visit the 550/560 series product page to download the datasheet, which includes S-parameter data and S2P Touchstone files for RF and microwave simulation. The four new part numbers will be stocked this November and available for order at DigiKey and Mouser Electronics.

550/560 series product page

Kyocera AVX 

The post Low-loss MLCCs deliver wideband RF performance appeared first on EDN.

3-in-1 IoT module cuts complexity

Thu, 02/26/2026 - 19:53

The Iridium 9604 IoT module integrates satellite, LTE-M (Cat-M1), and multi-constellation GNSS into a compact 16×26×2.4-mm form factor. Built on the u-blox SARA-R5 platform, it integrates Iridium Short Burst Data (SBD), cellular connectivity, and GPS in a single device, enabling cost-effective dual-mode IoT deployments for industrial, infrastructure, and mobility applications.

The module supports GPS, GLONASS, Galileo, and BeiDou, and operates across an industrial temperature range of −40°C to +85°C. Optimized sleep modes and a unified power architecture across all three subsystems support ultra-low-power IoT designs.

Independent control of the satellite, LTE-M, and GNSS radios allows application-defined, GNSS-informed connectivity decisions, from simple failover to advanced routing logic. A unified AT command set simplifies firmware development across all functions.

The 9604 features dual RF ports—one shared for Iridium SBD and GNSS and one dedicated for LTE-M—reducing board space and simplifying RF design. The beta program was oversubscribed, with participants reporting lower system cost and up to 60% PCB footprint reduction.

Commercial availability is scheduled for June 2026, with development kits available for evaluation. Reserve priority access on the product page linked below.

9604 product page 

Iridium Communications 

The post 3-in-1 IoT module cuts complexity appeared first on EDN.

5G RedCap module enables high-speed IoT connectivity

Thu, 02/26/2026 - 19:53

Cavli’s CQM220 5G Reduced Capability (RedCap) module provides power- and cost-optimized 5G connectivity for IoT applications. Compliant with 3GPP Release 17, it delivers downlink speeds up to 220 Mbps and uplink up to 120 Mbps, with LTE Cat 4 fallback for 4G compatibility.

The module features an Arm Cortex-A7 processor running up to 1.9 GHz, flexible memory configurations, and advanced power management options including eDRX/DRX modes. It comes with the OpenWrt-based OpenSDK for on-module application development, reducing external MCU dependency.

Integrated multi-constellation, dual-band GNSS with L1 and L5 support enables precise positioning using GPS, GLONASS, Galileo, BeiDou, NavIC, QZSS, and SBAS in urban, industrial, and remote environments.

The CQM220 is available in a 28.0×25.5×2.7-mm LGA package for compact embedded designs and an M.2 form factor for routers, gateways, and CPE. It provides USB 2.0, PCIe Gen2, I2C, UART, SPI, SDIO, I2S, and ADC interfaces, along with main, diversity, and GNSS antenna connections.

Samples and evaluation kits can be ordered on the product page linked below.

CQM220 product page 

Cavli Wireless 

The post 5G RedCap module enables high-speed IoT connectivity appeared first on EDN.

Jumping the Jeep: An alternative cost-effective solar cell example app

Thu, 02/26/2026 - 15:00

A solar charging kit, inexpensive as-is and purchased after further promotional enticement, enables keeping a remotely located vehicle battery topped off.

One of the things I enjoy most about technology is watching a new approach (along with products based on it) hit its high-volume stride, typically driven by one or only a couple of early applications, and then just explode from there, both replacing precursor technologies and expanding into brand new applications and markets. This has certainly been the case, for example, with LEDs. See, for example, my recent teardown (where they replaced fluorescent tubes) for an example of the former, and an earlier teardown (where their low power consumption and DC voltage foundation enabled the development of a light bulb with integrated battery backup) for an example of the latter.

A solar revolution

Or take, as another technology case study, solar cells. Their combination of efficiency and cost-effectiveness, in combination with equally pervasive lithium battery technology, has enabled widespread replacement of predecessor SLA-based energy storage systems, both portable and whole-home permanent installations, while dramatically expanding the accessible market for such devices. At the same time, they’re helping create entirely new categories of products. Take, as a humble example, Renogy’s 10W solar trickle charger kit, two of which I purchased back in October 2024 and one of which I recently, belatedly, and finally pressed into service:

Right now, as I write this, they’re selling on Amazon for $25.17 each, brand new. A year-plus ago, during Amazon’s Prime Days sales, I got them off the Resale (formerly Warehouse) site in used, like-new condition for $17.74. I don’t think they’d even been opened by the prior purchaser(s) before getting returned. The intent at the time was to use them to keep the batteries in two of my vehicles, then outdoor-stored at a lot about a half hour drive away, trickle-charged up. But I could never figure out how to securely attach the solar cells to the vehicle covers, never mind how to route their outputs to the battery compartments. That said, I eventually figured that latter part out: SAE extension cables:

One of the vehicles, my 2001 Volkswagen Eurovan Camper, is now parked in my garage for critter-protection purposes. The other, a 2006 Jeep Wrangler Unlimited Rubicon, most recently mentioned last March when I discussed its then-drained battery state, is still down there (now with a permanently disconnected battery). A few months back, when I drove down and checked on it, my preparatory suspicion was confirmed; as happens every few years, the combination of persistent sun and still-frequent precipitation (rain, snow, hail…) exposure, along with also-frequent wind, had disintegrated the cover:

Successful experimentation

While waiting for the replacement cover to arrive, I had a bright idea; this’d be the perfect time to finally try out that solar cell kit! My original idea was to mount it to the now-exposed vehicle hood. But then I realized that I had an even better option available, inside the vehicle:

in combination with the 12V auxiliary power connector built into the console:

As you can see from the above image (which I snagged from an enthusiast forum thread post to save me an hour-long round-trip drive to the storage lot to take my own shot; that’s not actually my rig), there are two of them. One, the “cigarette lighter” located within the ashtray, is ignition-switched. It obviously won’t work for my purposes. The other, while (I think) still fused, otherwise routes directly to the battery; it’s always “hot”. That’s the one I needed and used:

And it works perfectly! My perhaps-obvious concern was two-fold:

  • It’d either not work sufficiently (or at all), leaving me with an eventually-drained battery once again, or
  • It’d work too well, not terminating the trickle charge when it sensed a “full” state, thereby also leading to the battery’s demise (along with who-knows-what other issues).

Two weeks later, when I went back and checked (in the process of installing the new vehicle cover), I happily discovered that all my worrying was for naught; it was working exactly as planned. Now I just need to figure out how to securely attach the solar cell to the outside of the new cover, and I’ll be set! Suggestions, along with more general thoughts, are as-always welcomed in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post Jumping the Jeep: An alternative cost-effective solar cell example app appeared first on EDN.

Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers

Thu, 02/26/2026 - 15:00

As AI and machine learning workloads accelerate, data center power consumption is beginning to outstrip existing infrastructure capacity. To meet this rising demand, new high-voltage DC standards support the higher-power, denser server racks now found at gigawatt-scale facilities. These high-voltage standards create engineering challenges when monitoring high-voltage power rails.

Designers need reliable, accurate, and fast-acting voltage supervision to prevent overvoltage damage to downstream components, and to help ensure a timely system response to undervoltage conditions. This article presents a supervision approach that addresses these requirements and enables the reliable deployment of next-generation high-voltage DC architectures.

The push toward high-voltage DC architectures

The power profile of modern data centers is undergoing a dramatic shift as AI becomes the dominant application. Machine learning with large graphics processing unit arrays consumes power at levels once associated with industrial equipment rather than IT hardware. It is increasingly common for a single rack to draw 60 kW to 100 kW. Next‑generation AI systems are expected to push beyond 150 kW per rack.

Because traditional 48-V distribution designs cannot efficiently support these levels, designers are turning to a new class of high‑voltage DC standards centered around ±400 V or 800 V distribution. This shift, as shown in Figure 1, is not simply an incremental upgrade; it represents a fundamental change in the delivery of power across gigawatt‑scale facilities.

Figure 1 Conventional versus high-voltage data center power distribution. (Source: Texas Instruments)

Efficiency continues to drive the transition to higher voltages. Higher voltages reduce current and thus the I²R losses that dominate high-power distribution, substantially cutting conduction losses in cables, busbars, and connectors. At large AI campuses, higher efficiency means lower cooling requirements, improved energy performance, and increased computing density.

Higher voltages also unlock greater power-delivery capability. Delivering 150 kW to 300 kW per rack at 48 V requires heavy conductors, parallel cabling, and complex routing. Higher voltages deliver the same power at manageable current levels, enabling simpler infrastructure and longer distribution distances without excessive copper mass.
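A back-of-the-envelope calculation makes the conduction-loss argument concrete. The 1-mΩ distribution-loop resistance below is an assumed round number for illustration, not a measured value:

```python
# Compare distribution current and I²R conduction loss for the same
# rack power delivered over a 48-V bus versus an 800-V bus.
# loop_r_ohm (1 mΩ) is an illustrative assumption.

def feed_loss(power_w, bus_v, loop_r_ohm=0.001):
    i = power_w / bus_v             # distribution current, amps
    return i, i**2 * loop_r_ohm     # I²R loss in the loop, watts

for v in (48.0, 800.0):
    i, loss = feed_loss(150_000, v)  # one 150-kW rack
    print(f"{v:5.0f} V bus: {i:7.1f} A, {loss/1000:6.2f} kW lost")
```

For a fixed loop resistance, the loss scales with the square of the voltage ratio: moving from 48 V to 800 V cuts conduction loss by roughly a factor of (800/48)², or about 278×.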

Cost provides yet another compelling factor. Smaller conductors, lighter busbars, and reduced copper usage lower material and installation expenses. At modern hyperscale data center campuses, these reductions are substantial.

Challenges in monitoring high-voltage power rails

As data-center power architectures migrate toward higher-voltage DC distribution, the demands on monitoring and protection circuitry increase significantly. Operating at ±400 V or 800 V means that every disturbance or transient condition carries more stored energy, with components operating closer to their absolute limits. These conditions reduce the margin for error and make precise power-rail supervision essential.

Designers must contend with higher fault energy levels, faster electrical dynamics, increased electromagnetic noise, and tighter system‑level coordination requirements. In this environment, monitoring circuits must distinguish between harmless fluctuations and true fault conditions, with far greater accuracy and speed than lower‑voltage systems.

With these broader challenges in mind, let’s look more closely at two specific issues surrounding under- and overvoltage events:

  • Response time. The voltage monitor must respond to faults fast enough to prevent damage to downstream components, yet must not trigger erroneously on noise or brief transient voltage fluctuations. Imagine, for example, a large current spike pulling the supply voltage down while the power supply responds. If the voltage dips only briefly, it may not constitute a fault condition and requires no action. Once the voltage is low enough to be considered a fault, however, the monitor should act as quickly as possible to prevent damage.
  • Solution size. High-voltage data-center power supplies have extremely limited space, requiring the smallest possible monitoring solution. That solution must still be trustworthy: a monitor that can be relied on to respond to faults is imperative to a dependable power supply and distribution system.
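The response-time trade-off described above amounts to a deglitch filter: a fault asserts only when the rail stays outside its window longer than a programmed rejection time. The sketch below models that general technique in software with made-up thresholds and timing; it is not the device's actual algorithm:

```python
# Sketch of glitch rejection (assumed behavior, not a device algorithm):
# a fault asserts only when the rail stays outside the UV/OV window
# longer than the deglitch time, so brief sags are ignored.
def fault_asserted(samples, uv, ov, dt_s, deglitch_s):
    out_of_window = 0.0
    for v in samples:
        out_of_window = out_of_window + dt_s if (v < uv or v > ov) else 0.0
        if out_of_window >= deglitch_s:
            return True
    return False

dt = 1e-6  # 1-us sampling, illustrative
sag = [800] * 5 + [700] * 2 + [800] * 5    # 2-us dip: not a fault
hold = [800] * 5 + [700] * 10              # sustained undervoltage
print(fault_asserted(sag, 720, 880, dt, 5e-6))   # False
print(fault_asserted(hold, 720, 880, dt, 5e-6))  # True
```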

Requirements for monitoring high-voltage power rails

Figure 2 shows a minimal high-voltage monitoring circuit implementation using:

  • A high-voltage resistor ladder to step down the power rail for sensing comparators.
  • Two comparators to signal under- and overvoltage faults.
  • A voltage reference for comparators.
  • Filtering components.
  • An amplifier to provide a scaled-down voltage for the analog-to-digital converter (ADC) for analog monitoring and telemetry of the power rail.

Figure 2 High-voltage monitoring circuit building blocks. (Source: Texas Instruments)

Implementing this circuit with discrete components presents significant drawbacks. Individual component tolerances add together, so achieving acceptable accuracy requires costly, low-temperature-drift precision components. Resistors are especially problematic: each resistor's uncorrelated error sums into a substantial cumulative error in the resistor divider. Discrete components also consume considerable board space, which is typically at a premium in data-center applications.
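The divider stack-up can be quantified with a worst-case corner analysis: evaluate the ratio at every combination of tolerance extremes and take the largest deviation. The resistor values and 1% tolerance below are illustrative assumptions:

```python
# Sketch: worst-case divider-ratio error from uncorrelated resistor
# tolerances. Resistor values and tolerance are assumed for illustration.
def divider_ratio(r_top, r_bot):
    return r_bot / (r_top + r_bot)

def worst_case_error_pct(r_top, r_bot, tol):
    nom = divider_ratio(r_top, r_bot)
    corners = [divider_ratio(r_top * (1 + a), r_bot * (1 + b))
               for a in (-tol, tol) for b in (-tol, tol)]
    return max(abs(r - nom) / nom for r in corners) * 100

# 8-MOhm top, 10-kOhm bottom, 1% parts: ratio can err by about 2%
print(f"{worst_case_error_pct(8e6, 10e3, 0.01):.2f}%")
```

At an 800-V rail, a 2% ratio error shifts the sensed voltage by roughly 16 V, which is why the discrete approach pushes designers toward expensive matched, low-drift resistors.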

Figure 3 shows a reference layout with space requirements for high-voltage monitoring with discrete components.

Figure 3 A discrete high-voltage monitoring implementation. (Source: Texas Instruments)

An integrated solution

An integrated device for high-voltage supervision addresses these challenges by fully integrating the high-voltage resistor-divider, comparators, buffer, and additional features. The functional diagram in Figure 4 illustrates this approach, helping reduce total solution size while maintaining high performance.

By integrating the resistors, reference, and comparators, TI’s TPS371K-Q1 achieves an accuracy of 1% across the –40°C to 125°C temperature range, with a fast fault detection time of <5 µs, programmable glitch rejection and release delay time, as well as a 1% accurate high-bandwidth buffer that can directly drive 16-bit ADCs or downstream control circuits.

Figure 4 TPS371K-Q1 functional block diagram. (Source: Texas Instruments)

An integrated monitoring solution also provides significant board space savings in a compact package (Figure 5), requiring minimal external components.

Figure 5 Integrated high-voltage monitoring solution. (Source: Texas Instruments)

Application example

The implementation of a voltage monitoring system using the TPS371K-Q1 is straightforward. Figure 6 shows a basic schematic for monitoring the ±400 V or 800 V input to a DC/DC converter.

Figure 6 Voltage monitoring for a high-voltage DC/DC converter. (Source: Texas Instruments)

Using resistors on the ADJ OV and ADJ UV pins, designers can select under- and overvoltage thresholds to fit their system. The CTR and CTS pins allow the use of a capacitor to program a delay before assertion of a fault and a delay before deassertion once the voltage returns to normal. Open-drain outputs enable easy interface with logic levels other than the device’s own supply voltage. The VSENSE output pin provides a scaled representation of the SENSE input voltage for direct connection to an ADC. Designers can select voltage sense output factors with options ranging from 200 to 900.
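Choosing a sense output factor can be sketched as a simple selection problem: pick the smallest factor that keeps the scaled rail within the ADC's input range, preserving as much resolution as possible. The specific factor options below are assumptions for illustration; the article states only that options range from 200 to 900:

```python
# Sketch: choosing a voltage-sense divide-down factor for an ADC.
# The available option set is assumed; the article gives only the
# 200-900 range.
FACTORS = (200, 300, 500, 900)

def pick_factor(v_rail_max, adc_full_scale_v):
    # Smallest factor whose scaled output still fits the ADC input,
    # maximizing resolution at the ADC.
    for k in sorted(FACTORS):
        if v_rail_max / k <= adc_full_scale_v:
            return k
    raise ValueError("rail too high for available factors")

k = pick_factor(880, 3.3)  # 800-V rail plus 10% margin, 3.3-V ADC
print(k, 880 / k)          # factor 300 -> ~2.93 V at the ADC
```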

Integrated monitoring solutions

The transition to high‑voltage DC architectures is reshaping design requirements for next‑generation data‑center power systems, especially as AI workloads continue to push rack‑level power far beyond the limits of today’s distribution schemes. Reliable voltage supervision becomes foundational, helping ensure high‑energy power-rail monitoring with the speed, accuracy, and reliability required to protect downstream converters and maintain system stability.

Integrated monitoring solutions such as the TPS371K-Q1 address these challenges by combining precise threshold detection, fast fault response, programmable filtering, and compact implementation into a single device optimized for the electrical and space constraints of modern data centers. By adopting advanced monitoring approaches, designers can confidently deploy ±400 V and 800 V architectures that deliver the efficiency, power density, and reliability needed to support the continued growth of AI‑driven computing at the gigawatt scale.

Henry Naguski is an applications engineer for Linear Power at Texas Instruments, working with voltage references and supervisors. He specializes in shunt voltage references and high-voltage supervisors. Henry holds a bachelor’s degree in computer engineering from Montana State University.

Masoud Beheshti leads application engineering and marketing for Linear Power at Texas Instruments. He brings extensive experience in power management, having held roles in system engineering, product line management, and marketing and applications leadership. Masoud holds a bachelor’s degree in electrical engineering from Ryerson University and an MBA with concentrations in marketing and finance from Southern Methodist University.


The post Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers appeared first on EDN.
