EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 59 min 32 sec ago

A new IDM era kickstarts in the gallium nitride (GaN) world

Thu, 07/31/2025 - 11:14

The news about TSMC exiting the gallium nitride (GaN) foundry business has stunned the semiconductor industry while also laying the groundwork for integrated device manufacturers (IDMs) like Infineon Technologies to seize the moment and fill the vacuum.

Technology and trade media are abuzz with how GaN power device manufacturing is different from traditional power semiconductors, and how it doesn’t create strong demand for foundry services. Industry watchers are also pointing to the rising price pressure from Chinese GaN fabs as a driver for TSMC’s exit.

To offer clarity on this matter, EDN recently spoke with Johannes Schoiswohl, senior VP and GM of Business Line GaN Systems at Infineon. We began by asking how GaN manufacturing differs from mainstream silicon fabrication. “They are fundamentally not so different because we start with a silicon wafer and then grow epitaxy of GaN on top of it,” he said.

The dedicated epitaxial machines conduct the process of growing a GaN layer on top of a silicon substrate. “That’s the key difference,” Schoiswohl added. “From then onward, when GaN epi is grown, we use processes and tools similar to silicon fabrication.”

Figure 1 Johannes Schoiswohl explains the engineering know-how required in GaN fabrication. Source: Infineon

GaN’s journey to 300 mm

While China’s Innoscience claims to be the world’s largest 8-inch GaN IDM, operating a dedicated GaN-on-silicon facility, Infineon is betting on 300-mm GaN manufacturing. The German chipmaker plans to produce the first 300-mm GaN samples by the end of 2025 and kickstart volume manufacturing in 2026.

That will make Infineon the first semiconductor manufacturer to successfully develop 300-millimeter GaN power wafer technology within its existing high-volume manufacturing infrastructure. “We were able to move from 6-inch to 8-inch quickly and now to 300-mm because we could use the existing silicon equipment, and that’s beautiful from a capex perspective,” said Schoiswohl.

Figure 2 GaN production on 300-mm wafers is technically more advanced and significantly more efficient compared to established 200-mm wafers. Source: Infineon

What’s really new is the 300-mm epi tool, he added. “Moving to 300-mm fabrication is indeed challenging because there are a lot of engineering issues that need to be resolved,” Schoiswohl said. The GaN layer on top of the silicon layer has a different crystal structure, which causes a lot of strain and mismatch. Additionally, there could be a significant amount of wafer breakage. “It means that a lot of engineering know-how will go into the 300-mm GaN fabrication,” he said.

In a report published in Star Market Daily, Innoscience CEO Weiwei Luo acknowledged significant barriers that hinder the commercial realization of 12-inch or 300-mm GaN wafers. He especially mentioned the lack of metal-organic chemical vapor deposition (MOCVD) equipment capable of supporting 300-mm GaN epitaxy; MOCVD is the core equipment for the epitaxial growth of GaN layers.

Regarding MOCVD tools for 300-mm wafers, Schoiswohl acknowledged that Infineon is currently in the early stages. “We are working closely with MOCVD equipment vendors.”

GaN fabrication model

TSMC’s exit has raised questions about why the foundry model is losing traction in GaN. According to Innoscience CEO Luo, power GaN devices aren’t well-suited for the traditional foundry model because they require close coupling between design, epitaxy, process, and application. That’s where the foundry-client model struggles, while the IDM model offers agility and control.

Infineon’s Schoiswohl says that GaN manufacturing isn’t low margin, but what you need to do is ensure value creation. “First and foremost, you need to be cost-competitive,” he said. “You need to drive down costs aggressively, and for that, you must have a cost-effective manufacturing technology, which is 300-mm GaN wafers in this case.”

Second, IDMs like Infineon can innovate at the system level. “It’s not enough to simply develop a GaN transistor,” Schoiswohl said. “We need to have gate drivers and controllers and thus demonstrate how to create a system that offers maximum value.”

Figure 3 The system approach for GaN devices complements cost competitiveness. Source: Infineon

With optimized controllers and gate drivers, engineers can create GaN solutions that bring the system costs down. That makes GaN a meaningful and profitable business; however, this is far more challenging for a foundry than an IDM.

With 300-mm enablement and a focus on the system-level approach, Schoiswohl is confident that GaN can eventually reach cost parity with silicon. “The progress on product level triggers innovation on system level, where gate drivers and controller ICs are optimized for high-frequency implementations and new topologies.”

Future of GaN technology

While Infineon is doubling down on GaN manufacturing, Schoiswohl foresees massive advancements in the performance of GaN from a design standpoint. “We’ll see a huge drop in parasitic capacitance and on-state resistance in a given form factor.”

That, in turn, could usher in high-voltage bi-directional switches, where devices are monolithically integrated into one die. You could turn such a switch on and off in both directions, which enables a lot of new topologies.

With TSMC’s exit from the GaN fabrication business, will IDMs be the winners in this power electronics segment? Will GaN heavyweight Infineon be able to execute its 300-mm GaN roadmap as planned? Will other fabs follow suit after TSMC’s departure? These questions make the GaN turf a lot more fun to watch.

Related Content

The post A new IDM era kickstarts in the gallium nitride (GaN) world appeared first on EDN.

Independent control of thyristor half-wave firing angles via PWM

Wed, 07/30/2025 - 16:24

“Two halves make a whole” is a very old and often true maxim. For example, it’s almost always true when said about AC phase angle power control. You rarely want significant alternating half-cycle asymmetry due to the (usually undesirable) DC load current component that unequal half-cycle conduction angles create. A nicely balanced, DC-free, whole, and symmetrical full-wave power is therefore the desired output waveform.

Wow the engineering world with your unique design: Design Ideas Submission Guide

So, what to do if you have an application that needs better symmetry than available thyristors can deliver without some fine-tuning? For example, in Figure 1, Q2’s datasheet allows up to ±3 V (±8%) of polarity-dependent trigger voltage asymmetry. Or suppose that (for some bizarre reason) you actually want precisely controllable amounts of deliberate half-cycle conduction angle inequality. What then?

Figure 1 offers a simple solution for both problems. It implements independent control of positive and negative half-cycle phase angles using separate and independent trigger-time-constant-setting PWM channels: one for positive half-cycles, another for negative, where:

Positive half-wave time constant = R1C1 / DF+

Negative half-wave time constant = R1C1 / DF−

DF = PWM duty factor, from 0 to 1
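For a quick feel for how the duty factor scales the trigger time constant, here is a minimal Python sketch; the R1 and C1 values are illustrative placeholders, not the schematic's actual parts:

```python
# Minimal sketch of the trigger time-constant relationship above.
# R1 and C1 values are illustrative placeholders, not the schematic's parts.

R1 = 10e3    # ohms (assumed for illustration)
C1 = 100e-9  # farads (assumed for illustration)

def trigger_time_constant(df):
    """Effective triggering time constant = R1*C1 / DF, for DF in (0, 1]."""
    if not 0.0 < df <= 1.0:
        raise ValueError("duty factor must be in (0, 1]")
    return R1 * C1 / df

for df in (0.25, 0.50, 1.00):
    tau = trigger_time_constant(df)
    print(f"DF = {df:.2f} -> time constant = {tau * 1e3:.2f} ms")
```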

Figure 1 Q1 and Q3 provide independent triggering-time constants for opposite polarity half-waves.

The power control method in play is phase angle conduction via quadrac thyristor Q2. It’s wired in the traditional way, except that opto-isolators Q1 and Q3 fill in for the usual manual phase-adjustment pot. The duty factor (DF) of the PWM inputs sets each phototransistor’s average conductance. Diodes D1 and D2 select whichever opto-isolator corresponds with the instantaneous 60-Hz half-wave polarity. The type H11D1 300-V opto has a typical current transfer ratio of 80%, which makes ~10 mA of PWM drive current necessary. Current limiter R2’s 330 Ω assumes a 5-V rail and a low-impedance driver; that will need adjustment if either assumption doesn’t apply to your system. The PWM cycle rate isn’t critical but should be circa 10 kHz.
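A back-of-the-envelope check of the R2 sizing, as a minimal Python sketch; the opto LED forward-voltage value is an assumed typical figure, not taken from the article:

```python
# Sketch: sizing current-limiter R2 for ~10 mA of opto LED drive.
# V_F_LED is an assumed typical forward drop, not a datasheet-verified figure.

V_RAIL = 5.0    # V, logic rail driving the PWM inputs
V_F_LED = 1.2   # V, assumed opto LED forward drop
I_LED = 10e-3   # A, drive current implied by the ~80% CTR

print(f"ideal R2 = {(V_RAIL - V_F_LED) / I_LED:.0f} ohms")      # ~380 ohms
print(f"current with 330 ohms = {(V_RAIL - V_F_LED) / 330 * 1e3:.1f} mA")
```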

The full-throttle output efficiency is around 99%, but Q2’s maximum junction temperature rating is only 110 °C. So, adequate heatsinking of Q2 will be wise if RMS output >200 W is expected.

The adjustment range for each half-cycle phase spans from an upper limit of DF = 1, which sets a maximum conduction angle of ~2.6 radians and 95% of full output power (117 VRMS), down to DF = 0 and zero power. Figure 2 shows the approximate relationship between DF and conduction angle, while Figure 3 illustrates its inverse.

Figure 2 Thyristor conduction angle R = π – 0.60·DF^(–2/π) versus PWM DF, where the y-axis is in radians and the x-axis is the unitless DF.

Figure 3 The PWM DF = ((π – R)/0.60)^(–π/2) versus the desired conduction angle R. The y-axis is the DF, and the x-axis is in radians.
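Here is a minimal Python sketch of the Figure 2 and Figure 3 relationships, with the exponents written explicitly as interpreted above; treat it as a reconstruction of the captions rather than a verified model:

```python
import math

def conduction_angle(df):
    """Figure 2: R = pi - 0.60 * DF**(-2/pi), R in radians."""
    return math.pi - 0.60 * df ** (-2 / math.pi)

def duty_factor(r):
    """Figure 3 (the inverse): DF = ((pi - R) / 0.60) ** (-pi/2)."""
    return ((math.pi - r) / 0.60) ** (-math.pi / 2)

print(f"R at DF = 1: {conduction_angle(1.0):.2f} rad")   # ~2.54, i.e. ~2.6 rad
print(f"DF for R = 2.0 rad: {duty_factor(2.0):.3f}")     # ~0.36
print(f"round trip: {conduction_angle(duty_factor(2.0)):.2f} rad")
```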

 Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Independent control of thyristor half-wave firing angles via PWM appeared first on EDN.

A nice, simple, and reasonably accurate PWM-driven 16-bit DAC

Tue, 07/29/2025 - 16:32

Implementing a simple digital-to-analog converter (DAC) by cascading a single pulse width modulator (PWM) and an analog low-pass filter is nothing new. Nor is applying to a filter the sum of the outputs of a most significant 2^N-count PWM and a least significant 2^N-count one to get a composite 2^2N-count DAC [1][2]. But designing one with simple, adjustment-free topologies and reasonably accurate, repeatable performance characteristics is not trivial. A proposed example is seen in Figure 1. Let’s examine the circuit from the output to the input.

Figure 1 The PWM-driven 16-bit DAC. Capacitors C1, C2, and C3 are NPO/COG ceramic.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Op amp

The OPA383 is a “rail-to-rail” input and output op amp. Typical of such parts, the output doesn’t quite swing to the rails. A close look at the spec reveals that the V+ and V- supplies should be at least ±155 mV beyond the range of output signals, and their difference should be less than 5.5 V.  Input offset voltage is ±5 µV maximum at 25°C, but unfortunately, we are not given limits over temperature. However, a graph of 5 measured units shows limited susceptibility to temperature. Let’s assume three times 5, or a 15 µV maximum over the temperature range.

The bias current is ±76 pA maximum from -40°C to +85°C. I’d like to keep the design’s various independent error contributions under ½ PWM count (in this case, under 2^-17 of full scale). Considering the under-100-kΩ resistance seen by the op amp input, full-scale DAC voltages over 2.0 V would encounter errors less than ½ count due to bias current and offset voltage.
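As a quick Python sketch of that ½-count budget check, using the numbers above:

```python
# Sketch: op-amp offset and bias-current errors vs. the 1/2-count budget
# (2**-17 of full scale) for a 2.0-V full-scale output.

FULL_SCALE = 2.0                  # V, minimum full scale discussed above
half_count = FULL_SCALE * 2**-17  # 1/2 of one 16-bit count, in volts

v_offset = 15e-6   # V, assumed max offset over temperature
i_bias = 76e-12    # A, max bias current, -40 C to +85 C
r_source = 100e3   # ohms, upper bound on resistance seen by the input

print(f"1/2-count budget  : {half_count * 1e6:5.2f} uV")
print(f"offset error      : {v_offset * 1e6:5.2f} uV")
print(f"bias-current error: {i_bias * r_source * 1e6:5.2f} uV")
```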

The op amp’s DC gain is a minimum of 118 dB, and its gain-bandwidth (GBW) product is 2.5 MHz typical. In the absence of other information, I’ll assume and work with a minimum GBW of 1 MHz.

PWM filter

The filter consists of U2, Ra, Rb1, Rb2, R1, R2, R3, C1, C2, and C3. It’s important to keep a handle on the tolerances of these passive components to ensure repeatable results. The capacitances of common ceramic types such as Y5V and X7R are very sensitive to temperature and to DC voltage; they are not recommended for use in filters requiring any significant stability. Film and ceramic COG/NPO types are far less sensitive. NPO/COG capacitors and the resistors of the values and tolerances shown in the schematic are available for well under $0.10 in 1000-piece quantities.

The filter shown is a 3rd-order one (evident from the presence of three capacitors). Generally, 3rd-order filters offer a smaller (better) product of settling time and ripple attenuation factor than 2nd-order filters (two capacitors). Design aids for 3rd-order types are rare, so I’ve used one I developed and published in EDN almost 15 years ago [3]. This filter does not rely on the cancellation of large signals of opposing phases, so there is no need for adjustment pots to deal with the lack of zero-tolerance components that would otherwise be required to achieve maximal nulling.

It’s the job of the filter to suppress the AC “ripple” of PWM signals, which are at their worst when the output is 50% of full scale. Minimization of settling time is also of interest. To assess the effects of variations due to component tolerances, Figure 2 and Figure 3 show 100-run Monte Carlo trials of settling times for a zero to full-scale transition and for ripple attenuation.

Figure 2 100 Monte Carlo runs of a transition from 0 to full scale, 0 to 65535 counts. Settling to better than ½ count occurs in less than 2 milliseconds.

Figure 3 100 Monte Carlo runs of ripple, where full scale is 65535 counts. Ripple is less than ½ PWM count peak and 1 count peak-to-peak. PWM frequency is 78 kHz.

Summing network

There are two 8-bit PWMs. To create a 16-bit signal, the contribution of the most significant PWM signal is weighted by a factor very close to 256 times larger than that of the less significant PWM signal. A summing network of Ra and Rb1 + Rb2 accomplishes this. (Note that the remaining filter components have no DC effect on this network.)
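To make the 256:1 weighting concrete, here is a minimal Python sketch of how a 16-bit code splits into the two 8-bit PWM codes and what the ideal filtered output would be; the 3.3-V reference is the “for instance” value used later in the article:

```python
# Sketch: ideal composite output of two 8-bit PWMs with 256:1 weighting.

V_REF = 3.3  # V, the "for instance" reference voltage used in the article

def ideal_output(code16):
    """Split a 16-bit code into MSB/LSB PWM codes, return the ideal Vout."""
    msb = code16 >> 8    # most-significant 8-bit PWM code (weight 256)
    lsb = code16 & 0xFF  # least-significant 8-bit PWM code (weight 1)
    return V_REF * (256 * msb + lsb) / 65536

for code in (0x0000, 0x0001, 0x8000, 0xFFFF):
    print(f"code {code:5d} -> {ideal_output(code):.6f} V")
```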

Filter drivers (logic inverters)

The logic inverters shown driving the summing network have finite output resistances, which effectively add to those of Ra and Rb1. Unfortunately, logic inverters are not linear devices and are not characterized as such. The best way to determine maximum output resistance from their data sheets is to first identify the specified supply voltage nearest (but less than or equal) to the one intended for use, and then divide the maximum output voltage drop by the specified load current.

It’s best to do this for the high side, as its resistance is typically higher than that of the low side. For instance, if a 3.3-V supply is intended for a Texas Instruments SN74AC04, use the datasheet’s 2.46-V minimum for a 3-V supply drawing 12 mA to arrive at a maximum resistance of 45 Ω. Paralleling five gates will reduce that resistance to an unknown amount below 9 Ω. The amount is unknown because the individual inverters share common resistances on the wafer and in the wafer-to-package bonding wires. And so up to 9 Ω is added to Ra. The up to 45 Ω added to Rb has a comparably negligible effect.
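Here is that worst-case estimate as a short Python sketch:

```python
# Sketch: worst-case high-side output resistance from datasheet numbers.

V_SUPPLY = 3.0   # V, datasheet test supply nearest the intended 3.3 V
V_OH_MIN = 2.46  # V, minimum high-level output at that supply
I_LOAD = 12e-3   # A, specified load current

r_single = (V_SUPPLY - V_OH_MIN) / I_LOAD
print(f"single inverter  : {r_single:.0f} ohms")        # 45 ohms
print(f"five in parallel : <= {r_single / 5:.0f} ohms")  # bound only; shared
# wafer/bond-wire resistance makes the true paralleled value unknown.
```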

But here we depart from the goal of limiting an error source to a maximum contribution of ½ count—the maximum differential non-linear error is now just under 1 count. Fortunately, even with this error, an increasing series of counts yields a monotonically rising sequence of output voltages. If the performance improvement is worth the cost, you could stay below a ½ count error by doing the following:

  • Replacing the inverter with the low-resistance, dual-channel TS5A22362DRCR analog switch
  • Replacing the Ra 1% part with a 0.05% part
  • Replacing the 30.1-kΩ Rb1 with a 28.7-kΩ 1% resistor in series with a 510-Ω 5% unit

Driver power supply

Alas, once again, we must abandon the goal of keeping the errors introduced to less than ½ LSb. The TI REF35 IC’s ±0.05% at 25°C rating equates to 33 LSb’s! And even with the benefit of calibration and added hardware to adjust the inverter/analog switch’s supply voltage, the reference’s 12 ppm/°C temperature variations would leave us in the lurch. Once again, we have to eat some error.
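A quick Python sketch translating the reference's ±0.05% tolerance and 12-ppm/°C tempco into 16-bit LSBs:

```python
# Sketch: reference tolerance and tempco expressed in 16-bit LSBs.

COUNTS = 2**16      # 65536 counts full scale
init_tol = 0.05e-2  # +/-0.05% initial accuracy at 25 C
tempco = 12e-6      # 12 ppm/C temperature coefficient

print(f"initial error   : {init_tol * COUNTS:.1f} LSb")     # ~33 LSb
print(f"drift over 10 C : {tempco * 10 * COUNTS:.1f} LSb")  # ~7.9 LSb
```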

In the spirit of continuing to do our best with the cards dealt, the reference’s DC output resistance (60 ppm max of 3.3 V (for instance) / 1 mA) is only about 0.2 Ω. This is negligible compared with the DC resistance seen through the summing network of Ra and Rb. Transients from the inverters are a concern, however.

Adequate decoupling of those devices is a must. Additionally, the AC impedance due to the combination of R1, Ra, Rb, and C1, approximately Zsum = 16.5 kΩ, appears at the inverter outputs and so also across their supply terminals. Fortunately, these are at frequencies no lower than the PWM frequency (see the next section for this value). The capacitors shown keep the impedance at this frequency well below 0.1% of the almost completely resistive Zsum. For practical purposes, the magnitude of this combination is indistinguishable from that of Zsum.

PWM signal source

The PWM signal source is probably a microprocessor. These days, most of them can be clocked at 20 MHz or greater, meaning that they could all source 8-bit PWMs of at least 20 MHz / 256 = 78 kHz. It’s this frequency or higher that the filter was designed for. So why not use microprocessor GPIO PWM outputs as drivers?

First, there’s the usually fairly high GPIO output resistance. Additionally, if you’ve ever looked closely at the voltage of a microprocessor digital output, you might have seen that it sits a few millivolts, or even tens of millivolts, from ground and from the device’s supply. This is because the processor is performing functions in addition to generating a PWM, which draw significant current, producing voltage drops through portions of the IC wafer and its package bonding wires. The SN74AC04 has no such other functions, and any such voltage drops are part of the voltage-drop specs discussed earlier.

Modifications

Want lower ripple amplitude? Increase the PWMs’ frequency. Want faster settling time? The resistance looking into Ra and Rb is Rab = 4009 Ω. Reduce R3, R2, and R1 − Rab by some factor and/or C1, C2, and C3 by the same or a different factor. Increase the PWMs’ frequency by at least the product of the factors. Increase it further to achieve both improvements.

To sum it up

A simple design has been introduced for a PWM-driven 16-bit DAC. Peak ripple is less than ½ LSb and the circuit settles to this level in less than 2 ms. Monte Carlo analyses show that these parameters are met even considering passive component and op amp GBW tolerances. In 1k quantities, the reference is about $1, the op amp is under $0.75, and the filter passives are each under $0.10.

Error sources in various parts of the circuit have been identified and, where possible, limited to no more than ½ LSb. To address other larger errors, suggestions of additional hardware and calibration have been made, but the temperature sensitivity of the voltage reference is a limiting factor.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content/References

  1. Double up on and ease the filtering requirements for PWMs
  2. Inherently DC accurate 16-bit PWM TBH DAC
  3. Design second- and third-order Sallen-Key filters with one op amp

The post A nice, simple, and reasonably accurate PWM-driven 16-bit DAC appeared first on EDN.

Perusing Walmart’s onn. 4K Pro Streaming Device with Google TV: Storage aplenty

Mon, 07/28/2025 - 16:59

Toward the end of my late-April teardown of Walmart’s first-generation Google TV-based onn. 4K Streaming Box, which EDN quickly augmented by publishing my dissection of its “stick” sibling, the onn. Full HD Streaming Device, two weeks later, I wrote:

I’ve also got an onn. Google TV 4K Pro Streaming Device sitting here which I’ll be swapping into service in place of its Google Chromecast with Google TV (4K) predecessor. Near-term, stand by for an in-use review; eventually, I’m sure I’ll be tearing it down, too.

With apologies (although I know of at least one reader who won’t be disappointed), I’m going to alter that planned content-publication cadence. Thanks to a gently used onn. Google TV 4K Pro Streaming Device that I subsequently found at a notable discount to MSRP on eBay, you’re going to get that teardown today. And although I hope that TheDanMan (and the rest of you) get something(s) useful out of this project, I already did. More on that after the break(down).

The onn. 4K Pro Streaming Device teardown

Let’s start with some overview shots of our patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size-comparison purposes. I’d incorrectly mentioned back in the late-April teardown that this device has dimensions of 7.71” x 4.92” x 2.71”…those are actually the package dimensions (in my slight defense, every other review I’ve found parrots the exact same info from Walmart’s website). It’s ~4.25” on each side (a rounded square in form factor) and 1.5” tall (rounded top, too), per my tape measure. And my kitchen scale says it weighs ~9.9 oz:

Around the front is a button that, when pressed, causes the remote control (assuming it’s in Bluetooth broadcast range) to emit a tone so you can find it buried between the sofa cushions (or wherever else you might have absentmindedly put it). Given its first-time inclusion and placement prominence, such scenarios are apparently quite common! Unseen behind the mesh on both sides of the button are microphones in an ambient noise-squelching array arrangement for the device’s also-first-time integrated Google Assistant (now Gemini, I guess) voice interface.

The left-side switch controls the mics’ muted-or-not status. Unmuted in its current state, it exposes a red background when slid to the right. You’ll see later what else turns red:

Around the rear are (left to right) the reset switch, a (first-time once again) optional wired Ethernet connector, the HDMI output, a USB-A 3.0 connector (useful for, among other things, tethering to local mass storage for media playback purposes), and the “barrel” power input:

Speaking of which, now’s as good a time as any to show you the “wall wart” power supply:

Back to our patient. The right side is comparatively bland:

And last, but not least, here’s the bottom:

with a closeup of the label, revealing (among other things) the 2AYYS-ORPK4VTG FCC ID.

If you were thinking that the rubber “foot” (specifically, screw heads underneath it) was a likely pathway inside…well, you’d be right:

And here we go (in what follows, I admittedly followed in the footsteps of this video)…

Hopefully, you’ll read this next bit before you go ahead and rip the top half off. Don’t. Two wiring harnesses require detachment first, one more fragile (and difficult to disconnect) than the other:

I’ll ruin the surprise at this point (sorry). The red-and-black wire combo in the lower left quadrant goes to the speaker. The flex PCB one in the upper right ends up at the dual-microphone array. Stay tuned for more revealing pictures to come.

The former was straightforward to detach:

The latter, a bit trickier:

requiring that I first lift up retaining clips on both sides of the connector soldered to the PCB.

Let’s focus on the top half of the chassis first:

The large metal piece is, likely already unsurprising to you, given the piece of grey thermal transfer tape attached to the middle of it, a big ol’ heatsink for the PCB-housed circuitry normally located directly below it (along with adding heft to the overall assemblage’s weight):

With the heatsink out of the way, the speaker underneath it (and at the very top of the device when fully assembled) is obvious:

Below the speaker are four side-by-side square light guides that route PCB-located LEDs’ illuminations to the outer topside of the device; you’ll see them in action shortly.

And below them is a mini-PCB containing the MEMS microphones:

The mini-PCB is held in place by two brackets, themselves held in place by two screws:

With them removed, there’s still a minor matter of some adhesive to deal with:

Voila:

After retracing my steps to put the mic mini-PCB back in place, I tackled the output transducer next. First, I removed the transparent housing around its backside, which both transforms this portion of the design into a closed-box (i.e., “sealed”) speaker enclosure and suppresses the sound it generates from “leaking” into the microphones’ inputs:

Once again, with aspirations of returning the device to a fully functional state post-teardown, I reversed course and put everything back together again, then switched my attention to the lower half of the chassis. You can already tell, even from the overview image, where the thermal tape on the heatsink had originally attached:

Before going any further, here’s a look at both sides of the rectangular Wi-Fi 6 (802.11ax) antennae on both sides of the device, lower left (inside view first, then outside):

and upper right (ditto).

The Wi-Fi subsystem is dual-band (2.4 GHz and 5 GHz), so I’m guessing there’s one antenna dedicated to each band. The cables initiating at each antenna terminate at RP-SMA connectors on the lower right corner of the PCB. Also shown here is the Fn-Link Technology 6252B-SRB wireless communications module that manages both Wi-Fi 6 and Bluetooth, and the Bluetooth antenna itself. Also, in the lower left of the photo is one of the four PCB-resident LEDs, each surrounded by grey rubber, into which the square light guides seen earlier can be inserted:

Here’s another perspective on the Bluetooth antenna:

Above the wireless communications subsystem is a piece of grey tape which, when lifted out of place, reveals what I believe is the (presumably class D) audio amplifier for the speaker output, judging by its proximity to the speaker cable harness connector:

At lower left, again, identity-assumed per its connector proximity (this time for the microphone array flex PCB) is the corresponding (both amplification and digitization?) subsystem for the microphone inputs. To their right are the other three LEDs:

In the upper left is, I believe, the device’s power generation, regulation, and management subsystem (proximity-assessed once again; note the input barrel connector above it):

Underneath the piece of foam bridging between the USB-A 3.0 and HDMI connectors is what I originally thought might be something substantive, semiconductor-wise:

Alas, it ended up being just a few more passives:

And of course there’s the sizeable Faraday cage dominating the PCB landscape. But, putting whatever’s underneath it (although I already have ideas) aside for a moment, let’s first get the lay of the land overview of the PCB underside:

Whaddya know…there’s another thermal tape-augmented heatsink here:

And another foam square-augmented Faraday cage:

Prior to popping it off, I first need to fully free the PCB…which necessitated disconnecting those previously glimpsed Wi-Fi antennae connectors:

That’s better:

Note once again the antennae on either side of the underside chassis’ insides, and the heatsink in the middle. Above the left-side antenna is the microphone mute switch:

which, when assembled, mates with the PCB-mounted switch assembly at far right on this shot:

About that Faraday cage…as previously mentioned (and as always), I’m striving to return this device to full functionality post-teardown, so I need to be careful when popping the top off:

Success! Not much to see here, but a bunch of passives, likely associated with ICs on the other side of the PCB. The thermal tape similarly likely assists in removing heat generated by those other-side ICs. Even though heat generally goes up, some of it will also radiate through the PCB, ultimately destined for dissipation by the previously seen heatsink on the bottom of the device:

Speaking of which, let’s return to the larger Faraday cage on the PCB topside.

Careful…careful…

I’m two for two. And as expected, the “meat” of the semiconductor content is here:

In the upper left is the 3 GByte DRAM, likely multi-die stacked in construction (as I’ve discussed in detail recently), and marked thusly:

Rayson
RS768M32LX4
D4BNR63BT
2402CNPFV

The results of a web search on “RS768M32LX4” suggest that it’s LPDDR4 in technology and 3733 Mbps in peak data transfer rate.

To its right is the Amlogic S905X4 application processor, whose presence I already tipped you off to in late April. And below them both is the 32 GByte e.MMC flash memory storage module:

FORESEE (from Longsys, strictly speaking)
FEMDNN032G-A3A55
H23092453972340
001

Here are some side views of the PCB, after putting the Faraday cage covers back in place:

And after carefully reconnecting the Wi-Fi antennas:

and the microphone and speaker cable harnesses:

squeezing the two halves of the enclosure back together and reinstalling the screws and rubber “foot”, I connected the wall wart, plugged it in, and crossed my fingers:

Huzzah! The four red lights you see in this photo indicate that the mic array is currently muted:

And lest you doubt, the HDMI output is fully functional, too:

About that on-screen remote-control notation…I earlier mentioned that the gently used device I tore down today delivered ancillary benefits for me. Had I not been an honest fellow, the benefits might have been even more bountiful. When it initially arrived, I powered it up and noticed that it still had the previous owner’s Google account info configured in it, including access to purchased content (and potentially, the ability to both buy and rent even more of it if I so desired). I immediately factory-reset it and then messaged the seller with a heads-up to be more thorough about wiping devices before shipping them to their new homes in the future!

The remote control 

But about that remote control…as alluded to earlier, this is actually the second onn. 4K Pro in my possession. I bought the first one from Walmart last August, right after they were released:

which you can chronologically tell, among other reasons, because Walmart has subsequently transitioned the packaging’s color scheme and broader contents:

The other means of indicating when I’d bought it is that, although Walmart advertises it as including a remote control that not only supports the aforementioned “Find Remote” functionality and embeds a Google Assistant-supportive microphone in addition to the mics in the device itself, but also offers backlit keys and a “Free TV” button:

at least some (including mine) initial onn. 4K Pro shipments came, for unknown reasons, bundled with a prior-revision remote control absent those latter two features. Here’s the one that was in the box alongside my original device:

And, to my ancillary-benefits comments, here’s the full-featured newer-version one that came with the device I more recently acquired off eBay:

Here are both remotes alongside my original device and other in-box goodies that came with it:

mimicking another one of Walmart’s “stock” images.

Here’s a PDF copy of the Quick Start guide that’s also in the box. And speaking of which, here are the results of my packing everything back into the box, simulating (in reverse, albeit also including both remotes; I now have a spare in case “Find Remote” ever fails!) what the insides looked like when I unpacked the original device late last summer:

With that, having passed through 2,000 words a few paragraphs ago, I’ll wrap up and await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Perusing Walmart’s onn. 4K Pro Streaming Device with Google TV: Storage aplenty appeared first on EDN.

Chiplet design basics for engineers

Mon, 07/28/2025 - 11:22

The world is experiencing an insatiable and rapidly growing demand for artificial intelligence (AI) and high-performance computing (HPC) applications. Breakthroughs in machine learning, data analytics, and the need for faster processing across all industries fuel this surge.

Application-specific integrated circuits (ASICs), typically implemented as system-on-chip (SoC) devices, are central to today’s AI and HPC solutions. However, traditional implementation technologies can no longer meet the escalating requirements for computation and data movement in next-generation systems.

From chips to chiplets

Traditionally, SoCs have been implemented as a single, large monolithic silicon die presented in an individual package. However, multiple issues manifest as designers push existing technologies to their limits. As a result, system houses are increasingly adopting chiplet-based solutions. This approach implements the design as a collection of smaller silicon dies, known as chiplets, which are connected and integrated into a single package to form a multi-die system.

For example, Nvidia’s GPU Technology Conference (GTC) has grown into one of the world’s most influential events for AI and accelerated computing. Held annually, GTC brings together a global audience to explore breakthroughs in AI, robotics, data science, healthcare, autonomous vehicles, and the metaverse.

During his GTC 2025 keynote, Nvidia president, co-founder, and CEO Jensen Huang emphasized the need for advanced chiplet designs, stating: “The amount of computation we need as a result of agentic AI, as a result of reasoning, is easily 100 times more than we thought we needed this time last year.”

Despite a wide range of analyst expectations, explosive growth is undisputed; chiplets are becoming the default way to build large AI/HPC dies (Figure 1).

Figure 1 Chiplet market forecast illustrates its explosive growth. Source: Nomura and MarketUS

Figure 1 above represents the center of gravity of several published forecasts. Tools, technologies, and ecosystems are coming together with a 2026-27 inflection point to facilitate designers’ goal of being able to purchase complex chiplet IP on the open market.

These chiplets will adhere to standard die-to-die (D2D) interfaces, allowing them to operate plug-and-play or mix-and-match. This is expected to generate explosive growth in the chiplet market, reaching at least USD 100 billion by 2035, with some forecasts more than doubling that figure.

Why chiplets?

One increasingly popular approach is to take an existing monolithic die design and disaggregate it into multiple chiplets. A simplistic representation of this is depicted in Figure 2.

Figure 2 Monolithic die (left) is shown vs. multi-die system (right). Source: Arteris

In monolithic implementations, reticle limits impact scalability, and yields fall as the die size increases. It’s also harder to reuse or modify IP blocks quickly, and implementing all the IPs at the same process technology node can be inefficient.

Chiplet-based multi-die systems offer multiple advantages. When the design is disaggregated into various smaller chiplets, yields improve, and it’s easier to scale designs, currently up to 12x today’s reticle limit. Also, each IP can be implemented at the most appropriate technology node. For example, high-speed logic chiplets may use the 3-nm node, SRAM memory chiplets the 7-nm node, and high-voltage input/output (I/O) interfaces the 28-nm node.

Observe the red bands shown in Figure 2. These represent a network-on-chip (NoC) interface IP. In a multi-die system, each chiplet can have its own NoC. The chiplet-to-chiplet interfaces, known as die-to-die connections, are typically implemented using bridges based on standard interconnect protocols and physical layers such as BoW, PCIe, XSR, and UCIe.

Aggregation, disaggregation, and re-aggregation

As chiplet-based designs gain traction, it’s essential to understand how today’s SoCs are typically assembled. Currently, the predominant method is to gather a collection of soft IPs, represented at the register transfer level (RTL) of abstraction, and aggregate them into a single, monolithic design. Most of these IPs are sourced from trusted third-party vendors, with the SoC design team creating one or two IPs that will differentiate the device from competitive offerings.

To successfully integrate these IPs into a cohesive design, two other aspects are essential beyond the internal logic that accounts for most of an IP block’s transistors. The first is connectivity information, including port definitions, data widths, operating frequencies, and supported interface protocols. The second is the configuration and status registers (CSRs) set, which must be placed appropriately within the overall SoC memory map to ensure correct system behavior.

Because of this complexity, performing this aggregation by hand is no longer possible. IP-XACT is an IEEE standard (IEEE 1685) that defines an XML-based format for describing and packaging IPs. To facilitate automated aggregation, each IP has an associated IP-XACT model.

As SoC complexity continues to rise, it is becoming increasingly common to take an existing monolithic die design and disaggregate it into multiple chiplets. To support this chiplet-based design, the tools must be able to disaggregate an SoC design into multiple chiplets, each of which may contain many original soft IPs. In addition to partitioning the logic, the tools must generate IP-XACT representations for each chiplet, including connectivity and registers.

Technology is here now

AI and HPC workloads are advancing quickly, driving a fundamental shift toward chiplet-based architectures. These designs provide a practical solution to meet the increasing demands for scalability and efficient data movement. They require new methodologies and supporting technology to manage multi-die systems’ design, assembly, and integration.

Take, for instance, Arteris’ multi-die solution, which automates key aspects of multi-die design. Magillem Connectivity and Magillem Registers support the assembly and configuration of systems built from IP blocks or chiplets. These tools manage both disaggregation of monolithic designs and re-aggregation into multi-die systems across the design flow.

On the interconnect side, Arteris supplies both coherent and non-coherent NoC IP. Ncore enables cache-coherent communication across chiplets, presenting a unified memory system to software. FlexNoC and FlexGen provide non-coherent options that are compatible with monolithic and multi-die implementations.

Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.

 

Register for the virtual event The Future of Chiplets 2025 held on 30-31 July.

Related Content

The post Chiplet design basics for engineers appeared first on EDN.

Flip ON Flop OFF for 48-VDC systems

Fri, 07/25/2025 - 17:32

There have been numerous circuits published in EDN as design ideas (DIs) over the past few months, centered on the “Flip ON Flop OFF” circuit originally published by Stephen Woodward. These are all designed for DC voltages of less than 15 V, since this is the maximum power supply voltage of the CMOS ICs used in their design.

Wow the engineering world with your unique design: Design Ideas Submission Guide

There are several applications that use 48 VDC as the supply voltage, such as telecom equipment, solar panel controllers, and EV controllers. In general, DC on/off switches are bulky, as there is no current zero-breaking concept as in the case of AC circuits. A DC on/off switch will break the full load current, leading to arcing and contact erosion. Because of this, bulk-sized switches with higher current capacity are employed.

Figure 1’s circuit can flip on and flop off 48 VDC with a tiny push button. D1, a 5.6-V Zener diode, is connected to the base of transistor Q2, so Q2’s emitter voltage becomes around 5 VDC (Vz − Vbe).

ICs U1 and U2 operate from this 5 VDC. When the pushbutton (PB) is pressed momentarily, a small pulse is generated and counted by U1. Its LSB pin goes HIGH, which is applied to the gate of Q1; hence Q1 conducts, and the output gets 48 VDC. On the next push of PB, the LSB pin of U1 goes LOW, the gate of Q1 goes LOW, and Q1 stops conducting, making the output voltage zero. This action repeats with every push.

Figure 1 The flip on, flop off circuit for 48 V. The output gets 48 VDC when you push PB once momentarily; on the next push, the output becomes 0 V. U1 and U2 operate at 5 VDC only. Connect the Vcc pins of U1 and U2 to VDD and the ground pins to VSS, as shown in the above circuit. Use a heatsink on Q1 for higher currents.

Since PB carries only around a milliamp, a low-current, sleek pushbutton is sufficient to switch the high-current 48-V supply ON or OFF. With a proper heatsink on Q1, this circuit can switch DC currents up to several amps, per the Q1 datasheet.

Both R1 and C1 are for PB switch debounce. Both R2 and C2 are for the power-on reset of U1.

If galvanic isolation is needed (this may not always be the case), you may connect an ON/OFF switch ahead of the input. In this topology, on-load switching is handled by the PB-operated circuit, and the ON/OFF switch switches zero current only, so it does not need to be bulky; you can select a switch that passes the required load current. While switching ON, first close the ON/OFF switch and then operate PB to connect. While switching OFF, first push PB to disconnect, and then open the ON/OFF switch.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Flip ON Flop OFF for 48-VDC systems appeared first on EDN.

1-GHz TDS upgrade triples EMI test speed

Fri, 07/25/2025 - 17:31

The Keysight N9048B PXE EMI test receiver, paired with a standalone stream processing unit (SPU), delivers real-time, gapless 1-GHz time domain scan (TDS) bandwidth. With the SPU upgrade, the receiver covers 30 MHz to 1 GHz in a single step—down from three in the previous version—tripling EMI test speed. Together, the units enable faster, more accurate EMI testing.

With 1-GHz FFT bandwidth, the system accelerates EMI scans by covering the CISPR C and D bands in a single pass and supports user-selectable resolution bandwidths of 9 kHz, 120 kHz, and 1 MHz. Real-time, gapless capture ensures no transient events are missed, while high sensitivity and wide dynamic range reveal signals close to the noise floor.

The test setup shortens troubleshooting time from hours to minutes and fully complies with CISPR 16-1-1:2019 requirements. Additionally, the standalone SPU provides a path for future upgrades, offering long-term flexibility.

The Keysight N9048B PXE EMI receiver will be showcased at Techno-Frontier 2025 in Tokyo at the TOYO booth. Learn more about the N9048BSPU stream processing unit by viewing the flyer here.

N9048B PXE product page

Keysight Technologies 

The post 1-GHz TDS upgrade triples EMI test speed appeared first on EDN.

Zigbee/BLE module enables scalable IoT networking

Fri, 07/25/2025 - 17:31

Quectel’s KCMA32S Zigbee/BLE module combines a compact design with versatile connectivity for a wide range of IoT devices. It is built on Silicon Labs’ ultra-low-power EFR32MG21 wireless SoC, which integrates an Arm Cortex-M33 processor running at up to 80 MHz and supports Zigbee 3.0 and BLE 5.3 for concurrent protocol operation.

The KCMA32S enables mesh networking over Zigbee and BLE, supporting scalable, many-to-many communication for smart lighting, building automation, and home networks. An optional Secure Vault feature adds advanced security, while flexible memory configurations—up to 96 KB of SRAM and 1024 KB of flash—offer ample headroom for application development.

With its small-scale 20×12×2.2-mm LCC+LGA form factor, the KCMA32S helps reduce both size and cost in end products. It offers up to 20 GPIOs, multiplexable via the QuecOpen SDK for interfaces such as I²C, UART, SPI, and I²S. The module delivers a receive sensitivity of –104 dBm and transmit power up to +20 dBm, with optional PCB antenna, RF coaxial connector, or pin antenna interfaces.

A timeline for availability of the KCMA32S Zigbee/BLE module was not provided at the time of this announcement.

KCMA32S product page

Quectel Wireless Solutions 

The post Zigbee/BLE module enables scalable IoT networking appeared first on EDN.

RF transceiver enables wideband SDR systems

Fri, 07/25/2025 - 17:31

Sequans’ Iris SQN9506 wideband RF transceiver covers 220 MHz to 7.125 GHz with up to 200 MHz of instantaneous bandwidth. Designed for software-defined radio (SDR) in defense, aerospace, drones, V2X, routers, and other 5G systems, it provides 20 receive and 4 transmit paths, supporting up to 4×4 MIMO in both Tx and Rx modes.

The SQN9506’s integrated RF architecture includes 10 RF synthesizers for concurrent scanning and observation. Unlimited frequency hopping with fast switching enables robust anti-jamming performance, allowing real-time hopping across channels during transmission and reception. The transceiver also supports digital interfaces such as DigRFv4 and JESD204C for both data and control.

Easily integrated with any FPGA processor, the single-die device offers low power consumption—drawing just 0.2 mA in sleep mode, 3 mA in standby, and 8 mA when idle. It is housed in a 178-pin BGA package with dimensions of 8.4×4.8×1 mm.

A timeline for availability of the Iris SQN9506 RF transceiver was not provided at the time of this announcement.

Iris SQN9506 product page 

Sequans Communications 

The post RF transceiver enables wideband SDR systems appeared first on EDN.

ICs guard automotive systems from reverse power

Fri, 07/25/2025 - 17:31

Two diode controllers, the AP74502Q and AP74502HQ from Diodes, provide 80-V reverse battery polarity protection for automotive systems. They also offer load disconnect to guard against overvoltage and undervoltage conditions. Compatible with 12-V, 24-V, and emerging 48-V battery systems—including those in hybrid and electric vehicles—the devices are well-suited for ADAS, body control modules, infotainment, and exterior lighting.

The AEC-Q100-qualified controllers differ in peak gate source current. The AP74502Q sources 60 µA (typical), enabling smooth startup with inherent inrush current control—useful in applications where limiting surge current is critical. By comparison, the AP74502HQ delivers a higher 11-mA peak, allowing quicker MOSFET turn-on with a 1-µs typical response.

Both controllers support input voltages as low as 3.2 V for reliable operation during cold crank conditions and provide a peak gate turn-off sink current of 2.3 A for fast MOSFET switching during voltage events. With the charge pump enabled, their quiescent current is 62 µA, dropping to 1 µA when disabled, minimizing power consumption and battery drain.

Housed in SOT28 packages, the AP74502Q and AP74502HQ cost $0.27 each in 1000-unit quantities.

AP74502Q product page   

AP74502HQ product page  

Diodes

The post ICs guard automotive systems from reverse power appeared first on EDN.

Kioxia adds 245-TB SSD to enterprise lineup

Fri, 07/25/2025 - 17:31

Raising the bar for enterprise NVMe SSDs, Kioxia’s LC9 drive delivers 245.76 TB in both U.2 (2.5-inch) and E3.L form factors. It joins the previously announced 122.88-TB model in the U.2 (2.5-inch) and E3.S form factors, expanding the LC9 series to meet the performance and efficiency demands of generative AI—while helping replace multiple power-hungry HDDs.

LC9 drives feature a PCIe 5.0 interface, offering up to 128 GT/s via a Gen5 single x4 or dual x2 configuration. They are compliant with NVMe 2.0 and NVMe-MI 1.2c specifications and meet many of the requirements in the Open Compute Project (OCP) Datacenter NVMe SSD specification v2.5.

These high-capacity SSDs integrate multiple 8-TB devices, each built from 32 stacked dies of 2-Tb BiCS8 3D QLC NAND in a compact 154-ball package. The drives also incorporate a controller and firmware, along with features such as die failure recovery, parity protection, and power loss protection. Dual-port operation enables high availability, and data security options include SIE, SED, and planned FIPS 140-3 compliance.

The LC9 series of SSDs is now sampling to select customers.

Kioxia

The post Kioxia adds 245-TB SSD to enterprise lineup appeared first on EDN.

When a ring isn’t really a ring

Thu, 07/24/2025 - 18:07

Early in my engineering career, I worked with a couple of colleagues on an outside project. We had a concept for a security system for gaming arcades. At the time, arcades were very popular, hosting games like Pac-Man, Space Invaders, and Pinball. One of the business problems, though, was the theft of coins from the gaming machines. Apparently, when staff members were emptying the coin boxes, they would pocket a handful of coins. Theft in these arcades was said to be around 25%.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale

Our concept for preventing these thefts was a device that consisted of two parts. One micro-based device was installed in each of the arcade games. This counted the coins as they entered the slot. Then, periodically, the total coin count and game ID were transmitted, via the power line, to the back office. In the back office was the receiver. It monitored the power line and collected all the transmissions from the various games. This back-office device was also connected to a telephone landline, and once a day, the central office would call into the back-office device to have the daily data sent to it. The hand count of coins could then be reconciled with the electronic coin count from all the machines.

My colleagues and I divided up the work, with one doing the schematic and PCB prototypes. Another did the enclosures, labeling, etc. I did the firmware for the two pieces of equipment. After many months of evening work, we had a system that performed just as we expected. We also got a test site identified to install a complete system. As the arcade was more than 1000 miles away, we had someone at the other end install the system. After a few days, we got a call from the arcade operator telling us the office device would not answer the phone call into it. The hardware design was rechecked to see if the opto-isolator, signaling the firmware of a high voltage on the ring line, was designed correctly to take into account lower-level ring voltages—no issue there. This issue fell on me as it appeared to be a firmware issue. I tested my firmware dozens of times with various changes using an actual landline—it always worked. After many days of testing, I announced that I could not find any issues.

As a last resort, we had the hardware engineer fly to the arcade site with a raft of test equipment. After only a few hours, he called and said he had found the issue. The standard for ringing for a landline is defined by ANSI T1.401-1988 section 5.4.2, which I followed for the firmware. According to this standard, the ring cadence consists of 2 seconds of ringing followed by 4 seconds of silence. The phone system, in the town where the arcade was, followed this…sort of. During the ring, there was a short dropout (about 80 ms, if I remember correctly). So, what the firmware saw was about 1 second of ring, no ring for 80 ms, then 920 ms of ring, and then 4 seconds of silence. The firmware, noting that the ring was only one second long, determined that it wasn’t a valid ring and therefore wouldn’t answer. The discovery of the issue was long, difficult, and expensive. The fix was easy to implement in firmware. After updating the firmware, the arcade system worked very well (we never got rich off it, though…another, non-technical, story).

The takeaway here is not how to construct landline phone answering firmware; those days are long gone. But the lesson here is that when you have an issue, suspect everything. We continued to have discussions on why the system would not answer the phone when we knew it was sensing the ring. We never thought that maybe the cadence, defined by an ANSI standard, would not be correct. Why the town’s telephone ring system had an 80 ms gap was never discovered, but it obviously didn’t meet the spec. So, if you can’t find a problem in your device, maybe it’s the other device(s) you’re connecting to. And at that point, the other system needs to be checked against its specs.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content

The post When a ring isn’t really a ring appeared first on EDN.

PCB design tips for EMI and thermal management in 800G systems

Thu, 07/24/2025 - 10:34

As the industry accelerates toward 800G Ethernet and optical interconnects, engineers face new challenges in managing electromagnetic interference (EMI) while ensuring signal integrity at unprecedented speeds. The transition to 112G pulse amplitude modulation 4-level (PAM4) SerDes introduces faster edge rates and dense spectral content, elevating the risk of radiated and conducted emissions.

Simultaneously, compact module form factors such as QSFP-DD and OSFP force high-speed lanes, DC-DC converters, and control circuitry into tight spaces, increasing the potential for crosstalk and noise coupling. Power delivery noise, insufficient shielding, and poor return path design can easily transform an 800G design from lab success to compliance failure during emissions testing.

To avoid late-stage surprises, it’s critical to address EMI systematically from the PCB level up, balancing stack-up, routing, and grounding decisions with high-speed signal integrity and practical manufacturability.

This article provides engineers with actionable PCB design strategies to reduce EMI in 800G systems while maintaining high performance in data center and telecom environments.

Layout considerations

For chip-to-chip 112G PAM4 signaling, the key frequency is the Nyquist frequency, which is half of the baud rate. PAM4 encodes 2 bits per symbol.

  • Therefore, the baud rate (symbol rate) is half of the bit rate. For 112 Gbps, the baud rate is 112 Gbps / 2 = 56 Gbaud (gigabaud).
  • The Nyquist frequency is half of the baud rate. So, the Nyquist frequency for 112G PAM4 is 56 Gbaud / 2 = 28 GHz.
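A one-off Python sketch of this rate arithmetic:

```python
# Sketch: baud rate and Nyquist frequency for 112G PAM4.

bit_rate = 112e9     # bits/s
bits_per_symbol = 2  # PAM4 carries 2 bits per symbol

baud_rate = bit_rate / bits_per_symbol  # 56 Gbaud
nyquist = baud_rate / 2                 # 28 GHz

print(f"baud rate        : {baud_rate / 1e9:.0f} Gbaud")
print(f"Nyquist frequency: {nyquist / 1e9:.0f} GHz")
```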

The maximum insertion loss at 29 GHz for 112G medium-reach PAM4 is 20 dB. Megtron 7 offers a low dissipation factor (Df) of 0.003 at 29 GHz, which is adequate for 112G. A Df of 0.003 is squarely in the “very low loss” category. It means that the material dissipates a minimal amount of the signal’s energy, allowing more of the original signal strength to reach the receiver.

This helps preserve the critical amplitude differences between the PAM4 levels, enabling a lower bit error rate (BER). Low-cost FR-4 material typically has a Df of 0.015, which is excessive for 112G PAM4.

Aperture and shielding effectiveness

To avoid EMI, the wavelength relationship is essential, especially when considering wires or openings that may serve as unintentional antennas. An EMI shield’s seam, slot, or hole can function as a slot antenna. When the opening’s dimensions approach a sizable fraction of an interfering signal’s wavelength, it becomes an effective radiator, letting EMI escape and perhaps causing a failure of the radiated emissions test in an anechoic chamber.

As a general guideline, the maximum size of any aperture should be less than λ/20 (one-twentieth of the wavelength) of the highest frequency of concern to achieve efficient EMI shielding. See Figure 1 for typical airflow management openings.

Figure 1 Airflow apertures and shielded ventilation are shown for airflow management. Source: Author

The wavelength is calculated as λ = c / f = (3 × 10^8 m/s) / (28 × 10^9 Hz) ≈ 10.7 mm

Maximum opening dimension = λ / 20 ≈ 0.536 mm

To reduce EMI problems, all apertures for equipment that operate at or are vulnerable to 28-GHz signals should ideally be less than 0.536 mm. The permitted dimensions for apertures decrease with increasing frequencies.
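The λ/20 rule generalizes directly to other frequencies; here is a minimal Python sketch (the 40- and 56-GHz rows are illustrative additions):

```python
# Sketch: maximum recommended shielding aperture (lambda/20 rule).

C = 3e8  # speed of light, m/s

def max_aperture_mm(f_hz):
    """Largest opening (mm) for effective shielding at frequency f_hz."""
    return (C / f_hz) / 20 * 1e3

for f in (28e9, 40e9, 56e9):
    print(f"{f / 1e9:4.0f} GHz -> max aperture {max_aperture_mm(f):.3f} mm")
```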

Routing guidelines and via stub impact at 112G PAM4

The spacing rule between two differential pairs is different for TX-to-TX and TX-to-RX. Generally, the allowed serpentine routing length for 112G PAM4 is shorter than at previous speeds. Serpentine lines have less impact on a weakly coupled differential pair.

A via stub is the unused portion of a through-hole via that extends beyond the layer where the signal transitions (Figure 2). For example, if a signal goes from the top layer to an inner layer via a through-hole, the part of the via extending from that inner layer to the bottom of the board forms a stub.

Figure 2 The diagram provides an overview of PCB via stub. Source: Author

f = c / (4 × L × √ε_eff)

where:

f = resonant frequency of the via stub = 28 GHz

c = speed of light = 3 × 10^8 m/s

L = length of the via stub = 1.533 mm = 60.35 mils

ε_eff = effective dielectric constant = 3.05 at 28 GHz

A via stub length of ~60 mils will resonate near 28 GHz in Megtron 7. For 112G PAM4 designs, this length is too long and can cause serious signal integrity issues.
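The same relation can be run in either direction; here is a minimal Python sketch using the values above:

```python
import math

# Sketch: via-stub quarter-wave resonance, f = c / (4 * L * sqrt(eps_eff)).

C = 3e8         # speed of light, m/s
EPS_EFF = 3.05  # effective dielectric constant at 28 GHz (from the text)

def stub_resonance_ghz(length_mm):
    """Resonant frequency (GHz) of a via stub of the given length (mm)."""
    return C / (4 * length_mm * 1e-3 * math.sqrt(EPS_EFF)) / 1e9

def critical_length_mm(f_ghz):
    """Stub length (mm) that resonates at the given frequency (GHz)."""
    return C / (4 * f_ghz * 1e9 * math.sqrt(EPS_EFF)) * 1e3

print(f"1.533-mm stub resonates at {stub_resonance_ghz(1.533):.1f} GHz")
L = critical_length_mm(28.0)
print(f"28-GHz critical length: {L:.3f} mm (~{L / 0.0254:.0f} mils)")
```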

Power considerations

Generally, 800G transceivers consume between 13 W and 18 W per port for short range; the exact value is given in the module manufacturer’s datasheet. These transceivers use eight 112G lanes to transmit 800G. A 1RU appliance with 32 QSFP-DD ports would need a 25.6T switch. See Figure 3 for a simplified diagram of a 1RU appliance with one ASIC.

Figure 3 Airflow management is shown for 1U high-speed systems incorporating a single ASIC. Source: Author

  • Power consumption for 112G PAM4 SerDes is high (typically 0.5–1.0 W per lane). For example, an eight-lane SerDes interface consumes, worst case, Power = 8 × 1 W = 8 W.
  • Tcase_max = 90°C, Tambient_max = 50°C. Rth = (90 – 50) / 8 = 5°C/W. System designers should ensure the heatsink and thermal interface material provide ≤ 5°C/W.
  • Q = power to be dissipated (watts). ΔT = allowable air temperature rise across the system (°F). Conversion factor = 3.16.
  • CFM = Q × 3.16 / ΔT = 2000 × 3.16 / 15 = 421
  • In 1RU, engineers use multiple 40 × 40 × 56 mm high-RPM fans for airflow distribution, each typically pushing ~25–30 CFM. Fans required = 421 / 25 = 16.8 ≈ 17 fans. Accommodating this many fans is difficult because external power supplies occupy rear space; the sketch after this list walks through the same numbers.
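Here is a minimal Python sketch of that thermal budget; the 2000-W system load, 15°F air rise, and 25-CFM-per-fan figures come from the bullets above, and everything else is illustrative.

```python
import math

lanes, w_per_lane = 8, 1.0
serdes_w = lanes * w_per_lane            # 8 W worst case per module
rth_max = (90 - 50) / serdes_w           # Tcase_max, Tambient_max -> 5.0 C/W

q_watts, delta_t_f = 2000, 15            # system heat, allowed air-temp rise
cfm_total = 3.16 * q_watts / delta_t_f   # ~421 CFM
fans = math.ceil(cfm_total / 25)         # 40x40x56 mm fans @ ~25 CFM each

print(rth_max, round(cfm_total), fans)   # 5.0 421 17
```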

Design recommendations

As 800G hardware and 112G PAM4 SerDes become standard in next-generation data center and telecom systems, engineers face a multifaceted design challenge: maintaining signal integrity, controlling EMI, and managing thermal constraints within high-density 1RU systems.

Careful PCB material selection, such as low-loss Megtron 7, precise routing to minimize via stub resonance, and disciplined aperture management for shielding are essential to avoid signal degradation and EMI test failures. Simultaneously, the high power density of 800G optics and SerDes requires advanced thermal design, airflow planning, and redundancy considerations to meet operational and reliability targets.

By systematically addressing EMI and thermal factors early in the design cycle, engineers can confidently build 800G systems that pass compliance testing while delivering high performance under real-world conditions. Doing so not only avoids costly late-stage redesigns but also ensures robust deployment of high-speed systems critical for the evolving demands of cloud and AI workloads.

Ujjwal Sharma is a hardware engineer specializing in high-speed system design, signal/power integrity, and optical modules for data center hardware.

Related Content

The post PCB design tips for EMI and thermal management in 800G systems appeared first on EDN.

PWM + Quadrac = Pure Power Play

Wed, 07/23/2025 - 16:58

It’s just a fact: I’m curiously fond of topologies that combine PWM switching and filtering circuitry with power-handling devices like adjustable voltage regulator chips. This scheme makes power-capable DACs with double-digit wattage outputs. For example, “0 V to -10 V, 1.5 A LM337 PWM power DAC.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

The simple circuit in Figure 1 joins this favored family but makes its siblings look weak and wimpy by upping the power ante by more than a factor of 10. It attains output capabilities over a kilowatt and gets there with a total parts count of only nine inexpensive discretes. Here’s how it works.

Figure 1 The quadrac Q2 conduction-angle triggering time constant = R1C1 / DF, where DF is the PWM duty factor from 0 to 100%.

The power control method in play is variable AC phase angle conduction via a quadrac (also sometimes called an alternistor). Quadracs are bidirectional thyristors that comprise the dual functions of a triac (to do the power switching) and an integrated diac (to trigger the triac).

They’re popular in applications like variable-speed power tools and lamp dimmers because they’re cheap, efficient, and durable. What’s also nice is that the only support components they need for AC power control are a small potentiometer and a timing capacitor (both also cheap) to adjust triggering delay and thereby the phase angle of conduction, thence power output.

Q2 is wired in exactly that traditional way, except that opto-isolator Q1 and R1 fill the role of the pot. The duty factor (DF) of Q1’s PWM input sets its average conductance and thereby the effective trigger delay: from a DF = 1 minimum of ~1.7 ms, for an upper 95% of full output power, down to a DF = 0 delay that’s longer than the entire 8.33-ms AC half-cycle. Which is to say: OFF. The PWM cycle rate isn’t critical but should be at least 10 kHz to avoid possible annoying beat frequencies, since it’s not synchronized with the 60-Hz AC cycle.

The relationship between DF, phase angle, and percent power output is equal to the time integral of [(Vpk·sin(r))²], which is shown in Figure 2.

Figure 2 The (Vpk·sin(r))² power output versus the PWM DF. The right axis is the voltage of the trigger capacitor (C1), the left axis is the fraction of the full output power versus trigger phase, and the x-axis is the AC phase in radians.
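For readers who want to reproduce the curve in Figure 2, here is a hedged Python sketch of the normalized sin² integral; the closed form below follows from ∫sin²x dx = x/2 − sin(2x)/4 and is a reconstruction, not taken from the article.

```python
import math

def power_fraction(theta_rad: float) -> float:
    """Fraction of full-wave power delivered when conduction starts at
    phase angle theta (integral of sin^2 from theta to pi, normalized)."""
    return 1 - theta_rad / math.pi + math.sin(2 * theta_rad) / (2 * math.pi)

# ~1.7 ms trigger delay into an 8.33 ms half-cycle (DF = 1):
theta = math.pi * 1.7 / 8.33
print(f"{power_fraction(theta):.0%}")  # ~95% of full output power
```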

Because Q1, unlike Q2, isn’t bidirectional, the D1-4 diode bridge is necessary to keep it upright despite 60-Hz phase reversals. Q1’s typical current transfer ratio of 80% makes ~10 mA of PWM drive current necessary. Current limiter R2’s 330 Ω assumes a 5-V rail and a low impedance driver and will need adjustment if either assumption is violated. The Vc1 trigger voltage is 38 V ±5 V with ±3 V max asymmetry. These tolerances place a limit on DF versus power precision.

The full-throttle Q2 power output efficiency is around 99%, but Q2’s max junction temperature rating is only 110°C. Adequate heatsinking of Q2 will therefore be wise if outputs greater than 200 W and/or toasty ambient temperatures are expected.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post PWM + Quadrac = Pure Power Play appeared first on EDN.

A battery backup for a solar-mains hybrid lamp 

Tue, 07/22/2025 - 17:35
1. The solar-mains hybrid lamp

In the April 4, 2024, issue of EDN, the design of a solar-mains hybrid lamp (HL) was featured. The lamp receives power from both a solar panel and a mains power supply to turn on an array of LED lamps. Even when solar power varies widely, it supplies a constant light output by dynamically drawing balanced power from the mains supply. Also, it tracks the maximum power point very closely.

Wow the engineering world with your unique design: Design Ideas Submission Guide

1.1 Advantages

The advantages of the HL are as follows:

  1. It utilizes all the solar power generated and draws only the necessary power from the grid to maintain constant light output.
  2. It does not inject power into the grid; hence, it does not contribute to any grid-related issues.
  3. It uses a localized power flow with short cabling, resulting in negligible transmission losses.
  4. It uses DC operation, resulting in a simple, reliable, and low-cost system.
  5. Generated PV power is utilized even if the grid fails, thus acting as an emergency lamp in the event of a grid failure during the daytime.
  6. It has a lengthy lifespan of 15 years with minimal or no maintenance, resulting in a good return on investment.
1.2 Disadvantages

The limitations of the HL are as follows: 

  1. It does not provide light if the grid fails after sunset.
  2. Solar power is not utilized outside of office hours or on holidays.

As mentioned above, the HL’s utility can be fully realized in places such as hospitals, airports, and malls, as it can be used every day of the week.

In offices that are open for work only 5 days per week, the generated PV power will be wasted on weekends and outside of office hours (early mornings and evenings). 

For such applications, to fully utilize the generated PV power, a battery backup scheme is proposed. It is designed as an optional add-on feature to the existing HL. The PV power, which would otherwise go to waste, can now be stored in the battery whenever the HL is not in use. The stored energy can be utilized instead of mains power on workdays to reduce the electricity bill. In cases where the grid fails, it will work as an emergency lamp. 

2. Battery backup block diagram

The block diagram of the proposed scheme is shown in Figure 1. It consists of an HL having an array of 9 LED lamps, A1 to A9. Each lamp has five 1-W white LEDs connected in series, mounted on a metal core PCB (MCPCB). For more details, refer to the previous article, “Solar-mains HL.” Here, the HL is used as is, without any changes.

The PV voltage (Vpv) is supplied through a two-pole two-way switch S1 to the HL. Switch S1A is used to connect the PV panel to either the lamp or to the battery. As shown in the figure, the PV panel is connected to the battery through an Overvoltage Cutoff circuit. This circuit disconnects PV power when the battery voltage reaches its maximum value of Vb(MAX). 

A single-pole two-way switch S2 is used to select either MAINS or BAT to feed power to the VM terminal of the HL. When S2 is in the BAT position, battery power is fed through the undervoltage trip circuit. Whenever the battery voltage drops to the minimum value Vb(MIN), the HL is disconnected from the battery. Switch S1B is used to disconnect the battery/mains power to the HL when S1 is in the CHARGE position.

Figure 1 The proposed add-on battery backup system for HL.

Note: This simple battery cutoff and trip circuit has been implemented to prove the concept of battery backup using the existing HL. In the final design, the Overvoltage Cutoff circuit should be replaced with a solar charge controller, which will track the maximum power point as the battery charges. Readily available off-the-shelf solar charge controllers could be used. The selection of a solar charge controller is given in Section 5.

Here are the lamp specifications: 

  1. Solar PV panel: 30 Wp, Vmp = 17.5 V, Imp = 1.7 A
  2. Adapter specifications: Va = 18 V, 2 A
  3. Lead-acid battery: 6 V, 5 Ah (3 batteries connected in series)
  4. Battery nominal voltage: Vb = 18 V, Vb(MAX) = 19 V, Vb(MIN) = 17 V
  5. Lamp power output: 30 W
3. Overvoltage and undervoltage circuits 

The circuit diagram of the battery Overvoltage Cutoff and Undervoltage Trip is shown in Figure 2. Three lead-acid batteries (6 V, 5 Ah) connected in series are used for storing solar energy. The battery is connected to the solar panel Vpv through a P-channel MOSFET T1 (IRF9540). The Schottky diode D1 (1N5822) is connected in series to prevent the battery from getting discharged into the solar panel when it is not producing any power. 

T1 is controlled using comparator CMP1 of IC1 (LM393). The battery voltage is sensed using the potential divider R6 and R7. The reference to the comparator non-inverting pin (3) is generated from a +12-V power supply implemented using the IC2 (LM431) shunt regulator. If the battery voltage is lower than the reference voltage, the CMP1 output (pin 1) is high. This turns on transistor T3, which turns on T1. The green LED_G indicates that the battery is being charged.

Figure 2 The circuit diagram of Overvoltage Cutoff and Undervoltage Trip circuits.

The battery is connected to the load through MOSFET T2 (IRF9540). T2 is controlled using comparator CMP2 of IC1. The battery voltage is sensed using the potential divider R14 and R15, and is connected to the non-inverting terminal (Pin 5). The reference voltage is connected to the inverting terminal (Pin 6). 

So long as the battery voltage is higher than the reference, the CMP2 output remains high. This drives transistor T4, which turns on T2. When the battery voltage drops below the reference, T2 is turned off, thus disconnecting the lamp load. LED_R indicates the battery voltage is within the Vb(MIN) and Vb(MAX) range.
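The comparator thresholds translate directly into divider ratios. The sketch below assumes, purely for illustration, a 5-V comparator reference; the article does not give the actual reference value, and R6/R7 and R14/R15 would be chosen from the resulting ratios.

```python
V_REF = 5.0  # assumed comparator reference voltage (illustrative only)

def divider_ratio(v_trip: float) -> float:
    """R_bottom / (R_top + R_bottom) so the divider hits V_REF at v_trip."""
    return V_REF / v_trip

print(round(divider_ratio(19.0), 3))  # overvoltage cutoff at Vb(MAX) = 19 V
print(round(divider_ratio(17.0), 3))  # undervoltage trip at Vb(MIN) = 17 V
```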

Figure 3 shows the PCB assembled according to the circuit diagram in Figure 2. The connections for the solar panel Vpv, battery Vb, and battery output Vb+ (through the MOSFET T2) are made using three 2-pin screw terminals. 

Figure 3 The assembled PCB for the battery overvoltage cutoff and undervoltage trip circuit.

Figure 4 shows the interconnections of the battery charger circuit with the HL.

Figure 4 A top view of the interconnections of the battery charger circuit with the HL.

The modes of operation of this circuit are captured in Table 1. When S1 is in the CHARGE position, the PV voltage is supplied to the batteries for charging. In this mode, the position of S2 does not affect the charging process. 

When S1 is in the PV position, the HL turns ON. Using S2 we can select either mains power or battery power.

S1       S2       Function
CHARGE   X        Battery charging
PV       MAINS    Hybrid with mains power
PV       BAT      Hybrid with battery power

Table 1 Operating modes of the battery backup circuit: battery charging, hybrid with mains power, and hybrid with battery power. 

4. Integration and testing

Figure 5 shows the integration of the battery protection circuit with the HL and three batteries. The cable from the PV panel is connected to the 2-pin screw terminal labeled as Vpv. Three 6-V batteries in series are connected to the screw terminal Vb. A DC socket labeled Va is mounted for plugging into the adapter pin. In the photograph, S1 is in CHARGE position, so the battery is being charged using PV power. In this case, the position of S2 is irrelevant and will not affect the charging process.

Figure 5 An image of the circuit in Battery Charging mode. The green LED indicates the battery is being charged from the PV panel. The red LED indicates battery power is available for use.

Figure 6 shows the HL turned on using PV power and a battery. In this case, S1 is in the PV position, and S2 is in the BAT position. Note that the LED lamp array (A1 to A9) is facing downwards. On the HL PCB, there are nine red and nine green indicator LEDs. Each pair of LEDs represents 11% of the total power. The photograph shows four green LEDs are ON, which means 44% of the power is coming from solar. The remaining 55% of power is being drawn from the battery. The green and red LED combination changes as the sunlight varies. 

Figure 6 The lamp in Hybrid mode. Four green LEDs indicate 44% of the power is coming from the PV panel. Five red LEDs indicate 55% of the power is being drawn from the battery.

5. Design example of a 90-W HL with battery backup

Here, the design of a 90-W HL with a battery backup is proposed. The nominal working voltage selected is 48 V. 

5.1 HL specs

The specifications for the HL design are as follows: 

  1. Solar Panel Specifications: Power = 30 Wp, Vmp = 17.5 V, Imp = 1.7 A
  2. Number of Solar Panels connected in series: 3
  3. Solar Array Voltage: Vpv = 3 x 17.5 = 52.5 V; Voc = 60 V
  4. Number of LEDs in each MCPCB (A1 to A9): 15 white LEDs of 1 Watt each.
  5. Forward voltage of LED: 3.12 V
  6. Voltage across each lamp (A1 to A9): 15 x 3.12 = 46.8 V
  7. Current through LED lamps: 0.2 A (selected) 
  8. Current limiting resistor [1]: R1 to R9 = (52.5 – 46.8)/0.2 = 28.5 Ω (select 27 Ω/2 W); see the sanity-check sketch after this list.
  9. Adapter specifications: 48 V, 2 A
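A minimal Python sketch of that sanity check, using only the numbers listed above:

```python
vpv = 3 * 17.5        # series solar-array voltage, V
v_lamp = 15 * 3.12    # 15 LEDs x 3.12 V forward drop, V
i_led = 0.2           # selected lamp current, A

r_limit = (vpv - v_lamp) / i_led   # 28.5 ohms -> select 27-ohm standard value
p_r = i_led**2 * 27                # ~1.1 W dissipated, so a 2-W part is safe
print(round(r_limit, 1), round(p_r, 2))
```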

As stated earlier, this lamp can be used without a battery backup in facilities that are open all seven days a week. In these applications, the solar power generated is fully utilized, so the cost of this lamp is minimal. The deployment of a large number of such lamps can significantly reduce the electricity bill. 

However, in offices that operate 5 days a week, the power generated during weekends goes to waste. In cases where another load can utilize the available PV power on weekends, such as a pump, vacuum cleaner, or a battery that needs charging, the PV panel’s output can be connected to that load. This way, we can still use the HL as is. However, if there is no other load that can utilize the PV power, then we must resort to battery backup.

5.2 Battery selection

The battery selection can be as follows: 

  1. Lithium-ion Battery: 13S (13 cells in series), Nominal voltage 48 V
  2. Battery voltages: Vb(MIN) = 42 V, Vb = 46.8 V, Vb(MAX) = 54.6 V
  3. Energy storage capacity (24 Ah):  48 x 24 = 1152 Wh
  4. Solar energy generation per day: 90 W x 6 hrs = 540 Wh
  5. Battery storage: 1152 Wh / 540 Wh = 2.1 or 2 days  
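The storage arithmetic above can be checked with a short Python sketch; the 6 sun-hours-per-day figure is the article’s assumption.

```python
v_nom, capacity_ah = 48, 24
storage_wh = v_nom * capacity_ah         # 1152 Wh of battery storage
daily_solar_wh = 90 * 6                  # 90 W generated x ~6 sun-hours/day
days_to_fill = storage_wh / daily_solar_wh
print(round(days_to_fill, 1))            # ~2.1 days of generation fills the bank
```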
5.3 Solar charge controller specs

A wide range of solar charge controllers is available on the market. To select a suitable charge controller, the following specifications are provided as guidelines:

  1. Battery type: Li-ion, LiFePO4
  2. Nominal Voltage: 48 V
  3. Controller type: MPPT
  4. Maximum output current: 5 A
  5. Protections: battery reverse polarity, solar panel reversal, short circuit, battery overvoltage cutoff, battery low-voltage trip.

Note that the open-circuit voltage (Voc) of the solar array is 60 V; therefore, the selected components should have a voltage rating greater than 60 V. 

This design is for a 90-W HL; however, higher-wattage lamps can also be designed. In that case, the lamp MCPCB selected should have a higher power rating. Alternatively, the number of MCPCBs can be increased to around 16. This way, the array can be arranged in a 4×4 layout. With an increased number of arrays, both the hardware and software of the HL have to be upgraded.

It may be possible to connect two MCPCBs in parallel to increase the lamp power. However, in this case, the two MCPCBs should have a matching LED array forward voltage. This will ensure equal division of lamp current. 

5.4 Scheduling

The design shown here uses manual switches, which can be replaced with semiconductor switches. In this case, the operation of the HL can be automated with a weekly programming cycle. On weekdays, it will work in hybrid mode, drawing on either mains power or battery power. The duration of battery power consumption can be planned to ensure that the battery is available for charging during weekends.

6. Storing the HL’s excess energy

The solar-mains HL proposed earlier provides constant light irrespective of the sunlight conditions. It is a very cost-effective design and can be deployed in large numbers to reduce electricity costs. However, if it is not used on all 7 days of the week, then the solar power gets wasted. To avoid this power wastage, a battery backup system has been proposed here as an add-on feature. Using batteries, the excess solar energy can be stored. The battery backup also lets the lamp serve as an emergency lamp during grid failures.

Vijay Deshpande recently retired after a 30-year career focused on power electronics and DSP projects, and now works mainly on solar PV systems.

Related Content

The post A battery backup for a solar-mains hybrid lamp  appeared first on EDN.

How to prevent overvoltage conditions during prototyping

Tue, 07/22/2025 - 16:40

The good thing about being a field applications engineer is that you get to work on many different circuits, often all at the same time. While this is interesting, it also presents problems. Jumping from one circuit to another involves disconnecting a spaghetti of leads and probes, and the chance for something going wrong increases exponentially with the number of wires involved.

It’s often the most basic things that are overlooked. While the probes and leads are checked and double checked to ensure everything is in place, if the voltage on the bench power supply is not adjusted correctly, the damage can be catastrophic, causing hours of rework.

The circuit described in this article helps save the day. Being a field applications engineer also results in a myriad of evaluation boards being collected, each in a state of modification, some of which can be repurposed for personal use. This circuit is based on an overvoltage/reverse voltage protection component, designed to protect downstream electronics from incorrect voltages being applied in automotive circuits.

Such events are caused by the automotive battery being connected the wrong way or a load dump event where the alternator becomes disconnected from the battery, causing a rise in voltage applied to the electronics.

Circuit’s design details

As shown in Figure 1, MAX16126 is a load dump protection controller designed to protect downstream electronics from over-/reverse-voltage faults in automotive circuits. It has an internal charge pump that drives two back-to-back N-channel MOSFETs to provide a low loss forward path if the input voltage is within a certain range, configured using external resistors. If the input voltage goes too high or too low, the drive to the gates of the MOSFETs is removed and the path is blocked, collapsing the supply to the load.

Figure 1 This is how the over-/reverse-voltage protection circuit works. Source: Analog Devices Inc.

MAX16127 is similar to MAX16126, but in the case of an overvoltage, it oscillates the MOSFETs to maintain the voltage across the load. If a reverse voltage occurs on the input, an internal 1-MΩ resistor between the GATE and SRC pins of the MAX16126 ensures MOSFETs Q1 and Q2 are held off, so the negative voltage does not reach the output. The MOSFETs are connected in opposing orientations to ensure the body diodes don’t conduct current.

The undervoltage pin, UVSET, is used to configure the minimum trip threshold of the circuit while the overvoltage pin, OVSET, is used to configure the maximum trip threshold. There is also a TERM pin connected via an internal switch to the input pin and this switch is open circuited when the part is in shutdown, so the resistive divider networks on the UVSET and OVSET pins don’t load the input voltage.

In this design, the UVSET pin is tied to the TERM pin, so the MOSFETs are turned on when the device reaches its minimum operating voltage of 3 V. The OVSET pin is connected to a potentiometer, which is adjusted to change the overvoltage trip threshold of the circuit.

To set the trip threshold to the maximum voltage, the potentiometer needs to be adjusted to its minimum value and likewise for the minimum trip threshold the potentiometer is at its maximum value. The IC switches off the MOSFETs when the OVSET pin rises above 1.225 V.

The overvoltage clamping range should be limited to between 5 V and 30 V, so resistors are inserted above and below the potentiometer to set the upper and lower thresholds. There are Zener diodes connected across the UVSET and OVSET pins to limit the voltage of these pins to less than 5.1 V.

Assuming a 47-kΩ potentiometer is used, the upper and lower resistor values of Figure 1 can be calculated.

To achieve a trip threshold of 30 V, Equation 1 is used:

1.225 V × (R2 + 47 kΩ + R3) / R3 = 30 V     (Equation 1)

To achieve a trip threshold of 5 V, Equation 2 is used:

1.225 V × (R2 + 47 kΩ + R3) / (R3 + 47 kΩ) = 5 V     (Equation 2)

Equating the previous equations gives Equation 3:

24.49 × R3 = 4.082 × (R3 + 47 kΩ)     (Equation 3)

So,

R3 = 9.4 kΩ

From this,

R2 = 24.49 × R3 − (47 kΩ + R3) ≈ 174 kΩ

Using preferred values, let R3 = 10 kΩ and R2 = 180 kΩ. This gives an upper limit of 29 V and a lower limit of 5.09 V. This is perfect for a 30 V bench power supply.
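A hedged Python sketch of these thresholds, assuming the OVSET divider runs VIN → R2 → 47-kΩ pot → R3 → GND with the pot wiper feeding OVSET (the topology implied by the calculation above):

```python
V_TRIP = 1.225                   # OVSET comparator threshold, V
R2, RP, R3 = 180e3, 47e3, 10e3   # preferred values from the text, ohms

total = R2 + RP + R3
v_max = V_TRIP * total / R3          # pot at minimum: ~29.0 V upper limit
v_min = V_TRIP * total / (R3 + RP)   # pot at maximum: ~5.09 V lower limit
print(round(v_max, 1), round(v_min, 2))
```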

Circuit testing

Figure 2 shows the prototype PCB. The trip threshold voltage was adjusted to 12 V and the circuit was tested.

Figure 2 The modified evaluation kit used for circuit testing. Source: Analog Devices Inc.

The lower threshold was measured at 5.06 V and the upper threshold at 28.5 V. With a 10-V input and a 1-A load, the drop between input and output measured 19 mV, which aligns with the two series MOSFETs’ datasheet ON resistance of about 10 mΩ each.

Figure 3 shows the response of the circuit when a 10-V step was applied. The yellow trace is the input voltage, and the blue trace shows the output voltage. The trip threshold was set to 12 V, so the input voltage is passed through to the output with very little voltage drop.

Figure 3 A 10-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was increased to 15 V and retested. Figure 4 shows that the output voltage stays at 0 V.

Figure 4 A 15-V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The input voltage was reversed, and a –7 V step was applied to the input, with the results shown in Figure 5.

Figure 5 A –7 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

The negative input voltage was increased to –15 V and reapplied to the input of the circuit. The results are shown in Figure 6.

Figure 6 A –15 V step is applied to the input of MAX16126. Source: Analog Devices Inc.

Caution should be exercised when probing the gate pins of the MOSFETs when the input is taken to a negative voltage. Referring to Figure 1, the body diode of Q1 pulls the two source pins toward VIN, which is at a negative voltage. There is an internal 1 MΩ resistor between the GATE and SRC connections of MAX16126, so when a ground referenced 1 MΩ oscilloscope probe is attached to the gate pins of the MOSFETs, the oscilloscope probe acts like a 1 MΩ pull-up resistor to 0 V.

As the input is pulled negative, a resistive divider is formed between 0 V, the gate voltage, and the source of Q2, which is being pulled negative by the body diode of Q1. When the input voltage is pulled to lower than twice the turn-on voltage of Q2, this MOSFET turns on and the output starts to go negative. Using a higher impedance oscilloscope probe overcomes this problem.

A simple modification to the MAX16126 evaluation kit provides reassuring protection from user-generated load dump events caused by momentary lapses in concentration when testing circuits on the bench. If the components in the evaluation kit are used, the circuit presents a low loss protection circuit that is rated to 90 V with load currents up to 50 A.

Simon Bramble specializes in analog electronics and power. He has spent his career in analog electronics and worked at Maxim and Linear Technology, both now part of Analog Devices Inc.

Related Content

The post How to prevent overvoltage conditions during prototyping appeared first on EDN.

Firmware-upgrade functional defection and resurrection

Mon, 07/21/2025 - 17:37

My first job out of college was with Intel, in the company’s nonvolatile memory division. After an initial couple of years dabbling with specialty EPROMs, I was the first member from that group to move over to the then-embryonic flash memory team to launch the company’s first BootBlock storage device, the 28F001BX. Your part number decode is correct: it was a whopping 1 Mbit (not Gbit!) in capacity 😂. Its then-uniqueness derived from two primary factors:

  • Two separately erasable blocks, asymmetrical in size
  • One of which (the smaller block) was hardware-lockable to prevent unintentional alteration of its contents, perhaps obviously to allow for graceful recovery in case the main (larger) block’s contents, the bulk of system firmware, somehow got corrupted.

The 28F001BX single-handedly (in admitted coordination with Intel’s motherboard group, the first to adopt it) kickstarted the concept of upgradable BIOS for computers already in the field. Its larger-capacity successors did the same thing for digital cellular phones, although by then I was off working on even larger capacity devices with even more (symmetrical, this time) erase blocks for solid-state storage subsystems…which we now refer to as SSDs, USB flash sticks, and the like. This all may explain why in-system firmware updates (which involve much larger code payloads nowadays, of course)—both capabilities and pitfalls—have long been of interest to me.

The concept got personal not too long ago. Hopefully, at least some of you have by now read the previous post in my ongoing EcoFlow portable power station (and peripheral) series, which covered the supplemental Smart Extra Battery I’d gotten for my DELTA 2 main unit:

Here’s what they look like stacked, with the smart extra battery on top and the XT150 cable interconnecting them, admittedly unkempt:

The timeline

Although that earlier writeup was published on April 23, I’d actually submitted it on March 11. A bit more than a week post-submission, the DELTA 2 locked up. A week (and a day) after the earlier writeup appeared at EDN.com, I succeeded in bringing it back to life (also the day before my birthday, ironically). And in between those two points in time, a surrogate system also entered my life. The paragraphs that follow will delve into more detail on all these topics, including the role that firmware updates played at both the tale’s beginning and end points.

A locked-up DELTA 2

To start, let’s rewind to mid-March. For about a week, every time I went into the furnace room where the gear was stored, I’d hear the fan running on the DELTA 2. This wasn’t necessarily atypical; every time the device fired up its recharge circuits to top off the battery, the fan would briefly go on. And everything looked normal remotely, through the app:

But eventually, the fan-running repetition, seemingly more than mere coincidence, captured my attention, and I punched the DELTA 2’s front panel power button to see what was going on. What I found was deeply disturbing. For one thing, the smart extra battery was no longer showing as recognized by the main unit, even though it was still connected. And more troubling, in contrast to what the app was telling me, the display indicated the battery pack was drained. Not to mention the bright red indicator, suggestive that the battery pack was actually dead:

So, I tried turning the DELTA 2 off, which led to my next bout of woe. It wouldn’t shut down, no matter how long I held the power button. I tried unplugging it, no luck. It kept going. And going. I realized that I was going to need to leave it unplugged with the fan whining away, while in parallel I reached out to customer support, until the battery drained (the zeroed-out integrated display info was obviously incorrect, but I had no idea whether the “full” report from the app was right, either). Three days later, it was still going. I eventually plugged an illuminated workbench light into one of its AC outlets, whose incremental current draw finally did the trick.

I tried plugging the DELTA 2 back in. It turned on but wouldn’t recharge. It also still ignored subsequent manual power-down attempts, requiring that I again drain the battery to force a shutoff. And although it now correctly reported a zeroed battery charge status, the dead-battery icon was now joined by another error message, this one indicating overload of the device output(s) (?):

At this point, I paused and pondered what might have gone wrong. I’d owned the DELTA 2 for about six months at that point, and I’d periodically installed firmware updates to it via the app running on my phone (and in response to new-firmware-available notices displayed in that app) with no issues. But I’d only recently added the Smart Extra Battery to the mix. Something amiss about the most recent firmware rev apparently didn’t like the peripheral’s presence, I guessed:

So, while I was waiting for customer service to respond, I hit up Reddit. And lo and behold, I found that others had experienced the exact same issue:

Resuscitation

It turns out that V1.0.1.182 wasn’t the most recent firmware rev available, but for reasons that to this day escape me (but seem to be longstanding company practice), EcoFlow didn’t make the V1.0.1.183 successor generally available. Instead, I needed to file a ticket with technical support, providing my EcoFlow account info and my unit’s serial number, along with a description of the issue I was having, and requesting that they “push” the new version to me through the app. I did so, and with less than 24 hours of turnaround, they did so as well:

Fingers crossed, I initiated the update to the main unit:

Which succeeded:

Unfortunately, for unknown reasons, the subsequent firmware update attempt on the smart extra battery failed, rendering it inaccessible (only temporarily, thankfully, it turned out):

And even on the base unit, I still wasn’t done. Although it was now once again responding normally to front-panel power-off requests, its display was still wonky:

However, a subsequent reset and recalibration of the battery management system (BMS), which EcoFlow technical support hadn’t clued me in on but Reddit research had suggested might also be necessary, kicked off (and eventually completed) the necessary recharge cycle successfully:

(Longstanding readers may remember my earlier DJI drone-themed tutorial on what the BMS is and why periodic battery cycling to recalibrate it is necessary for lithium-based batteries):

And re-attempt of the smart extra battery firmware update later that day was successful as well:

Voila: everything was now back to normal. Hallelujah:

That said, I think I’ll wait for a critical mass of other brave souls to tackle the V1.0.1.200 firmware update more recently made publicly available, before following their footsteps:

The surrogate

And what of that “surrogate system” that “also entered my life”, which I mentioned earlier in this piece? This writeup’s already running long, so I won’t delve into too much detail on this part of the story here, saving it for a separate planned post to come. But the “customer service” folks I mentioned I’d initially reached out to, prior to my subsequent direct connection to technical support, were specific to EcoFlow’s eBay storefront, where I’d originally bought the DELTA 2.

They ended up sending me a DELTA 3 Plus and DELTA 3 Series Smart Extra Battery (both of which I’ve already introduced in prior coverage) as replacements, presumably operating under the assumption that my existing units were dead parrots, not just resting. They even indicated that I didn’t need to bother sending the DELTA 2-generation devices back to them; I should just responsibly dispose of them myself. “Teardown” immediately popped into my head; here’s an EcoFlow-published video I’d already found as prep prior to their subsequent happy restoration:

And here are the DELTA 3 successors, both standalone:

and alongside their predecessors. The much shorter height (and consequent overall decreased volume) of the DELTA 3 Series Smart Extra Battery versus its precursor is particularly striking:

As previously mentioned, I’ll have more on the DELTA 3 products in dedicated coverage to come shortly. Until then, I welcome your thoughts in the comments on what I’ve covered here, whether in general or related to firmware-update snafus you’ve personally experienced!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Firmware-upgrade functional defection and resurrection appeared first on EDN.

Two new runtime tools to accelerate edge AI deployment

Mon, 07/21/2025 - 16:03

While traditional artificial intelligence (AI) frameworks often struggle in ultra-low-power scenarios, two new edge AI runtime solutions aim to accelerate the deployment of sophisticated AI models in battery-powered devices like wearables, hearables, Internet of Things (IoT) sensors, and industrial monitors.

Ambiq Micro, the company that develops low-power microcontrollers using sub-threshold transistors, has unveiled two new edge AI runtime solutions optimized for its Apollo system-on-chips (SoCs). These developer-centric tools—HeliosRT (runtime) and HeliosAOT (ahead-of-time)—offer deployment options for edge AI across a wide range of applications, spanning from digital health and smart homes to industrial automation.

Figure 1 The new runtime tools allow developers to deploy sophisticated AI models in battery-powered devices. Source: Ambiq

The industry has seen numerous failures in the edge AI space because users dislike it when the battery runs out in an hour. It’s imperative that devices running AI can operate for days, even weeks or months, on battery power.

But what’s edge AI, and what’s causing failures in the edge AI space? Edge AI is anything that’s not running on a server or in the cloud; for instance, AI running on a smartwatch or home monitor. The problem is that AI is power-intensive, and sending data to the cloud over a wireless link is also power-intensive. Moreover, cloud computing is expensive.

“What we aim is to take the low-power compute and turn it into sophisticated AI,” said Carlos Morales, VP of AI at Ambiq. “Every model that we create must go through runtime, which is firmware that runs on a device to take the model and execute it.”

LiteRT and HeliosAOT tools

LiteRT, formerly known as TensorFlow Lite for microcontrollers, is a firmware version of the TensorFlow platform. HeliosRT, a performance-enhanced implementation of LiteRT, is tailored for energy-constrained environments and is compatible with existing TensorFlow workflows.

HeliosRT optimizes custom AI kernels for the Apollo510 chip’s vector acceleration hardware. It also improves numeric support for audio and speech processing models. Finally, it delivers up to 3x gains in inference speed and power efficiency over standard LiteRT implementations.

Next, HeliosAOT introduces a ground-up, ahead-of-time compiler that transforms TensorFlow Lite models directly into embedded C code for edge AI deployment. “AOT interpretation, which developers can perform on their PC or laptop, produces C code, and developers can take that code and link it to the rest of the firmware,” Morales said. “So, developers can save a lot of memory on the code size.”

HeliosAOT provides a 15–50% reduction in memory footprint compared to traditional runtime-based deployments. Furthermore, with granular memory control, it enables per-layer weight distribution across the Apollo chip’s memory hierarchy. It also streamlines deployment with direct integration of generated C code into embedded applications.

Figure 2 HeliosRT and HeliosAOT tools are optimized for Apollo SoCs. Source: Ambiq

“HeliosRT and HeliosAOT are designed to integrate seamlessly with existing AI development pipelines while delivering the performance and efficiency gains that edge applications demand,” said Morales. He added that both solutions are built on Ambiq’s sub-threshold power optimized technology (SPOT).

HeliosRT is now available in beta via the neuralSPOT SDK, while a general release is expected in the third quarter of 2025. On the other hand, HeliosAOT is currently available as a technical preview for select partners, and general release is planned for the fourth quarter of 2025.

Related Content

The post Two new runtime tools to accelerate edge AI deployment appeared first on EDN.

Did connectivity sunsetting kill your embedded-system battery?

Fri, 07/18/2025 - 22:12

You’re likely familiar with the concept of “sunsetting,” where a connectivity standard or application is scheduled to be phased out, such that users who depend on it are often simply “out of luck.” It’s frustrating, as it can render an established working system that is doing its job properly either partially or totally useless. The industry generally rationalizes sunsetting as an inevitable consequence of the progress and new standards not only superseding old ones but making them obsolete.

Sunsetting can leave unintended or unknowing victims, but it goes far beyond just loss of connectivity, and I am speaking from recent experience. My 2019 ICE Subaru Outback wouldn’t start despite its fairly new battery; it was totally dead as if the battery was missing. I jumped the battery and recharged it by running the car for about 30 minutes, but it was dead again the next morning. I assumed it was either a defective charging system or a low- or medium-resistance short circuit somewhere.

(As an added punch to the gut, with the battery dead, there was no way to electronically unlock the doors or get to the internal hood release, so it seemed it would have to be towed. Fortunately, the electronic key fob has a tiny “secret” metal key that can be used in its old-fashioned, back-up mechanical door lock just for such situations.)

I jump-started it again and drove directly to the dealer, who verified the battery and charging system were good. Then the service technician pulled a technical rabbit out of his hat—apparently, this problem was no surprise to the service team.

The vampire (drain) did it—but not the usual way

The reason for the battery being drained is subtle but totally avoidable. It was an aggravated case of parasitic battery drain (often called “vampire drain” or “standby power”; I prefer the former) where the many small functions in the car still drain a few milliamps each as their keep-alive current. The aggregate vampire power drawn by the many functions in the car, even when the car is purportedly “off,” can kill the battery.

Subaru used 3G connectivity to link the car to their basic Starlink Safety and Security emergency system, a free feature even if you don’t pay for its many add-on subscription functions (I don’t). However, 3G cellular service is being phased out or “sunsetted” in industry parlance. Despite this sunsetting, the car’s 3G transponder, formally called a Telematics Data Communication Module (TDCM or DCM), just kept trying, thus killing the battery.

The dealer was apologetic and replaced the 3G unit at no cost with a 4G-compatible unit that they conveniently had in stock. I suspect they were prepared for this occurrence all along and were hoping to keep it quiet. There have been some class-action suits and settlements on this issue, but the filing deadline had passed, so I was out of luck on that.

An open-market replacement DCM unit is available for around $500. While the dealer pays less, it’s still not cheap, and swapping them is complicated and time-consuming. It takes at least an hour for physical access, setup, software initialization, and check-out—if you know what you are doing. There are many caveats in the 12-page DCM instruction section for removal and replacement of the module (Figure 1), as well as in the companion 14-page guide for the alternative Data Communication Module (DCM) Bypass Box (Figure 2), which details some tricky wire-harness “fixing.”
Figure 1 The offending unit is behind the console (dashboard) and takes some time to remove and then replace. Source: Subaru via NHTSA

Figure 2 There are also some cable and connector issues of which the service technician must be aware and use care. Source: Subaru via NHTSA

While automakers impose strict limits on the associated standby drain current for each function, it still adds up and can kill the battery of a car parked and unused for anywhere from a few days to a month. The period depends on the magnitude of the drain and the battery’s condition. I strongly suspect that the 3G link transponder uses far more power than any of the other functions, so it’s a more worrisome vampire.
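To put rough numbers on it, here is a back-of-envelope Python sketch; the battery capacity, usable fraction, and drain current are illustrative assumptions, not Subaru figures.

```python
battery_ah = 50      # typical automotive starter battery (assumed)
usable_frac = 0.5    # don't count on more than half before a no-start (assumed)
drain_ma = 70        # aggregate keep-alive drain with a busy DCM (assumed)

hours = battery_ah * usable_frac * 1000 / drain_ma
print(round(hours / 24, 1))  # ~14.9 days parked before the car won't start
```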

Sunsetting + vampire drain = trouble

What’s the problem here? Although 3G was being sunsetted, that was not the real problem; discontinuing a standard is inevitable at some point. Further, there could also be many other reasons for not being able to connect, even if 3G was still available, such as being parked in a concrete garage. After all, both short- and long-term link problems should be expected.

No, the problem is a short-sighted design that allowed a secondary, non-core function over which you have little or no control (here, the viability of the link) to become a priority and single-handedly drain power and deplete the battery. Keep in mind that the car is perfectly safe to use without this connectivity feature being available.

There’s no message to the car’s owner that something is wrong; it just keeps chugging away, attempting to fulfill its mission, regardless of the fact that it depletes the car’s battery. It has a mission objective and nothing will stop it from trying to complete it, somewhat like the relentless title character in the classic 1984 film The Terminator.

A properly vetted design would include a path that says if connectivity is lost for any reason, keep trying for a while and then go to a much lower checking rate, and perhaps eventually stop.
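As a sketch only, under the assumption of a generic connectivity task (the try_connect callable is hypothetical), such a policy might look like this:

```python
import time

def connect_with_backoff(try_connect, base_s=60, cap_s=24 * 3600, give_up=100):
    """Retry with exponential backoff, capped at one attempt per day,
    and eventually stop rather than drain the battery forever."""
    delay = base_s
    for _ in range(give_up):
        if try_connect():          # hypothetical: True when the link is up
            return True
        time.sleep(delay)
        delay = min(delay * 2, cap_s)  # back off: 1 min, 2 min, ... 1 day
    return False                       # give up; core functions keep working
```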

This embedded design problem is not just an issue for cars. What if the 3G or other link was part of a hard-to-reach, long-term data-collection system that was periodically reporting, but also had internal memory to store the data? Or perhaps it was part of a closed-loop measurement and control that could function autonomously, regardless of reporting functionality?

Continuously trying to connect despite the cost in power is a case of the connectivity tail not only wagging the core-function dog but also beating it to death. It is not a case of an application going bad due to forced “upgrades” leading to incompatibilities (you probably have your own list of such stories). Instead, it’s a design oversight of allowing a secondary, non-core function to take over the power budget (in some cases, also the CPU), thus disabling all the functionality.

Have you ever been involved with a design where a non-critical function was inadvertently allowed to demand and get excessive system resources? Have you ever been involved with a debug challenge or product-design review where this unpleasant fact had initially been overlooked, but was caught in time?

Whatever happens, I will keep checking to see how long 4G is available in my area. The various industry “experts” say 10 to 15 years, but these experts are often wrong! Will 4G connectivity sunset before my car does? And if it does, will the car’s module keep trying to connect and, once again, kill the battery? That remains to be seen!

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References

The post Did connectivity sunsetting kill your embedded-system battery? appeared first on EDN.

Evaluation board powers small robotics and drones

Fri, 07/18/2025 - 21:05

The EPC91118 reference design from EPC integrates power, sensing, and control on a compact circular PCB for humanoid robot joints and UAVs. Driven by the EPC23104 GaN-based power stage, the three-phase BLDC inverter delivers up to 10 A RMS steady-state output and 15 A RMS pulsed.

Complementing the GaN power stage are all the key functions for a complete motor drive inverter, including a microcontroller, rotor shaft magnetic encoder, regulated auxiliary rails, voltage and current sensing, and protection features. Housekeeping supplies are derived from the inverter’s main input, with a 5-V rail powering the GaN stage and a 3.3-V rail supplying the controller, sensors, and RS-485 interface. All these functions fit on a 32-mm diameter board, expanding to 55 mm including an external frame for mechanical integration.

The inverter’s small size allows integration directly into humanoid joint motors. GaN’s high switching frequency allows the use of compact MLCCs in place of bulkier electrolytic capacitors, helping reduce overall size while enhancing reliability. With a footprint reportedly 66% smaller than comparable silicon MOSFET designs, the EPC91118 enables a space-saving motor drive architecture.

EPC91118 reference design boards are priced at $394.02 each. The EPC23104 eGaN power stage IC costs $2.69 each in 3000-unit reels. Both are available for immediate delivery from Digi-Key.

EPC91118 product page

Efficient Power Conversion

The post Evaluation board powers small robotics and drones appeared first on EDN.
