MCUs cut power to 0.25 µA in standby

Powered by a 32-MHz Arm Cortex-M23 processor, the Renesas RA0E2 group of entry-level MCUs offers low power consumption and an extended temperature range. The devices have a feature set that is optimized for cost-sensitive applications such as battery-operated consumer electronics, small appliances, industrial control systems, and building automation.
RA0E2 MCUs consume 2.8 mA in active mode and 0.89 mA in sleep mode. An integrated high-speed on-chip oscillator supports fast wakeup, allowing the device to remain longer in software standby mode, where power consumption drops to just 0.25 µA. With ±1.0% precision, the oscillator also improves baud rate accuracy and maintains stability across a temperature range of -40°C to +125°C.
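For battery-powered applications, the figure that matters is the duty-cycled average current rather than any single operating mode. As a rough illustration (the 0.1% duty cycle and coin-cell capacity below are assumptions for the example, not Renesas figures), the arithmetic shows why a 0.25-µA standby current dominates battery life:

```python
# Hypothetical duty-cycle estimate for a low-power MCU such as the RA0E2.
# Active and standby currents are taken from the article; the duty cycle
# and battery capacity are illustrative assumptions only.
ACTIVE_MA = 2.8        # active-mode current, mA
STANDBY_UA = 0.25      # software-standby current, µA
DUTY_ACTIVE = 0.001    # fraction of time spent active (assumed)
BATTERY_MAH = 220      # nominal CR2032 coin-cell capacity (assumed)

avg_ma = ACTIVE_MA * DUTY_ACTIVE + (STANDBY_UA / 1000) * (1 - DUTY_ACTIVE)
battery_life_years = (BATTERY_MAH / avg_ma) / (24 * 365)

print(f"Average current: {avg_ma * 1000:.2f} µA")                 # ~3.05 µA
print(f"Estimated battery life: {battery_life_years:.1f} years")  # ~8 years
```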
The MCUs operate from 1.6 V to 5.5 V, eliminating the need for a level shifter or regulator in 5-V systems. They offer up to 128 KB of code flash and 16 KB of SRAM, along with integrated timers, serial communication interfaces, analog functions, and safety features. Security functions include a unique ID, true random number generator (TRNG), AES libraries, and flash read protection.
RA0E2 MCUs are available now in a variety of packages, including a 5×5-mm, 32-lead QFN.
The post MCUs cut power to 0.25 µA in standby appeared first on EDN.
PMICs optimize energy harvesting designs

Low-current PMICs in AKM’s AP4413 series enable efficient battery charging in devices that typically use disposable batteries, including remote controls, IoT sensors, and Bluetooth trackers. With current consumption as low as 52 nA, they have minimal impact on a system’s power budget—critical for energy harvesting applications.
The series comprises four variants with voltage thresholds tailored to common rechargeable battery types. Each device integrates voltage monitoring to prevent deep discharge, enabling quick startup or recovery. An inline capacitor allows the AP4413 to maintain operation even when the battery is fully discharged, while recharging it simultaneously.
System configuration example.
The AP4413 PMICs are in mass production and come in 3.0×3.0×0.37-mm HXQFN packages.
The post PMICs optimize energy harvesting designs appeared first on EDN.
Cadence debuts DDR5 MRDIMM IP at 12.8 Gbps

Cadence has announced the first DDR5 12.8-Gbps MRDIMM Gen2 memory IP subsystem, featuring a PHY and controller fabricated on TSMC’s N3 (3-nm) process. The design was hardware-validated with Gen2 MRDIMMs populated with DDR5 6400-Mbps DRAM chips, achieving a 12.8-Gbps data rate—doubling the bandwidth of the DRAM devices. The solution addresses growing memory bandwidth demands driven by AI workloads in enterprise and cloud data center applications.
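As a quick sanity check on the headline figure, the peak per-module bandwidth follows directly from the per-pin data rate and the DIMM data width (the 64-bit non-ECC width below is the usual DDR5 assumption, not a number from the announcement):

```python
# Back-of-envelope peak bandwidth for a 12.8-Gbps MRDIMM Gen2 interface.
# Assumes the customary 64-bit (non-ECC) data width per module; delivered
# bandwidth depends on the controller, workload, and channel efficiency.
data_rate_gbps = 12.8   # per-pin data rate from the announcement
bus_width_bits = 64     # assumed module data width, excluding ECC

peak_gb_per_s = data_rate_gbps * bus_width_bits / 8
print(f"Peak bandwidth: {peak_gb_per_s:.1f} GB/s per module")
# -> 102.4 GB/s, double what the same module built from 6400-Mbps DRAM delivers
```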
Based on a silicon-proven architecture, the DDR5 IP subsystem provides ultra-low-latency encryption and advanced RAS features. It is designed to enable the next generation of SoCs and chiplets, offering flexible integration options as well as precise tuning of power and performance.
Combined with Micron’s 1γ-based DRAM and Montage Technology’s memory buffers, Cadence’s DDR5 MRDIMM IP delivers a high-performance memory subsystem with doubled bandwidth. The PHY and controller have been validated using Cadence’s DDR Verification IP (VIP), enabling rapid IP and SoC verification closure. Cadence reports multiple ongoing engagements with leading customers in AI, HPC, and data center markets.
For more information, visit the DDR5 MRDIMM PHY and controller page.
The post Cadence debuts DDR5 MRDIMM IP at 12.8 Gbps appeared first on EDN.
Quantum-safe root-of-trust solution to secure ASICs, FPGAs

A new quantum-safe root-of-trust solution enables ASICs and FPGAs to comply with post-quantum cryptography (PQC) standards set out in regulations like the NSA’s CNSA 2.0. PQPlatform-TrustSys, built around the PQC-first design philosophy, aims to help manufacturers comply with cybersecurity regulations with minimal integration time and effort.
It facilitates robust key management by tracking each key's origin and permissions, including key revocation, an essential and often overlooked part of securing any large-scale cryptographic deployment. Moreover, the root of trust enforces restrictions on critical operations and maintains security even if the host system is compromised.
In addition, key origin and permission attributes are extended to cryptographic accelerators connected to a private peripheral bus. PQPlatform-TrustSys, launched by London, UK-based PQShield, has been unveiled after the company achieved FIPS 140-3 certification through the Cryptographic Module Validation Program (CMVP), which evaluates cryptographic modules and gives agencies and organizations a metric for security products.
PQShield, a supplier of PQC solutions, has also built its own silicon test chip to prove this can all be delivered ‘first time right’. Its PQC solutions are developed around three pillars: ultra-fast, ultra-secure, and ultra-small.
The PKfail vulnerability has thrust multiple security issues within the secure boot and secure update domains into the spotlight; these mechanisms play a fundamental role in protection against malware. Inevitably, ASICs and FPGAs will need to ensure secure boot and secure update while meeting both existing and new regulatory requirements with clear timelines set out by NIST.
Industry watchers believe that we have a five-to-10-year window to migrate to the PQC world. So, the availability of a quantum-safe root-of-trust solution bodes well for preparing ASICs and FPGAs to function securely in the quantum era.
Related Content
- Post-Quantum Cryptography: Moving Forward
- An Introduction to Post-Quantum Cryptography Algorithms
- Perspectives on Migration Toward Post-Quantum Cryptography
- Release of Post-Quantum Cryptographic Standards Is Imminent
- The need for post-quantum cryptography in the quantum decade
The post Quantum-safe root-of-trust solution to secure ASICs, FPGAs appeared first on EDN.
Current monitor

Almost no wall power supply has an indicator showing whether or not current is being consumed by the load.
Wow the engineering world with your unique design: Design Ideas Submission Guide
It seems I was not the only one to notice this shortcoming: I once saw the solution shown in Figure 1.
Figure 1 Wall power supply indicator solution showing whether or not current is being consumed by the load.
The thing is, the circuit was not functional—the board had only the places for the transistor, LED, and resistors, not the components themselves. It's easy to see why: the base-emitter voltage drop (Vbe) is about 0.7 V, or roughly 15% of this 5-V device's output voltage. A monitor like this (Figure 1) would only be tolerable on a 12-V or higher (24-V) device.
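A quick calculation of the fraction of the output voltage lost across that base-emitter junction makes the point; the 0.7-V figure is the usual silicon Vbe approximation:

```python
# Fraction of a wall supply's output lost across a ~0.7-V base-emitter drop
# for a Figure 1 style series monitor, at common output voltages.
VBE = 0.7  # typical silicon Vbe, volts

for vout in (5, 9, 12, 24):
    loss_pct = 100 * VBE / vout
    print(f"{vout:>2} V output: {loss_pct:4.1f}% lost across Vbe")
# 5 V -> 14.0%, 9 V -> 7.8%, 12 V -> 5.8%, 24 V -> 2.9%
```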
The circuit in Figure 2 is exceptionally good for low voltages, around 3 to 9 V, and for currents exceeding ~50 mA.
Figure 2 Current monitor circuit for a wall power supply that is good for voltages from 3 to 9 V and currents exceeding 50 mA.
Not only does it monitor the output current in a more efficient (~30x) way, but its bi-color LED also lets the user estimate the magnitude of the current and indicates the on-state of the device. Of course, separate LEDs could be used as well.
As for Q1 and Q2: any low-power PNP with a reasonably high current gain (β) will do, e.g., the BC560.
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- Multi-decade current monitor the epitome of simplicity
- Current Sensor LED Indicator
- High-side current monitor operates at high voltage
- Current monitor compensates for errors
- Current monitor uses Hall sensor
The post Current monitor appeared first on EDN.
Selective averaging in an oscilloscope

Sometimes, you only want to analyze those signal components that meet certain criteria or occur at certain times within an acquisition. This is not too difficult for a single acquisition, but what if you want to obtain the average of those selected measurement events? Here is where seemingly unrelated features of the oscilloscope can work together to get the desired data.
Consider an application where a device produces periodic RF pulse bursts, as shown in Figure 1.
Figure 1 The device under test produces periodic RF pulse bursts; the test goal is to acquire and average bursts with specific amplitudes. Source: Arthur Pini
The goal of the test is to acquire and average only those bursts with a specific amplitude, in this case those with a nominal value of 300 millivolts (mV) peak-to-peak. This desired measurement can be accomplished using the oscilloscope's Pass/Fail testing capability to qualify the signal. Pass/Fail testing allows the user to test the waveform based on parametric measurements, like amplitude, and pass or fail the measured waveform based on whether it meets preset limits. Alternatively, the acquired waveform can be compared to a mask template to determine whether it is within or outside of the mask. Based on the test results, many actions can be taken, such as stopping the acquisition, storing the acquired waveform to memory or a file, sounding an audible alarm, or emitting a pulse.
Selective averaging uses Pass/Fail testing to isolate the desired pulse bursts based on their amplitude or conformance to a mask template. Signals meeting the Pass/Fail criteria are stored in internal memory. The averager is set to use that storage memory as its source so that qualified signals transferred to the memory are added to the average.
Setting up Pass/Fail testing
Testing is based on the peak-to-peak amplitude, which uses measurement parameter P1. The measurement setup accepts, or passes, a pulse burst having a nominal peak-to-peak amplitude of 300 mV within a range of ±50 mV of nominal. The test limits are set up in test condition Q1 (Figure 2).
Figure 2 The initial setup to capture and average only pulses with amplitudes of 300 ± 50 mV. Source: Arthur Pini
The oscilloscope’s timebase is set to capture individual pulse bursts, in this case, 100 ns per division. This is important as only individual bursts should be added to the average. A single burst has been acquired, and its peak-to-peak amplitude is 334 mV, as read in parameter P1. The Pass/Fail test setup Q1 tests for the signal amplitude within ±50 mV of the nominal 300 mV amplitude. These limits are user-adjustable to acquire pulse bursts of any amplitude.
A single acquisition is made, acquiring a 338 mV pulse, which appears in the top display grid. This meets the Pass/Fail test criteria, and the signal is stored in memory M1 (Figure 3).
Figure 3 Acquiring a signal that meets the acceptance criteria adds a copy of the signal in memory M1 (center grid) and adds it to the averager contents (lower grid). Source: Arthur Pini
The memory contents are added to the average, showing a waveform count of 1. The Actions tab of the Pass/Fail setup shows that if the acquired signal passes the acceptance criteria, it is transferred into memory. The waveform store operation (i.e., what trace is stored in what memory) is set up separately in the Save Waveform operation under the File pulldown menu.
What happens if the acquired pulse doesn’t meet the test criteria? This is shown in Figure 4.
Figure 4 Acquiring a 247 mV burst results in a failed Q1 condition. In this case, the signal is not stored to M1 and is not added to the average. Source: Arthur Pini
The acquired waveform has a peak-to-peak amplitude of 247 mV, outside the test limit. This results in a failure of the Q1 test (shown in red). The test action does not occur, and the low amplitude signal is not added to the average.
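Conceptually, the oscilloscope is chaining three features: a parameter measurement, a limit test, and a conditional store that feeds the averager. The sketch below is a software analogue of that amplitude-qualified averaging loop, not instrument code; the waveform generator and limits are illustrative only:

```python
import numpy as np

# Amplitude-qualified selective averaging, mimicking the Pass/Fail-to-memory
# chain described above. Limits match the example: 300 mV +/- 50 mV peak-to-peak.
NOMINAL_PKPK = 0.300
TOLERANCE = 0.050

def passes_amplitude_test(waveform):
    """Q1-style test: peak-to-peak amplitude within the preset limits."""
    pkpk = waveform.max() - waveform.min()
    return abs(pkpk - NOMINAL_PKPK) <= TOLERANCE

def selective_average(acquisitions):
    """Average only those acquisitions that pass the qualification test."""
    total, count = None, 0
    for wf in acquisitions:
        if not passes_amplitude_test(wf):
            continue                      # failed bursts are simply discarded
        total = wf.copy() if total is None else total + wf
        count += 1
    return (total / count if count else None), count

# Illustrative stand-in for acquired bursts: sine bursts of random amplitude.
t = np.linspace(0, 1e-6, 1000)
bursts = [0.5 * np.random.uniform(0.2, 0.4) * np.sin(2 * np.pi * 50e6 * t)
          for _ in range(100)]
avg, n = selective_average(bursts)
print(f"{n} of {len(bursts)} bursts met the criteria and were averaged")
```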
Using mask templates
Selective averaging can also be based on mask testing. Masks can be created based on an acquired waveform, or custom masks can be created using software utilities from the oscilloscope manufacturer and downloaded to the oscilloscope. This example uses a mask based on the acquired signal (Figure 5).
Figure 5 A mask, based on the nominal amplitude signal, is created in the oscilloscope. The acquired signal passes if all waveform samples are within the mask. Source: Arthur Pini
The mask is created by adding incremental differences both horizontally and vertically about the source waveform. All points must be inside the mask for the acquired signal to pass. As in the previous case, if the signal passes, it is stored in memory and added to the average (Figure 6).
Figure 6 If the acquired signal is fully inside the mask, it is transferred to memory M1 and added to the average. Source: Arthur Pini
If the acquired signal has points outside the mask, the test fails, and the signal is not transferred to memory or the average (Figure 7).
Figure 7 An example of a mask test failure with the circled points outside the mask. This waveform is not added to the average. Source: Arthur Pini
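The mask check itself reduces to a point-by-point bounds test against upper and lower limit traces built from the reference waveform. This is a minimal sketch of that idea, not the oscilloscope's internal implementation; the expansion amounts are placeholders:

```python
import numpy as np

def build_mask(reference, dv, dt_samples):
    """Build upper/lower mask limits by expanding the reference waveform
    vertically by dv volts and horizontally by dt_samples samples."""
    upper = np.full(len(reference), -np.inf)
    lower = np.full(len(reference), np.inf)
    for shift in range(-dt_samples, dt_samples + 1):
        shifted = np.roll(reference, shift)
        upper = np.maximum(upper, shifted + dv)
        lower = np.minimum(lower, shifted - dv)
    return lower, upper

def passes_mask(waveform, lower, upper):
    """Pass only if every sample lies inside the mask."""
    return bool(np.all((waveform >= lower) & (waveform <= upper)))
```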
Selective averaging with a gating signal
This technique can also be applied to signals on a multiplexed bus with a gating signal, such as a chip select, available (Figure 8).
Figure 8 Pass/Fail testing can be employed to select only those signals that are time-coincident with a gating signal, such as a chip select signal. Source: Arthur Pini
The gating signal or chip select is acquired on a separate acquisition channel. In the example, channel 3 (C3) was used. The gating signal is positive when the desired signal is available. To add only those signals that coincide with the gating signal, pass/fail testing verifies the presence of a positive gating signal. Testing that the maximum value of C3 is greater than 100 mV verifies that the gate signal is in a high state, and the test is passed. The oscilloscope is set to store C1 in memory M1 under a passed condition, which is added to the average (Figure 9).
Figure 9 The average based on waveforms coincident with the positive gate signal state. Source: Arthur Pini
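In software terms, the gate qualification is just a maximum-value check on the chip-select channel before the data channel is accepted into the average. A compact sketch under the same assumptions as before (the 100-mV threshold comes from the text; the data structures are illustrative):

```python
def gate_is_high(chip_select, threshold=0.100):
    """Pass/Fail-style gate check: the maximum of the gating channel (e.g., C3)
    must exceed the threshold, indicating the select line is asserted."""
    return chip_select.max() > threshold

def gated_selective_average(acquisitions):
    """Average C1 only for acquisitions captured while the gate was high.
    'acquisitions' is a list of (c1_waveform, c3_waveform) NumPy array pairs."""
    kept = [c1 for c1, c3 in acquisitions if gate_is_high(c3)]
    return sum(kept) / len(kept) if kept else None
```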
Isolating test signals
If the segments of the analyzed signal are close together and cannot be separated using the standard timebase (1-2-5 step) scales, a horizontal (zoom) expansion of the acquired signal can be used to select the desired signal segment. The variable zoom scale provides very fine horizontal steps. The zoom trace can be used instead of the acquired channel, with the average source set to the zoom trace.
Selective averaging
Selective averaging, based on Pass/Fail testing, is an example of linked features in an oscilloscope that complement each other and offer the user a broader range of measurements. Averaging was the selected analysis tool, but it could have been replaced with the fast Fourier transform (FFT) or a histogram. The oscilloscope used in this example was a Teledyne LeCroy HDO 6034B.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Reducing noise in oscilloscope and digitizer measurements
- 10 tricks that extend oscilloscope usefulness
- FFTs and oscilloscopes: A practical guide
- Oscilloscope special acquisition modes
- Understanding and applying oscilloscope measurements
- Combating noise and interference in oscilloscopes and digitizers
The post Selective averaging in an oscilloscope appeared first on EDN.
Portable power station battery capacity extension: Curious coordination

I’m still awaiting an opportunity, when I have spare time, the snow’s absent from the deck and winds are calm, to test out those two 220W solar panels I already mentioned I bought last year:
for parallel-combining and mating with my EcoFlow DELTA 2 portable power station:
While I remain on more-favorable-conditions standby, I’ve got two other pieces of EcoFlow gear also in the queue to tell you about. One, the 800W Alternator Charger that I mentioned in a more recent piece, isn’t a high installation priority right now, so hands-on-results prose will also need to wait.
But the other (and eventually also its replacement; hold that thought), which I pressed into service as soon as it arrived, is the topic of today’s post. It’s the DELTA 2 Smart Extra Battery, which mates to the DELTA 2 base unit over a thick dual-XT150-connectors-inclusive cable and combo-doubles the effective subsequently delivered storage capacity:
Here’s what my two identical-sized (15.7 x 8.3 x 11 in/400 x 211 x 281 mm) albeit different-weight (DELTA 2 base unit: 27 lbs/12 kg, DELTA 2 Smart Extra Battery: 21 lbs/9.5 kg) devices look like in their normal intended stacked configuration:
And here’s my more haphazard, enthusiastic initial out-of-box hookup of them:
In the latter photo, if you look closely, you can already discern why I returned the original Smart Extra Battery, which (like both its companion and its replacement) was a factory-refurbished unit from EcoFlow’s eBay storefront. Notice the difference in brightness between its display and the DELTA 2’s more intense one. I should note upfront that at the time I took that photo, both devices’ screens still had the factory-installed clear plastic protectors on them, so there might have been some resultant muting. But presumably the protectors would have dimmed both units’ displays equally.
The displays are odd in and of themselves. When I’d take a screen protector off, I’d see freakish “static” (for lack of a better word) scattered all over it for a few (dozen) seconds, and I could also subsequently simulate a semblance of the same effect by rubbing my thumb over the display. This photo shows the artifacts to a limited degree (note, in particular, the lower left quadrant):
My root-cause research has been to-date fruitless; I’d welcome reader suggestions on what core display technology EcoFlow is using and what specific effect is at play when these artifacts appear. Fortunately, if I wait long enough, they eventually disappear!
As for the defective display in particular, its behavior was interesting, too. LCDs, for example, typically document a viewing angle specification, which is the maximum off-axis angle at which the display still delivers optimum brightness, contrast and other attributes. Beyond that point, typically to either side but also vertically, image quality drops off. With the DELTA 2 display, it was optimum when viewed straight on, with drop-off both from above and below. With the original Smart Extra Battery display, conversely, quality was optimum when viewed from below, almost (or maybe exactly) as if the root cause was a misaligned LCD polarizer. Here are closeups of both devices’ displays, captured straight on in both cases, post-charging:
After checking with Reddit to confirm that what I was experiencing was atypical, I reached out to EcoFlow’s eBay support team, who promptly and thoroughly took care of me (and no, they didn’t know I was a “press guy”, either), with FedEx picking up the defective unit, return shipping pre-paid, at my front door:
and a replacement, quick-shipped to me as soon as the original arrived back at EcoFlow.
That’s better!
The Smart Extra Battery appears within the app screens for the DELTA 2, rather than as a distinct device:
Here’s the thick interconnect cable:
I’d initially thought EcoFlow forgot to include it, but eventually found it (plus some documentation) in a storage compartment on top of the device:
Here are close-ups of the XT150 connectors, both at-device (the ones on the sides of the DELTA 2 and Smart Extra Battery are identical) and on-cable (they’re the same on both ends):
I checked for available firmware updates after first-time connecting them; one was available.
I don’t know if it was related to the capacity expansion specifically or was just timing-coincidental, or whether it was for the DELTA 2 (with in-progress status shown in the next photo), the Smart Extra Battery, or both…but it completed uneventfully and successfully.
Returning to the original unit, as that’s what I’d predominantly photo-documented, it initially arrived only 30% “full”:
With the DELTA 2 running the show, first-time charging of the Smart Extra Battery was initially rapid and high power-drawing; note the incoming power measured at it:
and flowing both into and out of the already-fully-charged DELTA 2:
As the charging process progressed, the current flow into the Smart Extra Battery slowed, eventually to a (comparative) trickle:
until it finished. Note the high reported Smart Extra Battery temperature immediately after charge completion, both in an absolute sense and relative to the normal-temperature screenshot shown earlier!
In closing, allow me to explain the “Curious Coordination” bit in the title of this writeup. I’d upfront assumed that if I lost premises power and needed to harness the electrons previously collected within the DELTA 2/Smart Extra Battery combo instead, the Smart Extra Battery would be drained first. Such a sequence would theoretically allow me to, for example, then disconnect the Smart Extra Battery and replace it with another already-fully-charged one I might have sitting around, to further extend the setup’s total usable timespan prior to complete depletion.
In saying this, I realize that such a scenario isn’t particularly feasible, since the Smart Extra Battery can’t be charged directly from AC (or solar, for that matter) but instead requires an XT150-equipped “smart” source such as a (second, in this scenario) DELTA 2. That said, what I discovered to be the case when I finally got the gear in my hands was the exact opposite: the DELTA 2 battery drained first, down to a nearly (but not completely) empty point, and then the discharge source switched to the extra battery. Further research has also educated me that actual behavior varies depending on how much current is demanded by whatever the combo is powering; in heavy-load scenarios, the two devices’ battery packs drain in parallel.
What are your thoughts on this behavior, and/or anything else I’ve mentioned here? Share them with your fellow readers (and me!) in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2024 edition
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
- EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You
The post Portable power station battery capacity extension: Curious coordination appeared first on EDN.
LM4041 voltage regulator impersonates precision current source

The LM4041 has been around for over 20 years. During those decades, while primarily marketed as a precision adjustable shunt regulator, this classic device also found its way into alternative applications. These include voltage comparators, overvoltage protectors, voltage limiters, etc. Voltage, voltage, voltage, must it always be voltage? It gets tedious. Surely this popular precision chip, while admittedly rather—um—“mature”, must have untapped potential for doing something that doesn’t start with voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The Design Idea (DI) presented in Figure 1 offers the 4041 an unusual, possibly weird, maybe even new role to play. It’s a precision current source.
Figure 1 Weirdly, the “CATHODE” serves as the sense pin for active current source regulation.
The above block diagram shows how the 4041 works at a conceptual level:
Sourced current = Is = (V+ – (Vc + 1.24v))/R1
Is > 0, V+ < 15v, Is < 20 mA
The series connection subtracts an internal 1.24-V precision reference from the external voltage input on the CATHODE pin. The internal op-amp subtracts the voltage input on the FB pin from that difference, then amplifies and applies the result to the pass transistor. If it’s positive [(V+ – 1.24) > Vc], the transistor turns on and shunts current from CATHODE to ANODE. Otherwise, it turns off.
When a 4041 is connected in the traditional fashion (FB connected to CATHODE and ANODE grounded), the scheme works like a shunt voltage regulator should, forcing CATHODE to the internal 1.24-V reference voltage. But what will happen if the FB pin is connected to a constant control voltage [Vc < (V+ – 1.24v)] and CATHODE—instead of being connected to FB—floats freely on current-sensing resistor R1?
What happens is the current gets regulated instead of the voltage. Because Vc is fixed and can’t be pulled up to make FB = CATHODE – 1.24, CATHODE must be pulled down until equality is achieved. For this to happen, a programmed current, Is, must be passed that is given by:
Is = (V+ – (Vc + 1.24))/R1.
Figure 2 illustrates how this relationship can be used (assuming a 5-V rail that’s accurate enough) to make a floated-cathode 4041 regulate a constant current source of:
Is = (5v – 2.5v – 1.23v)/R1 = 1.27v/R1
It also illustrates how adding a booster transistor, Q1, can accommodate applications needing current or power beyond Z1’s modest limits. Notice that Z1’s accuracy will be unimpaired because whatever fraction of Is Q1 diverts around Z1 is summed back in before passing through R1.
Figure 2 The booster transistor Q1 can handle current beyond 4041 max Is and dissipation limits.
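A short worked example of the Figure 2 relationship follows; the resistor values are illustrative, and 1.23 V is the LM4041's nominal reference:

```python
# Programmed current for the floated-cathode LM4041 current source of Figure 2:
#   Is = (V+ - Vc - Vref) / R1
V_PLUS = 5.0    # supply rail, volts
VC = 2.5        # control voltage applied to the FB pin, volts
VREF = 1.23     # LM4041 nominal internal reference, volts

def programmed_current(r1_ohms):
    return (V_PLUS - VC - VREF) / r1_ohms

def r1_for_current(target_amps):
    """Pick the sense resistor for a desired output current."""
    return (V_PLUS - VC - VREF) / target_amps

print(f"R1 = 12.7 ohms -> Is = {programmed_current(12.7) * 1000:.0f} mA")  # 100 mA
print(f"For Is = 1 A, choose R1 = {r1_for_current(1.0):.2f} ohms")         # 1.27 ohms
```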
Figure 3 shows how Is can be digitally linearly programmed with PWM.
Figure 3 Schematic showing the DAC control of Is. Is = Df amps, where Df = PWM duty factor. The asterisked resistors should be 1% or better.
Incoming 5-Vpp, 10-kHz PWM causes Q2 to switch R5, creating a variable average resistance = R5/Df. Thanks to the 2.5-V Z1 reference, the result is a 0 to 1.22 mA current into Q1’s source. This is summed with a constant 1.22 mA bias from R4 and level shifted by Q1 to make a 1.22 to 2.44 V control voltage, Vc, for current source Z2.
The result is a linear 0- to 1-A output current, Is, into a grounded load, where Is = Df amps. Voltage compliance is 0 to 12 V. The 8-bit-compatible PWM ripple filtering is second order, using the technique described in “Cancel PWM DAC ripple with analog subtraction.”
R3C1 provides the first-stage ripple filter and R7C2 the second. The C1 and C2 values shown are scaled for Fpwm = 10 kHz to provide an 8-bit settling time of 6 ms. If a different PWM frequency is used, scale both capacitors by 10kHz/Fpwm.
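Rescaling the two filter capacitors for a different PWM frequency is a simple inverse proportion, and the output current tracks the duty factor directly. A minimal helper, with a placeholder baseline capacitance since the schematic's actual C1/C2 values aren't reproduced in the text:

```python
# Scale the ripple-filter capacitors when the PWM frequency differs from the
# 10-kHz design point: C_new = C_baseline * (10 kHz / Fpwm).
F_BASELINE_HZ = 10_000

def scaled_capacitor(c_baseline_farads, fpwm_hz):
    return c_baseline_farads * F_BASELINE_HZ / fpwm_hz

# Example with a placeholder 1-µF baseline value (not the schematic's value):
c_at_25khz = scaled_capacitor(1e-6, 25_000)
print(f"Scaled capacitance at 25 kHz PWM: {c_at_25khz * 1e6:.2f} µF")  # 0.40 µF

# Output current follows the duty factor directly: Is = Df amps.
duty_factor = 0.75
print(f"Df = {duty_factor:.2f} -> Is = {duty_factor:.2f} A")
```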
A hot topic: Q4 can be called on to dissipate more than 10 W, so don’t skimp on heatsink capacity.
Q3 is a safety shutdown feature. It removes Q1 gate drive when +5 falls below about 3 V, shutting off the current source and protecting the load when controller logic is powered down.
Figure 4 adds zero and span pots to implement a single-pass calibration for best accuracy:
- Set Df = 0% and adjust single turn ZERO trim for zero output current
- Set Df = 100% and adjust single turn CAL trim for 1.0 A output
- Done.
Figure 4 Additional zero and span pots to implement a single-pass calibration for best accuracy.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- PWM-programmed LM317 constant current source
- Low-cost precision adjustable current reference and application
- A negative current source with PWM input and LM337 output
- A high-performance current source
- Simple, precise, bi-directional current source
The post LM4041 voltage regulator impersonates precision current source appeared first on EDN.
Did you put X- and Y-capacitors on your AC input?

X- and Y-capacitors are commonly used to filter AC power-source electromagnetic interference (EMI) noise and are often referred to as safety capacitors. Here is a detailed view of these capacitors, related design practices and regulatory standards, and a profile of supporting power ICs. Bill Schweber also provides a sneak peek into how they operate in AC power-line circuits.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- When the AC line meets the CFL/LED lamp
- How digital capacitor ICs ease antenna tuning
- What would you ask an entry-level analog hire?
- Active filtering: Attenuating switching-supply EMI
The post Did you put X- and Y-capacitors on your AC input? appeared first on EDN.
Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost

Within my teardown published last summer of Walmart’s “onn.”-branded original Android TV-based streaming receiver, the UHD Streaming Device:
I mentioned that I already had Google TV operating system-based successors for both the “box” and “stick” Android TV form factor (subsequently dissected by me and published last December) sitting on my shelves awaiting my teardown attention. That time is now, specifically for the onn. Google TV 4K Streaming Box I’d bought at intro in April 2023 for $19.88 (the exact same price as its Android TV-based forebear):
The sizes of the two device generations are near-identical, although it’s near-impossible to find published dimension specs for either device online, only for the retail packaging containing them. As such, however, a correction is in order. I’d said in my earlier teardown that the Android TV version of the device was 4.9” both long and wide, and 0.8” tall; it’s actually 2.8” (70 mm, to be precise) in both length and width, with a height of ~0.5” (13 mm). And the newer Google TV-based variant is ~3.1” (78 mm) both long and wide and ~0.7” (18 mm) tall.
Here are more “stock” shots of the newer device that we’ll be dissecting today, along with its bundled remote control and other accessories:
Eagle-eyed readers may have already noticed the sole layout difference between the two generations’ devices. The reset switch and status LED are standalone along one side in the original Android TV version, whereas they’re at either side of, and on the same side as, the HDMI connector in the new Google TV variant. The two generations’ remote controls also vary slightly, although I bet the foundation hardware design is identical. The lower right button in the original gave user-access favoritism to HBO Max (previously HBO Go, now known as just “Max”):
whereas now it’s Paramount+ getting the special treatment (a transition which I’m guessing was motivated by the more recent membership partnership between the two companies and implemented via a relabel of that button along with an integrated-software tweak).
Next, let’s look at some “real-life” shots, beginning with the outside packaging:
Note that, versus the front-of-box picture of its precursor that follows, Walmart’s now referring to it as capable of up to “4K” output resolution, versus the previous, less trendy “UHD”:
Also, it’s now called a “box”, versus a “device”. Hold that latter thought until next month…now, back to today’s patient…
The two sides are comparatively info-deficient:
The bottom marks a return to info-rich form:
While the top as usual never fails to elicit a chuckle from yours truly:
Let’s see what’s inside:
That’s quite a complex cardboard assemblage!
The first thing you’ll see when you flip up the top flap:
are our patient, currently swathed in protective opaque plastic, and a quick start guide that you can find in PDF form here, both as-usual accompanied in the photo by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes.
Below them, in the lower level of the cardboard assemblage, are the aforementioned remote control and a 1-meter (3.28 ft) HDMI cable:
Here’s the backside of the remote control; note the added sticker (versus its predecessor) above the battery compartment with re-pairing instructions, along with the differing information on the smaller sticker in the upper right corner within the battery compartment:
I realized after typing the previous words that since I hadn’t done a teardown of the remote control last time, I hadn’t taken a picture of its opened backside, either. Fortunately, it was still inhabiting my office, so…here you go!
Also originally located in the lower level of the cardboard assemblage are the AC adapter, an oval-shaped piece of double-sided adhesive for attaching the device to a flat surface, and a set of AAA batteries for the remote control:
Here’s the micro-USB jack that plugs into the on-device power connector:
And here are the power adapter’s specs:
which are comparable, “wall wart” form factor variances aside, with those of its predecessor:
Finally, here are some overview images of our patient, first from above:
Here’s the micro-USB side:
This side’s bare on this generation of the device:
but, as previously mentioned, contained the status LED and reset switch in the prior generation:
They’ve moved one side over this time, straddling the HDMI connector (I realized after taking this shot and subsequently moving on to the disassembly that the status LED was behind the penny; stand by for another look at it shortly!):
The last (left) side, in contrast, is bare in both generations:
Finally, here’s the device from below:
And here’s a closeup of the label, listing (among other things) the FCC ID, 2AYYS-8822K4VTG (no, I don’t know why there are 28 different FCC documents posted for this ID, either!):
Now to get inside. Ordinarily, I’d start out by peeling off that label and seeing if there are any screw heads visible underneath. But since last time’s initial focus on the gap between the two case pieces panned out, I decided to try going down that same path again:
with the same successful outcome (a reminder at the start that we’re now looking at the underside of the inside of the device):
Check out the hefty piece of metal covering roughly half of the interior and linked to the Faraday cage on the PCB, presumably for both thermal-transfer and cushioning purposes, via two spongy pieces still attached to the latter:
I’m also presuming that the metal piece adds rigidity to the overall assembly. So why doesn’t it cover the entirety of the inside? They’re not visible yet, but I’m guessing there are Bluetooth and Wi-Fi antennae somewhere whose transmit and receive potential would have been notably attenuated had there been intermediary metal shielding between them and the outside world:
See those three screws? I’m betting we can get that PCB out of the remaining top portion of the case if we remove them first:
Yep!
Before we get any further, let me show you that status LED that was previously penny-obscured:
It’s not the actual LED, of course; that’s on the PCB. It’s the emissive end of the light guide (aka, light pipe, light tube) visible in the upper left corner of the inside of the upper chassis, with its companion switch “plunger” at upper right. Note, too, that this time one (gray) of the “spongy pieces” ended up stuck to this side’s metal shielding, which once again covers only ~half of the inside area:
The other (pink) “spongy piece” is still stuck to one of the two Faraday cages on the top side of the PCB, now visible for the first time:
In the upper right corner is the aforementioned LED (cluster, actually). At bottom, as previously forecasted unencumbered by intermediary shielding thanks to their locations, are the 2.4 GHz and 5 GHz Wi-Fi antennae. Along the right edge is what I believe to be the PCB-embedded Bluetooth antenna. And as for those Faraday cages, you know what comes next:
They actually came off quite easily, leaving me cautiously optimistic that I might eventually be able to pop them back on and restore this device to full functionality (which I’ll wait to try until after this teardown is published; stay tuned for a debrief on the outcome in the comments):
Let’s zoom in and see what’s under those cage lids:
Within the upper one’s boundary are two notable ICs: a Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM and, to its right, the system’s “brains”, an Amlogic S905Y4 app processor.
And what about the lower cage region?
This one’s an enigma. That it contains the Wi-Fi and Bluetooth transceivers and other circuitry is pretty much a given, considering its proximity to the antennae (among other factors). And it very well could be one and the same as the Askey Computer 8822CS, seemingly with Realtek wireless transceiver silicon inside, that was in the earlier Android TV version of the device. Both devices support the exact same Bluetooth (5.0) and Wi-Fi (2.4/5-GHz 802.11 a/b/g/n/ac MIMO) protocol generations, and the module packaging looks quite similar in both, albeit rotated 90° in one PCB layout versus the other:
That said, unfortunately, there’s no definitively identifying sticker atop the module this time, as existed previously. If it is the same, I hope the manufacturer did a better job with its soldering this time around!
Now let’s flip the PCB back over to the bottom side we’ve already seen before, albeit now freed from its prior case captivity:
I’ll direct your attention first to the now clearly visible reset switch at upper right, along with the now obscured light guide at upper left. I’m guessing that the black spongy material makes sure that as much as possible of the light originating at the PCB on the other side makes it outside, versus inefficiently illuminating the device interior instead.
Once again, the Faraday Cage lifts off cleanly and easily:
The Faraday cage was previously located atop the PCB’s upper outlined region:
Unsurprisingly, another Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM is there, for 2 GBytes of total system memory.
The region below it, conversely, is another enigma of this design:
Its similar outline to the others suggests that a Faraday cage should have originally been there, too. But it wasn’t; you’ve seen the pictorial proof. Did the assembler forget to include it when building this particular device? Or did the manufacturer end up deciding it wasn’t necessary at all? Dunno. What I do know is that within it is nonvolatile storage, specifically the exact same Samsung KLM8G1GETF-B041 8 GByte eMMC flash memory module that we saw last time!
More generally, what surprises me the most about this design is its high degree of commonality with its predecessor despite its evolved operating system foundation:
- Same Bluetooth and Wi-Fi generations
- Same amount and speed bin of DRAM, albeit from different suppliers, and
- Same amount of flash memory, in the same form factor, from the same supplier
The SoCs are also similar, albeit not identical. The Amlogic S905Y2 seen last time dates from 2018, runs at 1.8 GHz and is a second-generation offering (therefore the “2” at the end). This time it’s the 2022-era Amlogic S905Y4, with essentially the same CPU (quad-core Arm Cortex-A53) and GPU (Mali-G31 MP2) subsystems, and fabricated on the same 12-nm lithography process, albeit running 200 MHz faster (2 GHz). The other notable difference is the 4th-gen (therefore “4” at the end) SoC’s added decoding support for the AV1 video codec, along with both HDR10 and HDR10+ high dynamic range (HDR) support.
Amlogic also offers the Amlogic S905X4; the fundamental difference between “Y” and “X” variants of a particular SoC involves the latter’s integration of wired Ethernet support. This latter chip is found in the high-end onn. Google TV 4K Pro Streaming Device, introduced last year, more sizeable (7.71 x 4.92 x 2.71 in.) than its predecessors, and now normally selling for $49.88, although I occasionally see it on sale for ~$10 less:
The 4K Pro software-exposes two additional capabilities of the 4th-generation Amlogic S905 not enabled in the less expensive non-Pro version of the device: Dolby Vision HDR and Dolby Atmos audio. It also integrates 50% more RAM (to 3 GBytes) and 4x the nonvolatile flash storage (to 32 GBytes), along with making generational advancements in wireless connectivity (Wi-Fi 6: 2.4/5-GHz 802.11ax), embedding a microphone array, and swapping out geriatric micro-USB for USB-C. And although it’s 2.5x the price of its non-Pro sibling, everything’s relative; since Google has now obsoleted the entire Chromecast line, including the HD and 4K versions of the Chromecast with Google TV, the only Google-branded option left is the $99.99 Google TV Streamer successor.
I’ve also got an onn. Google TV 4K Pro Streaming Device sitting here which, near term, I’ll be swapping into service in place of its Google Chromecast with Google TV (4K) predecessor. Near-term, stand by for an in-use review; eventually, I’m sure I’ll be tearing it down, too. And even nearer term, keep an eye out for my teardown of the “stick” form factor onn. Google TV Full HD Streaming Device, currently scheduled to appear at EDN online sometime next month:
For now, I’ll close with some HDMI and micro-USB end shots, both with the front:
and backsides of the PCB pointed “up”:
Along with an invitation for you to share thoughts on anything I’ve revealed and discussed here in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast Ultra: More than just a Stadia consort
The post Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost appeared first on EDN.
How software testing guarantees the absence of bugs

Major industries such as electric vehicles (EVs), Internet of Things (IoT), aeronautics, and railways have strict, well-established processes to ensure they can maintain high safety standards throughout their operations. This level of precision, compliance, and enforcement is particularly important for safety-critical industries such as avionics, energy, space and defense, where high emphasis is placed on the development and validation of embedded software that contemporary and newly developed vehicles and vessels rely on to ensure operational safety.
It’s rare for a software glitch on its own to cause a catastrophic event. However, as embedded software systems become more complex, so too does the onus on developers to make sure their software is able to operate within that complexity bug-free.
That’s because the increasing interconnectivity between multiple information systems has transformed critical domains like medical devices, infrastructure, transportation, and nuclear engineering. Then there are issues like asset security, risk management, and security architecture that require safe and secure operation of equipment and systems. This necessity for safety is acute not only from an operational-safety perspective, but also in terms of cybersecurity.
However, despite the rigorous testing processes and procedures already in place, subtle bugs are still missed by testing techniques that don’t provide full coverage, and those bugs embed themselves deeply within operational environments. They are unacceptable errors that cannot be allowed to remain undetected and potentially metastasize, but finding them and rooting them out is still a major challenge for most.
While the software driving embedded compute systems becomes more complex and, therefore, more vulnerable, increasingly strict safety regulations designed to protect human lives are coming into force. This means that software development teams need to devise innovative solutions that enable them to proactively address safety and security issues. They should also be able to do so quickly, to respond to demand without compromising test result integrity.
This need is particularly significant among critical software companies that depend heavily on traditional testing methods. Even when following highly focused, tried-and-true testing processes, many software development engineers harbor a nagging concern that a bug could have slipped through undetected.
That’s because bugs sometimes do slip through, which leaves many quality assurance and product managers, especially in critical industries, losing sleep over whether they have done enough to ensure software safety.
One major software supplier in the aerospace industry recently faced such a dilemma when it approached TrustInSoft with a problem.
A customer of the software supplier had discovered an undetected bug in one of several software modules that had been supplied to them, and the software was already fully operational. Once informed of the issue and being directed to resolve it, the supplier needed months to locate, understand, and ultimately rectify the bug, resulting in substantial costs for bug detection and software reengineering. The situation also had a negative impact on the supplier’s reputation and its business relationships with other customers.
That’s when they realized they needed a better, more conclusive way to ward off such incursions and do so confidently.
As a first step, the software supplier consulted TrustInSoft to see whether it was possible to confirm that the bug that had taken the supplier months to identify was not only truly gone, but also that no others were lurking undetected.
In just a few days, analysis revealed several previously undiscovered bugs in addition to what had caused the initial alarm. Each of these subtle bugs would have been extremely difficult, if not impossible, to detect using conventional methods, which is most likely why they were missed.
TrustInSoft Analyzer’s use of formal methods gives developers definitive proof that their source code is free from memory-safety issues, runtime errors, and security vulnerabilities. The analyzer’s technology is based on rigorously specified mathematical models that verify a software’s properties and behaviors against precisely defined specifications. It can, as a result, identify every potential security vulnerability within the source code.
The integration of formal methods enables users to conduct truly exhaustive analyses. What that means in practice is that complex formal method analysis techniques can be applied to—and keep pace with—increasingly sophisticated software packages. For many organizations, this intensive verification and validation process is now a requirement for safety and security-critical software development teams.
A significant advantage of formal method tools over traditional static analysis tools for both enterprise and open-source testing is the ability to efficiently perform the equivalent of billions of tests in a single run, which is unprecedented in conventional testing environments.
Critical industries provide essential services that have direct importance to our lives. But any defects in the software code at the heart of many of those industries can pose serious risks to human safety. TrustInSoft Analyzer’s ability to mathematically guarantee the absence of bugs in critical software is therefore essential to establish and maintain operational safety before it’s too late.
Caroline Guillaume is CEO of TrustInSoft.
Related Content
- Embedded Software Testing Basics
- Don’t Let Assumptions Wreck Your Code
- Software Testing Needs More Automation
- 5 Software Testing Challenges (and How to Avoid Them)
- Performance-Regression Pitfalls Every Project Should Avoid
The post How software testing guarantees the absence of bugs appeared first on EDN.
Architectural opportunities propel software-defined vehicles forward

At the end of last year, the global software-defined vehicle (SDV) market size was valued at $49.3 billion. With a compound annual growth rate exceeding 25%, the industry is set to skyrocket over the next decade. But this anticipated growth hinges on automakers addressing fundamental architectural and organizational barriers. To me, 2025 will be a pivotal year for SDVs, provided the industry focuses on overcoming these challenges rather than chasing incremental enhancements.
Moving beyond the in-cabin experience
In recent years, innovations in the realm of SDVs have primarily focused on enhancing passenger experience with infotainment systems, high-resolution touchscreens, voice-controlled car assistance, and personalization features ranging from seat positions to climate control, and even customizable options based on individual profiles.
While enhancements of these sorts have elevated the in-cabin experience to essentially replicate that of a smartphone, the next frontier in the automotive revolution lies in reimagining the very architecture of vehicles.
To truly advance the future of SDVs, I believe OEMs must partner with technology companies to architect configurable systems that enable SDV features to be unlocked on demand, unified infrastructures that optimize efficiency, and the integration of software and hardware teams at organizations. Together, these changes signal a fundamental redefinition of what it means to build and operate a vehicle in the era of software-driven mobility.
1. Cost of sluggish software updates
The entire transition to SDVs was built on the premise that OEMs could continuously improve their products, deploy new features, and offer better user experience throughout the vehicle’s lifecycle, all without having to upgrade the hardware. This has created a new business model of automakers depending on software as a service to drive revenue streams. Companies like Apple have shelved plans to build a car, instead opting to control digital content within vehicles with Apple CarPlay. As automakers rely on users purchasing software to generate revenue, the frequency of software updates has risen. However, these updates introduce a new set of challenges to both vehicles and their drivers.
When over-the-air updates are slow or poorly executed, it can cause delayed functionality in other areas of the vehicle by rendering certain features unavailable until the software update is complete. Lacking specific features can have significant implications for a user’s convenience but also surfaces safety concerns. In other instances, drivers could experience downtime where the vehicle is unusable while updates are installed, as the process may require the car to remain parked and powered off.
Rapid reconfiguration of SDV software
Modern users will soon ditch car manufacturers that continue to deliver slow over-the-air updates that impair the use of their cars, as seamless and convenient functionality remains a priority. To stay competitive, OEMs need to upgrade their vehicle architectures with configurable platforms that grant users access to features on the fly, without friction.
Advanced semiconductor solutions will play a critical role in this transformation, by facilitating the seamless integration of sophisticated electronic systems like advanced driver-assistance systems (ADAS) and in-vehicle entertainment platforms. These technological advancements are essential for delivering enhanced functionality and connected experiences that define next-generation SDVs.
To support this shift, cutting-edge semiconductor technologies such as fully depleted silicon-on-insulator (FD-SOI) and fin field-effect transistors (FinFETs) with magnetoresistive random-access memory (MRAM) are emerging as key enablers. These innovations enable the rapid reconfiguration of SDVs, significantly reducing update times and minimizing disruption for drivers. High-speed, low-power non-volatile memory (NVM) further accelerates this progress, facilitating feature updates in a fraction of the time required by traditional flash memory. Cars that evolve as fast as smartphones, giving users access to new features instantly and painlessly, will enhance customer loyalty and open up new revenue streams for automakers (Figure 1).
Figure 1 Cars that evolve as fast as smartphones using key semiconductor technologies such as FD-SOI, FinFET, and MRAM will give users access to new features instantly and painlessly. Source: Getty Images
2. Inefficiencies of distinct automotive domains
The present design of automotive architecture also lends itself to challenges, as today’s vehicles are built around an architecture that is split into distinct domains: motion control, ADAS, and entertainment. These domains function independently, each with its own control unit.
This current domain-based system has led to inefficiencies across the board. With domains housed in separate infrastructures, there are increased costs, weight, and energy consumption associated with computing. Especially as OEMs increasingly integrate new software and AI into the systems of SDVs, the domain architecture of cars presents the following challenges:
- Different software modules must run on the same hardware without interference.
- Software portability across different hardware in automotive systems is often limited.
- AI is the least hardware-agnostic component in automotive applications, complicating integration without close collaboration between hardware and software systems.
The inefficiencies of domain-based systems will continue to be amplified as SDVs become more sophisticated, with an increasing reliance on AI, connectivity, and real-time data processing, highlighting the need for upgrades to the architecture.
Optimizing a centralized architecture
OEMs are already trending toward a more unified hardware structure by moving from distinct silos to an optimized central architecture under a single house, and I anticipate a stronger shift toward this trend in the coming years. By sharing infrastructure like cooling systems, power supplies, and communication networks, this approach brings greater efficiency, both lowering costs and improving performance.
As we look to the future, the next logical step in automotive innovation will be to merge domains into a single system-on-chip (SoC) to easily port software between engines, reducing R&D costs and driving further innovation. In addition, chiplet technology ensures the functional safety of automotive systems by maintaining freedom from interference, while also enabling the integration of various AI engines into SDVs, paving the way for more agile innovation without overhauling entire vehicles (Figure 2).
Figure 2 Merging multiple domains into a single, central SoC is key to realizing SDVs. This architectural shift inherently relies upon chiplet technology to ensure the functional safety of automotive systems. Source: Getty Images
3. The reorganization companies must face
Many of these software and hardware architectural challenges stem from the current organization of companies in the industry. Historically, automotive companies have operated in silos, with hardware and software development functioning as distinct, and often disconnected, entities. This legacy approach is increasingly incompatible with the demands of SDVs.
Bringing software to the forefront
Moving forward, automakers must shift their focus from being hardware-centric manufacturers to becoming software-first innovators. Similar to technology companies, automakers must adopt new business models that allow for continuous improvement and rapid iteration. This involves restructuring organizations to promote cross-functional collaboration, bringing traditionally isolated departments together to ensure seamless integration between hardware and software components.
While restructuring any business requires significant effort, this transformation will also reap meaningful benefits. By prioritizing software first, automakers will be able to deliver vehicles with scalable, future-proofed architectures while also keeping customers satisfied as seamless over-the-air updates remain a defining factor of the SDV experience.
Semiconductors: The future of SDV architecture
The SDV revolution stands at a crossroads; while the in-cabin experience has made leaps in advancements, the architecture of vehicles must evolve to meet future consumer demands. Semiconductors will play an essential role in the future of SDV architecture, enabling seamless software updates without disruption, centralizing domains to maximize efficiency, and driving synergy between software and hardware teams.
Sudipto Bose is Senior Director of the Automotive Business Unit at GlobalFoundries.
Related Content
- CES 2025: Wirelessly upgrading SDVs
- CES 2025: Moving toward software-defined vehicles
- Software-defined vehicle (SDV): A technology to watch in 2025
- Will open-source software come to SDV rescue?
The post Architectural opportunities propel software-defined vehicles forward appeared first on EDN.
Why optical technologies matter in machine vision systems

Machine vision systems are becoming increasingly common across multiple industries. Manufacturers use them to streamline quality control, self-driving vehicles implement them to navigate, and robots rely on them to work safely alongside humans. Amid these rising use cases, design engineers must focus on the importance of reliable and cost-effective optical technologies.
While artificial intelligence (AI) algorithms may take most of the spotlight in machine vision, the optical systems that provide the data these models analyze are just as crucial. By designing better cameras and sensor arrays, design engineers can improve machine vision on several fronts.
Optical systems are central to machine vision accuracy before the underlying AI model even starts working. Algorithms are only effective when they have sufficient relevant training data, and that data must be captured by cameras.
Some organizations have turned to using AI-generated synthetic data in training, but this is not a perfect solution. These images may contain errors and hallucinations, hindering the model’s accuracy. Consequently, they often require real-world information to complement them, which must come from high-quality sources.
Developing high-resolution camera technologies with large dynamic ranges gives AI teams the tools necessary to capture detailed images of real-world objects. As a result, it becomes easier to train more reliable machine vision models.
Expanding machine vision applications
Machine vision algorithms need high-definition visual inputs during deployment. Even the most accurate model can produce inconsistent results if the images it analyzes aren’t clear or consistent enough.
External factors like lighting can limit measurement accuracy, so designers must pay attention to these considerations in their optical systems, not just the cameras themselves. Providing sufficient light from the right angles to minimize shadows, along with sensors that adjust focus accordingly, can improve reliability.
Next, video data and still images are not the only optical inputs to consider in a machine vision system. Design engineers can also explore a variety of technologies to complement conventional visual data.
For instance, lidar is an increasingly popular choice. Radar is already widespread, with more than half of all new cars today coming with at least one radar sensor to enable functions like lane departure warnings, and lidar is following a similar trajectory as self-driving features grow.
Complementing a camera with lidar sensors can provide these machine vision systems with a broader range of data. More input diversity makes errors less likely, especially when operating conditions may vary. Laser measurements and infrared cameras could likewise expand the roles machine vision serves.
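To make the benefit of input diversity concrete, here is a minimal sketch under purely illustrative assumptions about data formats: a camera detector that returns 2D bounding boxes with confidence scores, and a lidar unit whose returns have been projected into the image plane. The fusion step attaches a median lidar depth to each detection and drops detections that no lidar return corroborates.

```python
from statistics import median

def fuse_camera_lidar(detections, depth_points):
    """Attach a lidar depth estimate to each camera detection.

    detections  : list of dicts {"box": (x0, y0, x1, y1), "score": float}
    depth_points: list of tuples (x, y, depth_m) projected into the image plane
    Both formats are hypothetical and chosen only for illustration.
    """
    fused = []
    for det in detections:
        x0, y0, x1, y1 = det["box"]
        # Collect lidar returns that fall inside the camera bounding box.
        depths = [d for (x, y, d) in depth_points if x0 <= x <= x1 and y0 <= y <= y1]
        if not depths:
            continue  # no corroborating lidar data; treat the detection as unverified
        fused.append({**det, "depth_m": median(depths), "n_lidar_points": len(depths)})
    return fused

# Example: one pedestrian detection corroborated by three lidar returns.
dets = [{"box": (100, 80, 160, 220), "score": 0.91}]
pts = [(120, 150, 12.4), (130, 160, 12.6), (150, 200, 12.5), (400, 300, 30.0)]
print(fuse_camera_lidar(dets, pts))  # depth_m = 12.5 m from the three in-box returns
```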
The demand for high-quality inputs means the optical technologies in a machine vision system are often some of its most expensive components. By focusing on developing lower-cost solutions that maintain acceptable quality levels, designers can make them more accessible.
It’s worth noting that advances in camera technology have already brought the cost of a high-end system down from $1 million to $100,000. Further innovation could have a similar effect.
Machine vision needs reliable optical technologies
AI is only as accurate as its input data. So, machine vision needs advanced optical technologies to reach its full potential. Design engineers hoping to capitalize on this field should focus on optical components to push the industry forward.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- What Is Machine Vision All About?
- Know Your Machine Vision Components
- Video Cameras and Machine Vision: A Technology Overview
- How Advancements in Machine Vision Propel Factory Revolution
- Machine Vision Approach Addresses Limitations of Standard 3D Sensing Technologies
The post Why optical technologies matter in machine vision systems appeared first on EDN.
Automotive chips improve ADAS reliability

TI has expanded its automotive portfolio with a high-speed lidar laser driver, BAW-based clocks, and a mmWave radar sensor. These devices support the development of adaptable ADAS for safer, more automated driving.
The LMH13000 is claimed to be the first laser driver with an ultra-fast 800-ps rise time, enabling up to 30% longer distance measurements than discrete implementations and enhancing real-time decision making. It integrates LVDS, CMOS, and TTL control signals, eliminating the need for large capacitors or additional external circuitry. The device delivers up to 5 A of adjustable output current with just 2% variation across an ambient temperature range of -40°C to +125°C.
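As a rough illustration of why edge speed matters for ranging (the figures below are back-of-the-envelope assumptions, not TI specifications beyond the quoted 800-ps rise time), the sketch converts edge-timing uncertainty into distance for a time-of-flight measurement: light covers about 30 cm per nanosecond, and the round trip halves the effect.

```python
C = 299_792_458  # speed of light, m/s

def range_ambiguity(edge_time_s):
    """Distance ambiguity contributed by an edge of the given duration
    in a time-of-flight measurement (round trip, hence the divide-by-2)."""
    return C * edge_time_s / 2

# Compare the 800-ps rise time against a slower 2-ns edge (assumed for a discrete driver).
for t_rise in (800e-12, 2e-9):
    print(f"{t_rise * 1e12:.0f} ps edge -> ~{range_ambiguity(t_rise) * 100:.0f} cm of range ambiguity")
# 800 ps -> ~12 cm; 2000 ps -> ~30 cm
```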
By leveraging bulk acoustic wave (BAW) technology, the CDC6C-Q1 oscillator and the LMK3H0102-Q1 and LMK3C0105-Q1 clock generators provide 100× greater reliability than quartz-based clocks, with a failure-in-time (FIT) rate as low as 0.3. These devices improve clocking precision in next-generation vehicle subsystems.
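For context on what a 0.3 FIT rate means, the short calculation below converts FIT, defined as failures per 10^9 device-hours, into mean time between failures; the 30-FIT comparison value is simply a hypothetical quartz part consistent with the claimed 100× factor.

```python
def fit_to_mtbf_hours(fit):
    """FIT is failures per 1e9 device-hours, so MTBF in hours = 1e9 / FIT."""
    return 1e9 / fit

for label, fit in (("BAW clock (claimed 0.3 FIT)", 0.3),
                   ("hypothetical 30-FIT quartz part", 30.0)):
    hours = fit_to_mtbf_hours(fit)
    print(f"{label}: MTBF ~ {hours:.1e} hours ({hours / 8760:,.0f} years)")
# 0.3 FIT -> ~3.3e9 hours; 30 FIT -> ~3.3e7 hours
```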
TI’s AWR2944P front and corner radar sensor builds on the AWR2944 platform, offering a higher signal-to-noise ratio, enhanced compute performance, expanded memory, and an integrated radar hardware accelerator. The accelerator enables the system’s MCU and DSP to perform machine learning tasks for edge AI applications.
Preproduction quantities of the LMH13000, CDC6C-Q1, LMK3H0102-Q1, LMK3C0105-Q1, and AWR2944P are available now on TI.com. Additional output current options and an automotive-qualified version of the LMH13000 are expected in 2026.
The post Automotive chips improve ADAS reliability appeared first on EDN.
PMIC fine-tunes power for MPUs and FPGAs

Designed for high-end MPU and FPGA systems, the Microchip MCP16701 PMIC integrates eight 1.5-A buck converters that can be paralleled and are duty cycle-capable. It also includes four 300-mA LDO regulators and a controller to drive external MOSFETs.
The MCP16701 enables dynamic VOUT adjustment across all converters, from 0.6 V to 1.6 V in 12.5-mV steps and from 1.6 V to 3.8 V in 25-mV steps. This flexibility allows precise power tuning for specific requirements in industrial computing, data servers, and edge AI, enhancing overall system efficiency.
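As a quick illustration of how the two step sizes map a requested rail voltage to a programmable setpoint, here is a small sketch; it uses only the ranges quoted above, the device's actual register encoding is not shown, and any real design should follow the datasheet.

```python
def nearest_setpoint(v_req):
    """Snap a requested VOUT to the nearest step described above:
    0.6-1.6 V in 12.5-mV steps, 1.6-3.8 V in 25-mV steps.
    Illustrative only; the real programming interface may differ."""
    if not 0.6 <= v_req <= 3.8:
        raise ValueError("requested voltage outside the adjustable range")
    base, step = (0.6, 0.0125) if v_req <= 1.6 else (1.6, 0.025)
    return round(base + round((v_req - base) / step) * step, 4)

print(nearest_setpoint(0.857))  # 0.8625 V, the nearest 12.5-mV step
print(nearest_setpoint(3.312))  # 3.3 V, the nearest 25-mV step
```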
Housed in a compact 8×8-mm VQFN package, the PMIC reduces board area by 48% and cuts component count to less than 60% of that of a discrete design. It supports Microchip’s PIC64-GX MPU and PolarFire FPGAs with a configurable feature set and operates from -40°C to +105°C. An I2C interface facilitates communication with other system components.
The MCP16701 costs $3 each in lots of 10,000 units.
The post PMIC fine-tunes power for MPUs and FPGAs appeared first on EDN.
PXI testbench strengthens chip security testing

The DS1050A Embedded Security Testbench from Keysight is a scalable PXI-based platform for advanced side-channel analysis (SCA) and fault injection (FI) testing. Designed for modern chips and embedded devices, it builds on the Device Vulnerability Analysis product line, offering up to 10× higher test effectiveness to help identify and mitigate hardware-level security threats.
This modular platform combines three core components—the M9046A PXIe chassis, M9038A PXIe embedded controller, and Inspector software. It integrates key tools, including oscilloscopes, interface equipment, amplifiers, and trigger generators, into a single chassis, reducing cabling and improving inter-module communication speed.
The 18-slot M9046A PXIe chassis delivers up to 1675 W of power and supports 85 W of cooling per slot, accommodating both Keysight and third-party test modules. Powered by an Intel Core i7-9850HE processor, the M9038A embedded controller provides the computing performance required for complex tests. Inspector software simulates diverse fault conditions, supports data acquisition, and enables advanced cryptanalysis across embedded devices, chips, and smart cards.
For more information on the DS1050A Embedded Security Testbench, or to request a price quote, visit the Keysight product page.
The post PXI testbench strengthens chip security testing appeared first on EDN.
Sensor brings cinematic HDR video to smartphones

Omnivision’s OV50X CMOS image sensor delivers movie-grade video capture with ultra-high dynamic range (HDR) for premium smartphones. Based on the company’s TheiaCel and dual conversion gain (DCG) technologies, the color sensor achieves single-exposure HDR approaching 110 dB—reportedly the highest available in smartphones.
The OV50X is a 50-Mpixel sensor with a 1.6-µm pixel pitch and an 8192×6144 active array in a 1-in. optical format. It supports 4-cell binning, providing 12.5-Mpixel output at up to 180 frames/s, or 60 frames/s with three-exposure HDR. The sensor also enables 8K video with dual analog gain HDR and on-sensor crop-zoom capability.
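The pixel counts are easy to cross-check; the arithmetic below uses only the figures quoted above.

```python
full_w, full_h = 8192, 6144
full_mp = full_w * full_h / 1e6
binned_mp = (full_w // 2) * (full_h // 2) / 1e6  # 4-cell binning combines 2x2 pixel groups

print(f"Full array: {full_mp:.1f} Mpixels")    # ~50.3 Mpixels, i.e., the 50-Mpixel rating
print(f"2x2 binned: {binned_mp:.1f} Mpixels")  # ~12.6 Mpixels, matching the ~12.5-Mpixel binned output
```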
TheiaCel employs lateral overflow integration capacitor (LOFIC) technology in combination with Omnivision’s proprietary DCG HDR to capture high-quality images and video in difficult lighting conditions. Quad phase detection (QPD) with 100% sensor coverage enables fast, precise autofocus across the entire frame—even in low light.
The OV50X image sensor is currently sampling, with mass production slated for Q3 2025.
The post Sensor brings cinematic HDR video to smartphones appeared first on EDN.
GaN transistors integrate Schottky diode

Medium-voltage CoolGaN G5 transistors from Infineon include a built-in Schottky diode to minimize dead-time losses and enhance system efficiency. The integrated diode also streamlines power stage design and helps reduce BOM cost.
In hard-switching designs, GaN devices can suffer from higher power losses due to body diode behavior, especially with long controller dead times. CoolGaN G5 transistors address this by integrating a Schottky diode, improving efficiency across applications such as telecom IBCs, DC/DC converters, USB-C chargers, power supplies, and motor drives.
GaN transistor reverse conduction voltage (VRC) depends on the threshold voltage (VTH) and the OFF-state gate bias (VGS), as there is no body diode. Since VTH is typically higher than the turn-on voltage of silicon diodes, reverse conduction losses increase in third-quadrant operation. The CoolGaN transistor reduces these losses, improves compatibility with high-side gate drivers, and allows broader controller compatibility thanks to relaxed dead-time requirements.
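To see why reverse-conduction voltage matters, here is a back-of-the-envelope estimate of dead-time conduction loss; every operating value below is an assumption for illustration, not an Infineon specification. During each dead-time interval the load current free-wheels through the third-quadrant path, and that happens twice per switching period.

```python
def dead_time_loss(v_rc, i_load, t_dead, f_sw):
    """Approximate dead-time conduction loss for one bridge leg:
    the load current flows through the reverse-conduction path
    for two dead-time intervals every switching period."""
    return v_rc * i_load * 2 * t_dead * f_sw

i_load = 20.0   # A, assumed load current
t_dead = 50e-9  # s, assumed controller dead time
f_sw = 500e3    # Hz, assumed switching frequency

for label, v_rc in (("higher reverse-conduction voltage (assumed 4 V)", 4.0),
                    ("Schottky-clamped path (assumed 1 V)", 1.0)):
    print(f"{label}: ~{dead_time_loss(v_rc, i_load, t_dead, f_sw):.1f} W")
# ~4.0 W vs. ~1.0 W under these assumptions
```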
The first device in the CoolGaN G5 series with an integrated Schottky diode is a 100-V, 1.5-mΩ transistor in a 3×5-mm PQFN package. Engineering samples and a target datasheet are available upon request.
The post GaN transistors integrate Schottky diode appeared first on EDN.
Shoot-through

This phenomenon has nothing to do with “Gunsmoke” or with “Have Gun, Will Travel”. (Do you remember those old TV shows?) The phrase “shoot-through” describes unwanted and possibly destructive pulses of current flowing through power semiconductors in certain power supply designs.
In half-bridge and full-bridge power inverters, we have one pair (half-bridge) or two pairs (full-bridge) of power switching devices connected in series from a rail voltage to a rail voltage return. Those devices could be power MOSFETs, IGBTs, or whatever, but the requirement in each case is the same. That requirement is that the two devices in each pair turn on and off in alternate fashion. If the upper one is on, the lower one is off. If the upper one is off, the lower one is on.
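Stated as a toy model, the alternating requirement looks like the sketch below: one PWM reference produces complementary upper and lower gate commands, with a short interval around each transition in which both are held off. The step counts are arbitrary; the point is only that the two commands are never asserted at the same time.

```python
def complementary_gates(duty, period_steps=100, dead_steps=3):
    """Generate idealized upper/lower gate commands for one bridge leg.
    The lower switch is the complement of the upper one, with a few
    'dead' steps inserted around each transition so both are briefly off."""
    on_steps = int(duty * period_steps)
    upper = [1 if t < on_steps else 0 for t in range(period_steps)]
    lower = []
    for t in range(period_steps):
        near_edge = min(abs(t), abs(t - on_steps), abs(t - period_steps)) < dead_steps
        lower.append(0 if upper[t] or near_edge else 1)
    return upper, lower

upper, lower = complementary_gates(duty=0.4)
assert not any(u and l for u, l in zip(upper, lower)), "shoot-through: both switches commanded on"
print("upper on-steps:", sum(upper), "| lower on-steps:", sum(lower))
```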
The circuit board seen in Figure 1 was one such design based on a full-bridge power inverter, and it had a shoot-through issue.
Figure 1 A full-bridge circuit board with a shoot-through issue and the test arrangement used to assess it.
A super simplified SPICE simulation shows conceptually what was going amiss with that circuit board, Figure 2.
Figure 2 A SPICE simulation that conceptually walks through the shoot-through problem occurring on the circuit in Figure 1.
S1 represents the board’s Q1 and Q2 upper switches and S2 represents the board’s Q4 and Q3 lower switches. At each switching transition, there was a brief moment when one switch had not quite turned off by the time its corresponding switch had turned on. With both switching devices on at the same time, however brief that “same” time was, there would be a pulse of current flowing from the board’s rail through the two switches and into the board’s rail return. That current pulse would be of essentially unlimited magnitude and the two switching devices could and would suffer damage.
Electromagnetic interference issues arose as well, but that’s a separate discussion.
Old hands will undoubtedly recognize the following, but let’s take a look at the remedy shown in Figure 3.
Figure 3 Shoot-through problem solved by introducing two diodes to speed up the switches’ turn-off times.
The capacitors C1 and C2 represent the input gate capacitances of the power MOSFETs that served as the switches. The shoot-through issue would arise when one of those capacitances was not fully discharged before the other capacitance got raised to its own full charge. Adding two diodes sped up the capacitance discharge times so that essentially full discharge was achieved for each FET before the other one could turn on.
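The speed-up can be approximated with a simple RC estimate; the component values below are placeholders rather than the board's actual parts. Without the diode, the gate capacitance discharges through the full gate resistance; with the diode providing a low-impedance path, the gate falls below its turn-off threshold much sooner.

```python
import math

def discharge_time(c_gate, r_discharge, v_start, v_threshold):
    """Time for an RC discharge from v_start down to v_threshold:
    t = R * C * ln(v_start / v_threshold)."""
    return r_discharge * c_gate * math.log(v_start / v_threshold)

c_iss = 4e-9     # F, assumed MOSFET input capacitance
v_drive = 12.0   # V, assumed gate-drive voltage
v_th = 3.0       # V, assumed turn-off threshold

slow = discharge_time(c_iss, 22.0, v_drive, v_th)  # through an assumed 22-ohm gate resistor
fast = discharge_time(c_iss, 2.0, v_drive, v_th)   # through the diode path, assumed ~2 ohms
print(f"without diode: ~{slow * 1e9:.0f} ns, with diode: ~{fast * 1e9:.0f} ns")
# roughly 122 ns vs. 11 ns under these assumptions
```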
With simultaneous turn-on thus prevented, the troublesome current pulses on that circuit board were eliminated.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Shoot-thru suppression
- Tip of the Week: How to best implement a synchronous buck converter
- MOSFET Qrr: Ignore at your peril in the pursuit of power efficiency
- EMI and circuit components: Where the rubber meets the road
The post Shoot-through appeared first on EDN.
Addressing hardware failures and silent data corruption in AI chips

Meta trained one of its AI models, called Llama 3, in 2024 and published the results in a widely covered paper. During a 54-day period of pre-training, Llama 3 experienced 466 job interruptions, 419 of which were unexpected. Upon further investigation, Meta learned 78% of those hiccups were caused by hardware issues such as GPU and host component failures.
Hardware issues like these don’t just cause job interruptions. They can also lead to silent data corruption (SDC), causing unwanted data loss or inaccuracies that often go undetected for extended periods.
While Meta’s pre-training interruptions were unexpected, they shouldn’t be entirely surprising. AI models like Llama 3 have massive processing demands that require colossal computing clusters. For training alone, AI workloads can require hundreds of thousands of nodes and associated GPUs working in unison for weeks or months at a time.
The intensity and scale of AI processing and switching create a tremendous amount of heat, voltage fluctuations and noise, all of which place unprecedented stress on computational hardware. The GPUs and underlying silicon can degrade more rapidly than they would under normal (or what used to be normal) conditions. Performance and reliability wane accordingly.
This is especially true for sub-5-nm process technologies, where silicon degradation and faulty behavior are observed both at manufacturing and in the field.
But what can be done about it? How can unanticipated interruptions and SDC be mitigated? And how can chip design teams ensure optimal performance and reliability as the industry pushes forward with newer, bigger AI workloads that demand even more processing capacity and scale?
Ensuring silicon reliability, availability and serviceability (RAS)
Certain AI players like Meta have established monitoring and diagnostics capabilities to improve the availability and reliability of their computing environments. But with processing demands, hardware failures and SDC issues on the rise, there is a distinct need for test and telemetry capabilities at deeper levels—all the way down to the silicon and multi-die packages within each XPU/GPU as well as the interconnects that bring them together.
The key is silicon lifecycle management (SLM) solutions that help ensure end-to-end RAS, from design and manufacturing to bring-up and in-field operation.
With better visibility, monitoring, and diagnostics at the silicon level, design teams can:
- Gain telemetry-based insights into why chips are failing or why SDC is occurring.
- Identify voltage or timing degradation, overheating, and mechanical failures in silicon components, multi-die packages, and high-speed interconnects.
- Conduct more precise thermal and power characterization for AI workloads.
- Detect, characterize, and resolve radiation effects, voltage noise, and other failure mechanisms that can lead to undetected bit flips and SDC.
- Improve silicon yield, quality, and in-field RAS.
- Implement reliability-focused techniques, such as triple modular redundancy (TMR) and dual-core lockstep, during the register-transfer level (RTL) design phase to mitigate SDC (see the TMR voter sketch after this list).
- Establish an accurate pre-silicon aging simulation methodology to detect sensitive or vulnerable circuits and replace them with aging-resilient circuits.
- Improve outlier detection on reliability models, which helps minimize in-field SDC.
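Triple modular redundancy is easy to picture at the behavioral level. The following minimal voter sketch is written in Python for readability, whereas a real implementation would be RTL; it assumes three redundant copies of the same computed value.

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Majority vote across three redundant copies of a value.
    A single corrupted copy (e.g., from an undetected bit flip) is outvoted,
    and any disagreement is reported so the fault can be logged."""
    value, count = Counter([a, b, c]).most_common(1)[0]
    return value, count == 3  # (voted result, unanimous?)

# A bit flip in one copy is masked and flagged:
print(tmr_vote(0b1011, 0b1011, 0b1010))  # -> (11, False): 0b1011 wins, disagreement noted
```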
Silicon lifecycle management (SLM) solutions help ensure end-to-end reliability, availability, and serviceability. Source: Synopsys
An SLM design example
SLM IP and analytics solutions help improve silicon health and provide operational metrics at each phase of the system lifecycle. This includes environmental monitoring for understanding and optimizing silicon performance based on the operating environment of the device; structural monitoring to identify performance variations from design to in-field operation; and functional monitoring to track the health and anomalies of critical device functions.
Below are the key features and capabilities that SLM IP provides:
- Process, voltage, and temperature monitors
  - Help ensure optimal operation while maximizing performance, power, and reliability.
  - Provide highly accurate, distributed monitoring throughout the die, enabling thermal management via frequency throttling.
- Path margin monitors
  - Measure the timing margin of 1,000+ synthetic and functional paths (in-test and in-field).
  - Enable silicon performance optimization based on actual margins.
  - Offer automated path selection, IP insertion, and scan generation.
- Clock and delay monitors
  - Measure the delay between the edges of one or more signals.
  - Check the quality of the clock duty cycle.
  - Measure and track memory read access time with built-in self-test (BIST).
  - Characterize digital delay lines.
- UCIe monitor, test, and repair
  - Monitor signal integrity of die-to-die UCIe lanes.
  - Generate algorithmic BIST patterns to detect interconnect fault types, including lane-to-lane crosstalk.
  - Perform cumulative lane repair with redundancy allocation (at manufacturing and in-field).
- High-speed access and test
  - Enable testing over functional interfaces (PCIe, USB, and SPI).
  - Apply to in-field operation as well as wafer sort, final test, and system-level test.
  - Can be used in conjunction with automated test equipment.
  - Help conduct in-field remote diagnosis and lower test cost via reduced pin count.
- HBM external test and repair
  - Comprehensive, silicon-proven DRAM stack test, repair, and diagnostics engine.
  - Supports third-party HBM DRAM stack providers.
  - Provides high-performance die-to-die interconnect test and repair support.
  - Operates in conjunction with the HBM PHY and supports a range of HBM protocols and configurations.
- SLM hierarchical subsystem
  - Automated hierarchical SLM and test manageability solution for system-on-chips (SoCs).
  - Automated integration and access of all IP/cores with in-system scheduling.
  - Pre-validated, ATE-ready patterns with pattern porting.
Silicon test and telemetry in the age of AI
With the scale and processing demands of AI devices and workloads on the rise, system reliability, silicon health and SDC issues are becoming more widespread. While there is no single solution or antidote for avoiding these issues, deeper and more comprehensive test, repair, and telemetry—at the silicon level—can help mitigate them. The ability to detect or predict in-field chip degradation is particularly valuable, enabling corrective action before sudden or catastrophic system failures occur.
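As one illustration of how silicon telemetry feeds that kind of prediction, the sketch below trends the timing slack reported by on-die path margin monitors and flags parts whose margin is low or steadily eroding; the data format and thresholds are hypothetical.

```python
def flag_degrading_paths(slack_history_ps, min_slack_ps=20.0, max_drift_ps_per_sample=-0.5):
    """Flag monitored paths whose timing slack is low or shrinking sample over sample.
    slack_history_ps maps a path name to a list of slack readings in ps, oldest first
    (a hypothetical telemetry format used only for illustration)."""
    flagged = {}
    for path, samples in slack_history_ps.items():
        trend = (samples[-1] - samples[0]) / (len(samples) - 1)  # average change per sample
        if samples[-1] < min_slack_ps or trend < max_drift_ps_per_sample:
            flagged[path] = {"latest_ps": samples[-1], "trend_ps_per_sample": round(trend, 2)}
    return flagged

history = {
    "core0_mac_path": [45, 45, 44, 44, 44],  # mild drift, still healthy
    "hbm_phy_path":   [60, 48, 37, 26, 18],  # rapid erosion: flag before functional failure
}
print(flag_degrading_paths(history))  # only hbm_phy_path is flagged
```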
Delivering end-to-end visibility through RAS, silicon test, repair, and telemetry will be increasingly important as we move toward the age of AI.
Shankar Krishnamoorthy is chief product development officer at Synopsys.
Krishna Adusumalli is R&D engineer at Synopsys.
Jyotika Athavale is architecture engineering director at Synopsys.
Yervant Zorian is chief architect at Synopsys.
Related Content
- Uncovering Silent Data Errors with AI
- 11 steps to successful hardware troubleshooting
- Self-testing in embedded systems: Hardware failure
- Understanding and combating silent data corruption
- Test solutions to confront silent data corruption in ICs
The post Addressing hardware failures and silent data corruption in AI chips appeared first on EDN.