EDN Network
Portable power station battery capacity extension: Curious coordination

I’m still awaiting an opportunity (spare time, no snow on the deck, and calm winds) to test out those two 220W solar panels I already mentioned buying last year:
for parallel-combining and mating with my EcoFlow DELTA 2 portable power station:
While I remain on more-favorable-conditions standby, I’ve got two other pieces of EcoFlow gear in the queue to tell you about. One, the 800W Alternator Charger that I mentioned in a more recent piece, isn’t a high installation priority right now, so hands-on results will also need to wait.
But the other (and eventually also its replacement; hold that thought), which I pressed into service as soon as it arrived, is the topic of today’s post. It’s the DELTA 2 Smart Extra Battery, which mates to the DELTA 2 base unit via a thick cable terminated with XT150 connectors at both ends and doubles the combo’s effective storage capacity:
Here’s what my two identical-sized (15.7 x 8.3 x 11 in/400 x 211 x 281 mm) albeit different-weight (DELTA 2 base unit: 27 lbs/12 kg, DELTA 2 Smart Extra Battery: 21 lbs/9.5 kg) devices look like in their normal intended stacked configuration:
And here’s my more haphazard, enthusiastic initial out-of-box hookup of them:
In the latter photo, if you look closely, you can already discern why I returned the original Smart Extra Battery, which (like both its companion and its replacement) was a factory-refurbished unit from EcoFlow’s eBay storefront. Notice the brightness difference between its display and the DELTA 2’s more intense one. I should note upfront that at the time I took that photo, both devices’ screens still had the factory-installed clear plastic protectors on them, so there might have been some resultant muting. But presumably it would have dimmed both units’ displays equally.
The displays are odd in and of themselves. When I’d take a screen protector off, I’d see freakish “static” (for lack of a better word) scattered all over it for a few (dozen) seconds, and I could also subsequently simulate a semblance of the same effect by rubbing my thumb over the display. This photo shows the artifacts to a limited degree (note, in particular, the lower left quadrant):
My root-cause research has been to-date fruitless; I’d welcome reader suggestions on what core display technology EcoFlow is using and what specific effect is at play when these artifacts appear. Fortunately, if I wait long enough, they eventually disappear!
As for the defective display in particular, its behavior was interesting, too. LCDs, for example, typically document a viewing angle specification, which is the maximum off-axis angle at which the display still delivers optimum brightness, contrast and other attributes. Beyond that point, typically to either side but also vertically, image quality drops off. With the DELTA 2 display, it was optimum when viewed straight on, with drop-off both from above and below. With the original Smart Extra Battery display, conversely, quality was optimum when viewed from below, almost (or maybe exactly) as if the root cause was a misaligned LCD polarizer. Here are closeups of both devices’ displays, captured straight on in both cases, post-charging:
After checking with Reddit to confirm that what I was experiencing was atypical, I reached out to EcoFlow’s eBay support team, who promptly and thoroughly took care of me (and no, they didn’t know I was a “press guy”, either), with FedEx picking up the defective unit, return shipping pre-paid, at my front door:
and a replacement, quick-shipped to me as soon as the original arrived back at EcoFlow.
That’s better!
The Smart Extra Battery appears within the app screens for the DELTA 2, rather than as a distinct device:
Here’s the thick interconnect cable:
I’d initially thought EcoFlow forgot to include it, but eventually found it (plus some documentation) in a storage compartment on top of the device:
Here are close-ups of the XT150 connectors, both at-device (the ones on the sides of the DELTA 2 and Smart Extra Battery are identical) and on-cable (they’re the same on both ends):
I checked for available firmware updates after first-time connecting them; one was available.
I don’t know whether it was related to the capacity expansion specifically or just coincidentally timed, nor whether it was for the DELTA 2 (with in-progress status shown in the next photo), the Smart Extra Battery, or both…but it completed uneventfully and successfully.
Returning to the original unit, as that’s what I’d predominantly photo-documented, it initially arrived only 30% “full”:
With the DELTA 2 running the show, first-time charging of the Smart Extra Battery was initially rapid and high power-drawing; note the incoming power measured at it:
and flowing both into and out of the already-fully-charged DELTA 2:
As the charging process progressed, the current flow into the Smart Extra Battery slowed, eventually to a (comparative) trickle:
until it finished. Note the high reported Smart Extra Battery temperature immediately after charge completion, both in an absolute sense and relative to the normal-temperature screenshot shown earlier!
In closing, allow me to explain the “Curious Coordination” bit in the title of this writeup. I’d assumed upfront that if I lost premises power and needed to harness the electrons previously collected within the DELTA 2/Smart Extra Battery combo, the Smart Extra Battery would be drained first. Such a sequence would theoretically allow me to, for example, then disconnect the Smart Extra Battery and replace it with another already-fully-charged one I might have sitting around, to further extend the setup’s total usable timespan prior to complete depletion.
In saying this, I realize that such a scenario isn’t likely, since the Smart Extra Battery can’t be charged directly from AC (or solar, for that matter) but instead requires an XT150-equipped “smart” source such as a (second, in this scenario) DELTA 2. That said, what I discovered when I finally got the gear in my hands was the exact opposite: the DELTA 2 battery drained first, down to a nearly (but not completely) empty point, then the discharge source switched to the extra battery. Further research has since educated me that actual behavior varies depending on how much current is demanded by whatever the combo is powering; in heavy-load scenarios, the two devices’ battery packs drain in parallel.
What are your thoughts on this behavior, and/or anything else I’ve mentioned here? Share them with your fellow readers (and me!) in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2024 edition
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
- EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You
The post Portable power station battery capacity extension: Curious coordination appeared first on EDN.
LM4041 voltage regulator impersonates precision current source

The LM4041 has been around for over 20 years. During those decades, while primarily marketed as a precision adjustable shunt regulator, this classic device also found its way into alternative applications. These include voltage comparators, overvoltage protectors, voltage limiters, etc. Voltage, voltage, voltage, must it always be voltage? It gets tedious. Surely this popular precision chip, while admittedly rather—um—“mature”, must have untapped potential for doing something that doesn’t start with voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The Design Idea (DI) presented in Figure 1 offers the 4041 an unusual, possibly weird, maybe even new role to play. It’s a precision current source.
Figure 1 Weirdly, the “CATHODE” serves as the sense pin for active current source regulation.
The above block diagram shows how the 4041 works at a conceptual level:
Sourced current = Is = (V+ – (Vc + 1.24 V))/R1
Is > 0, V+ < 15 V, Is < 20 mA
The series connection subtracts an internal 1.24-V precision reference from the external voltage input on the CATHODE pin. The internal op-amp subtracts the voltage input on the FB pin from that difference, then amplifies and applies the result to the pass transistor. If it’s positive [(V+ – 1.24) > Vc], the transistor turns on and shunts current from CATHODE to ANODE. Otherwise, it turns off.
When a 4041 is connected in the traditional fashion (FB connected to CATHODE and ANODE grounded), the scheme works like a shunt voltage regulator should, forcing CATHODE to the internal 1.24-V reference voltage. But what will happen if the FB pin is connected to a constant control voltage [Vc < (V+ – 1.24 V)] and CATHODE, instead of being connected to FB, floats freely on current-sensing resistor R1?
What happens is the current gets regulated instead of the voltage. Because Vc is fixed and can’t be pulled up to make FB = CATHODE – 1.24, CATHODE must be pulled down until equality is achieved. For this to happen, a programmed current, Is, must be passed that is given by:
Is = (V+ – (Vc + 1.24 V))/R1.
Figure 2 illustrates how this relationship can be used (assuming a 5-V rail that’s accurate enough) to make a floated-cathode 4041 regulate a constant current source of:
Is = (5 V – 2.5 V – 1.23 V)/R1 = 1.27 V/R1
It also illustrates how adding a booster transistor Q1 can accommodate applications needing current or power beyond Z1’s modest limits. Notice that Z1’s accuracy will be unimpaired because whatever fraction of Is Q1 diverts around Z1 is summed back in before passing through R1.
Figure 2 The booster transistor Q1 can handle current beyond 4041 max Is and dissipation limits.
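For readers who want to sanity-check the arithmetic before breadboarding, here’s a minimal C sketch of the floated-cathode relationship above. The 5-V rail, 2.5-V Vc, and 1.23-V reference come from the text; the R1 value is a hypothetical placeholder chosen to keep Is within the bare 4041’s 20-mA limit.

```c
#include <stdio.h>

/* Floated-cathode LM4041 current source, per the DI text:
 * Is = (V+ - (Vc + Vref)) / R1, valid while Is > 0, V+ < 15 V,
 * and (without a booster transistor) Is < 20 mA.
 */
static double programmed_current(double v_plus, double vc, double vref, double r1)
{
    return (v_plus - (vc + vref)) / r1;
}

int main(void)
{
    const double v_plus = 5.0;   /* rail voltage, per Figure 2            */
    const double vc     = 2.5;   /* control voltage, per Figure 2         */
    const double vref   = 1.23;  /* LM4041 internal reference, per text   */
    const double r1     = 100.0; /* hypothetical sense resistor, ohms     */

    double is = programmed_current(v_plus, vc, vref, r1);
    printf("Is = %.2f mA (%.2f V across R1 = %.0f ohms)\n",
           is * 1e3, is * r1, r1);
    return 0;
}
```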
Figure 3 shows how Is can be digitally linearly programmed with PWM.
Figure 3 Schematic showing the DAC control of Is. Is = Df amps, where Df = PWM duty factor. The asterisked resistors should be 1% or better.
Incoming 5-Vpp, 10-kHz PWM causes Q2 to switch R5, creating a variable average resistance = R5/Df. Thanks to the 2.5-V Z1 reference, the result is a 0 to 1.22 mA current into Q1’s source. This is summed with a constant 1.22 mA bias from R4 and level shifted by Q1 to make a 1.22 to 2.44 V control voltage, Vc, for current source Z2.
The result is a linear 0- to 1-A output current, Is, into a grounded load, where Is = Df amps. Voltage compliance is 0 to 12 V. The 8-bit-compatible PWM ripple filtering is second-order, using the technique from “Cancel PWM DAC ripple with analog subtraction.”
R3C1 provides the first-stage ripple filter and R7C2 the second. The C1 and C2 values shown are scaled for Fpwm = 10 kHz to provide an 8-bit settling time of 6 ms. If a different PWM frequency is used, scale both capacitors by 10 kHz/Fpwm.
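If you’re retargeting the PWM frequency, this small C helper applies the 10 kHz/Fpwm scaling rule just described. The baseline capacitor values below are hypothetical placeholders standing in for the Figure 3 values, which aren’t reproduced in the text.

```c
#include <stdio.h>

/* Scale a ripple-filter capacitor for a PWM frequency other than the
 * 10-kHz baseline: Cnew = Cshown * (10 kHz / Fpwm).
 */
static double scale_cap(double c_shown_farads, double fpwm_hz)
{
    return c_shown_farads * (10e3 / fpwm_hz);
}

int main(void)
{
    /* Hypothetical baselines; substitute the actual Figure 3 values. */
    const double c1_baseline = 1.0e-6;    /* 1 uF    */
    const double c2_baseline = 0.1e-6;    /* 0.1 uF  */
    const double fpwm        = 25e3;      /* new PWM frequency, Hz */

    printf("C1 -> %.3f uF, C2 -> %.3f uF at Fpwm = %.0f kHz\n",
           scale_cap(c1_baseline, fpwm) * 1e6,
           scale_cap(c2_baseline, fpwm) * 1e6,
           fpwm / 1e3);
    return 0;
}
```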
A hot topic is that Q4 can be called on to dissipate more than 10 W, so don’t skimp on heatsink capacity.
Q3 is a safety shutdown feature. It removes Q1 gate drive when +5 falls below about 3 V, shutting off the current source and protecting the load when controller logic is powered down.
Figure 4 adds zero and span pots to implement a single-pass calibration for best accuracy:
- Set Df = 0% and adjust single turn ZERO trim for zero output current
- Set Df = 100% and adjust single turn CAL trim for 1.0 A output
- Done.
Figure 4 Additional zero and span pots to implement a single-pass calibration for best accuracy.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- PWM-programmed LM317 constant current source
- Low-cost precision adjustable current reference and application
- A negative current source with PWM input and LM337 output
- A high-performance current source
- Simple, precise, bi-directional current source
The post LM4041 voltage regulator impersonates precision current source appeared first on EDN.
Did you put X- and Y-capacitors on your AC input?

X- and Y-capacitors are commonly used to filter AC power-source electromagnetic interference (EMI) noise and are often referred to as safety capacitors. Here is a detailed view of these capacitors, related design practices and regulatory standards, and a profile of supporting power ICs. Bill Schweber also provides a sneak peek into how they operate in AC power line circuits.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- When the AC line meets the CFL/LED lamp
- How digital capacitor ICs ease antenna tuning
- What would you ask an entry-level analog hire?
- Active filtering: Attenuating switching-supply EMI
The post Did you put X- and Y-capacitors on your AC input? appeared first on EDN.
Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost

Within my teardown published last summer of Walmart’s “onn.”-branded original Android TV-based streaming receiver, the UHD Streaming Device:
I mentioned that I already had Google TV operating system-based successors for both the “box” and “stick” Android TV form factors (subsequently dissected by me and published last December) sitting on my shelves awaiting my teardown attention. That time is now, specifically for the onn. Google TV 4K Streaming Box I’d bought at intro in April 2023 for $19.88 (the exact same price as its Android TV-based forebear):
The sizes of the two device generations are near-identical, although it’s near-impossible to find published dimension specs for either device online, only for the retail packaging containing them. That said, a correction is in order: I’d said in my earlier teardown that the Android TV version of the device was 4.9” both long and wide, and 0.8” tall. It’s actually 2.8” (70 mm, to be precise) in both length and width, with a height of ~0.5” (13 mm). And the newer Google TV-based variant is ~3.1” (78 mm) both long and wide and ~0.7” (18 mm) tall.
Here are more “stock” shots of the newer device that we’ll be dissecting today, along with its bundled remote control and other accessories:
Eagle-eyed readers may have already noticed the sole layout difference between the two generations’ devices. The reset switch and status LED are standalone along one side in the original Android TV version, whereas they’re at either side of, and on the same side as, the HDMI connector in the new Google TV variant. The two generations’ remote controls also vary slightly, although I bet the foundation hardware design is identical. The lower right button in the original gave user-access favoritism to HBO Max (previously HBO Go, now known as just “Max”):
whereas now it’s Paramount+ getting the special treatment (a transition which I’m guessing was motivated by the more recent membership partnership between the two companies and implemented via a relabel of that button along with an integrated-software tweak).
Next, let’s look at some “real-life” shots, beginning with the outside packaging:
Note that, versus the front-of-box picture of its precursor that follows, Walmart’s now referring to it as capable of up to “4K” output resolution, versus the previous, less trendy “UHD”:
Also, it’s now called a “box”, versus a “device”. Hold that latter thought until next month…now, back to today’s patient…
The two sides are comparatively info-deficient:
The bottom marks a return to info-rich form:
While the top as usual never fails to elicit a chuckle from yours truly:
Let’s see what’s inside:
That’s quite a complex cardboard assemblage!
The first things you’ll see when you flip up the top flap:
are our patient, currently swathed in protective opaque plastic, and a quick start guide that you can find in PDF form here, both as-usual accompanied in the photo by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes.
Below them, in the lower level of the cardboard assemblage, are the aforementioned remote control and a 1-meter (3.28 ft) HDMI cable:
Here’s the backside of the remote control; note the added sticker (versus its predecessor) above the battery compartment with re-pairing instructions, along with the differing information on the smaller sticker in the upper right corner within the battery compartment:
I realized after typing the previous words that since I hadn’t done a teardown of the remote control last time, I hadn’t taken a picture of its opened backside, either. Fortunately, it was still inhabiting my office, so…here you go!
Also originally located in the lower level of the cardboard assemblage are the AC adapter, an oval-shaped piece of double-sided adhesive for attaching the device to a flat surface, and a set of AAA batteries for the remote control:
Here’s the micro-USB jack that plugs into the on-device power connector:
And here are the power adapter’s specs:
which are comparable, “wall wart” form factor variances aside, to those of its predecessor:
Finally, here are some overview images of our patient, first from above:
Here’s the micro-USB side:
This side’s bare on this generation of the device:
but, as previously mentioned, contained the status LED and reset switch in the prior generation:
They’ve moved one side over this time, straddling the HDMI connector (I realized after taking this shot and subsequently moving on with the disassembly that the status LED was hidden behind the penny; stand by for another look at it shortly!):
The last (left) side, in contrast, is bare in both generations:
Finally, here’s the device from below:
And here’s a closeup of the label, listing (among other things) the FCC ID, 2AYYS-8822K4VTG (no, I don’t know why there are 28 different FCC documents posted for this ID, either!):
Now to get inside. Ordinarily, I’d start out by peeling off that label and seeing if there are any screw heads visible underneath. But since last time’s initial focus on the gap between the two case pieces panned out, I decided to try going down that same path again:
with the same successful outcome (a reminder at the start that we’re now looking at the underside of the inside of the device):
Check out the hefty piece of metal covering roughly half of the interior and linked to the Faraday cage on the PCB, presumably for both thermal-transfer and cushioning purposes, via two spongy pieces still attached to the latter:
I’m also presuming that the metal piece adds rigidity to the overall assembly. So why doesn’t it cover the entirety of the inside? They’re not visible yet, but I’m guessing there are Bluetooth and Wi-Fi antennas somewhere whose transmit and receive potential would have been notably attenuated had there been intermediary metal shielding between them and the outside world:
See those three screws? I’m betting we can get that PCB out of the remaining top portion of the case if we remove them first:
Yep!
Before we get any further, let me show you that status LED that was previously penny-obscured:
It’s not the actual LED, of course; that’s on the PCB. It’s the emissive end of the light guide (aka, light pipe, light tube) visible in the upper left corner of the inside of the upper chassis, with its companion switch “plunger” at upper right. Note, too, that this time one (gray) of the “spongy pieces” ended up stuck to this side’s metal shielding, which once again covers only ~half of the inside area:
The other (pink) “spongy piece” is still stuck to one of the two Faraday cages on the top side of the PCB, now visible for the first time:
In the upper right corner is the aforementioned LED (cluster, actually). At bottom, as previously forecast and unencumbered by intermediary shielding thanks to their locations, are the 2.4-GHz and 5-GHz Wi-Fi antennas. Along the right edge is what I believe to be the PCB-embedded Bluetooth antenna. And as for those Faraday cages, you know what comes next:
They actually came off quite easily, leaving me cautiously optimistic that I might eventually be able to pop them back on and restore this device to full functionality (which I’ll wait to try until after this teardown is published; stay tuned for a debrief on the outcome in the comments):
Let’s zoom in and see what’s under those cage lids:
Within the upper one’s boundary are two notable ICs: a Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM and, to its right, the system’s “brains”, an Amlogic S905Y4 app processor.
And what about the lower cage region?
This one’s an enigma. That it contains the Wi-Fi and Bluetooth transceivers and other circuitry is pretty much a given, considering its proximity to the antennas (among other factors). And it very well could be one and the same as the Askey Computer 8822CS, seemingly with Realtek wireless transceiver silicon inside, that was in the earlier Android TV version of the device. Both devices support the exact same Bluetooth (5.0) and Wi-Fi (2.4/5-GHz 802.11 a/b/g/n/ac MIMO) protocol generations, and the module packaging looks quite similar in both, albeit rotated 90° in one PCB layout versus the other:
That said, unfortunately, there’s no definitively identifying sticker atop the module this time, as existed previously. If it is the same, I hope the manufacturer did a better job with its soldering this time around!
Now let’s flip the PCB back over to the bottom side we’ve already seen before, albeit now freed from its prior case captivity:
I’ll direct your attention first to the now clearly visible reset switch at upper right, along with the now obscured light guide at upper left. I’m guessing that the black spongy material ensures that as much as possible of the light originating at the PCB on the other side makes it outside, versus inefficiently illuminating the device interior instead.
Once again, the Faraday cage lifts off cleanly and easily:
The Faraday cage was previously located atop the PCB’s upper outlined region:
Unsurprisingly, another Samsung K4A8G165WC-BCTD DDR4-2666 8 Gbit SDRAM is there, for 2 GBytes of total system memory.
The region below it, conversely, is another enigma of this design:
Its similar outline to the others suggests that a Faraday cage should have originally been there, too. But it wasn’t; you’ve seen the pictorial proof. Did the assembler forget to include it when building this particular device? Or did the manufacturer end up deciding it wasn’t necessary at all? Dunno. What I do know is that within it is nonvolatile storage, specifically the exact same Samsung KLM8G1GETF-B041 8 GByte eMMC flash memory module that we saw last time!
More generally, what surprises me the most about this design is its high degree of commonality with its predecessor despite its evolved operating system foundation:
- Same Bluetooth and Wi-Fi generations
- Same amount and speed bin of DRAM, albeit from different suppliers, and
- Same amount of flash memory, in the same form factor, from the same supplier
The SoCs are also similar, albeit not identical. The Amlogic S905Y2 seen last time dates from 2018, runs at 1.8 GHz and is a second-generation offering (therefore the “2” at the end). This time it’s the 2022-era Amlogic S905Y4, with essentially the same CPU (quad-core Arm Cortex-A53) and GPU (Mali-G31 MP2) subsystems, and fabricated on the same 12-nm lithography process, albeit running 200 MHz faster (2 GHz). The other notable difference is the 4th-gen (therefore “4” at the end) SoC’s added decoding support for the AV1 video codec, along with both HDR10 and HDR10+ high dynamic range (HDR) support.
Amlogic also offers the Amlogic S905X4; the fundamental difference between “Y” and “X” variants of a particular SoC involves the latter’s integration of wired Ethernet support. This latter chip is found in the high-end onn. Google TV 4K Pro Streaming Device, introduced last year, more sizeable (7.71 x 4.92 x 2.71 in.) than its predecessors, and now normally selling for $49.88, although I occasionally see it on sale for ~$10 less:
The 4K Pro software-exposes two additional capabilities of the 4th-generation Amlogic S905 not enabled in the less expensive non-Pro version of the device: Dolby Vision HDR and Dolby Atmos audio. It also integrates 50% more RAM (to 3 GBytes) and 4x the nonvolatile flash storage (to 32 GBytes), along with advancing wireless connectivity by a generation (Wi-Fi 6: 2.4/5-GHz 802.11ax), embedding a microphone array, and swapping out geriatric micro-USB for USB-C. And although it’s 2.5x the price of its non-Pro sibling, everything’s relative; since Google has now obsoleted the entire Chromecast line, including the HD and 4K versions of the Chromecast with Google TV, the only Google-branded option left is the $99.99 Google TV Streamer successor.
I’ve also got an onn. Google TV 4K Pro Streaming Device sitting here which, near-term, I’ll be swapping into service in place of its Google Chromecast with Google TV (4K) predecessor; stand by for an in-use review, and eventually, I’m sure I’ll be tearing it down, too. Even nearer term, keep an eye out for my teardown of the “stick” form factor onn. Google TV Full HD Streaming Device, currently scheduled to appear at EDN online sometime next month:
For now, I’ll close with some HDMI and micro-USB end shots, both with the front:
and backsides of the PCB pointed “up”:
Along with an invitation for you to share thoughts on anything I’ve revealed and discussed here in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast Ultra: More than just a Stadia consort
The post Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost appeared first on EDN.
How software testing guarantees the absence of bugs

Major industries such as electric vehicles (EVs), Internet of Things (IoT), aeronautics, and railways have strict, well-established processes to ensure they can maintain high safety standards throughout their operations. This level of precision, compliance, and enforcement is particularly important for safety-critical industries such as avionics, energy, space and defense, where high emphasis is placed on the development and validation of embedded software that contemporary and newly developed vehicles and vessels rely on to ensure operational safety.
It’s rare for a software glitch on its own to cause a catastrophic event. However, as embedded software systems become more complex, so too grows the onus on developers to make sure their software can operate bug-free within that complexity.
That’s because the increasing interconnectivity between multiple information systems has transformed critical domains like medical devices, infrastructure, transportation, and nuclear engineering. Then there are issues like asset security, risk management, and security architecture that require safe and secure operation of equipment and systems. This necessity for safety is acute not only from an operational-safety perspective, but also in terms of cybersecurity.
However, despite the rigorous testing processes and procedures already in place, subtle bugs are still missed by testing techniques that don’t provide full coverage, and those bugs embed themselves deeply within operational environments. They are unacceptable errors that cannot be allowed to remain undetected and potentially metastasize, but finding and rooting them out is still a major challenge for most.
While the software driving embedded compute systems becomes more complex and, therefore, more vulnerable, increasingly strict safety regulations designed to protect human lives are coming into force. Software development teams therefore need to devise innovative solutions that enable them to proactively address safety and security issues, and to do so quickly enough to meet demand without compromising test-result integrity.
This need is particularly significant among critical-software companies that depend heavily on traditional testing methods. Even when following highly focused, tried-and-true testing processes, many software development engineers harbor a nagging concern that a bug could have slipped through undetected.
That’s because bugs sometimes do slip through, leaving many quality assurance and product managers, especially in critical industries, losing sleep over whether they have done enough to ensure software safety.
One major software supplier in the aerospace industry recently faced such a dilemma when it approached TrustInSoft with a problem.
A customer of the software supplier had discovered an undetected bug in one of several software modules that had been supplied to it, and the software was already fully operational. Once informed of the issue and directed to resolve it, the supplier needed months to locate, understand, and ultimately rectify the bug, resulting in substantial costs for bug detection and software reengineering. The situation also had a negative impact on the supplier’s reputation and its business relationships with other customers.
That’s when they realized they needed a better, more conclusive way to ward off such incursions and do so confidently.
As a first step, the software supplier consulted TrustInSoft to see whether it was possible to confirm not only that the bug that had taken months to identify was truly gone, but also that no others were lurking undetected.
In just a few days, analysis revealed several previously undiscovered bugs in addition to what had caused the initial alarm. Each of these subtle bugs would have been extremely difficult, if not impossible, to detect using conventional methods, which is most likely why they were missed.
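To make the “subtle bug” notion concrete, here’s a minimal C sketch of the class of defect being described; it is purely illustrative (it isn’t drawn from the supplier’s code) and behaves correctly for almost every input, which is exactly why bounded test suites tend to miss it while exhaustive analysis does not.

```c
#include <stdio.h>
#include <string.h>

/* Copies a sensor label into a fixed-size record. The buffer is sized for
 * the label text but not for its terminating NUL, so a maximum-length
 * label writes one byte past the end of 'label' -- undefined behavior
 * that ordinary tests rarely trigger and that may silently corrupt
 * adjacent data instead of crashing.
 */
#define LABEL_LEN 8

struct record {
    char label[LABEL_LEN];
    int  calibration;   /* may be silently clobbered on the overflow path */
};

static void set_label(struct record *r, const char *name)
{
    if (strlen(name) <= LABEL_LEN) {   /* off-by-one: should be < LABEL_LEN */
        strcpy(r->label, name);        /* writes strlen+1 bytes when equal  */
    }
}

int main(void)
{
    struct record r = { "", 42 };
    set_label(&r, "PRESSURE");         /* exactly 8 characters */
    printf("calibration = %d\n", r.calibration);  /* may no longer be 42 */
    return 0;
}
```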
TrustInSoft Analyzer’s use of formal methods gives developers definitive proof that their source code is free from memory-safety issues, runtime errors, and security vulnerabilities. The analyzer’s technology is based on rigorously specified mathematical models that verify a software’s properties and behaviors against precisely defined specifications. It can, as a result, identify every potential security vulnerability within the source code.
The integration of formal methods enables users to conduct truly exhaustive analyses. What that means in practice is that complex formal method analysis techniques can be applied to—and keep pace with—increasingly sophisticated software packages. For many organizations, this intensive verification and validation process is now a requirement for safety and security-critical software development teams.
A significant advantage of formal method tools over traditional static analysis tools for both enterprise and open-source testing is the ability to efficiently perform the equivalent of billions of tests in a single run, which is unprecedented in conventional testing environments.
Critical industries provide essential services that have direct importance to our lives. But any defects in the software code at the heart of many of those industries can pose serious risks to human safety. TrustInSoft Analyzer’s ability to mathematically guarantee the absence of bugs in critical software is therefore essential to establish and maintain operational safety before it’s too late.
Caroline Guillaume is CEO of TrustInSoft.
Related Content
- Embedded Software Testing Basics
- Don’t Let Assumptions Wreck Your Code
- Software Testing Needs More Automation
- 5 Software Testing Challenges (and How to Avoid Them)
- Performance-Regression Pitfalls Every Project Should Avoid
The post How software testing guarantees the absence of bugs appeared first on EDN.
Architectural opportunities propel software-defined vehicles forward

At the end of last year, the global software-defined vehicle (SDV) market size was valued at $49.3 billion. With a compound annual growth rate exceeding 25%, the industry is set to skyrocket over the next decade. But this anticipated growth hinges on automakers addressing fundamental architectural and organizational barriers. To me, 2025 will be a pivotal year for SDVs, provided the industry focuses on overcoming these challenges rather than chasing incremental enhancements.
Moving beyond the in-cabin experience
In recent years, innovations in the realm of SDVs have primarily focused on enhancing passenger experience with infotainment systems, high-resolution touchscreens, voice-controlled car assistance, and personalization features ranging from seat positions to climate control, and even customizable options based on individual profiles.
While enhancements of these sorts have elevated the in-cabin experience to essentially replicate that of a smartphone, the next frontier in the automotive revolution lies in reimagining the very architecture of vehicles.
To truly advance the future of SDVs, I believe OEMs must partner with technology companies to architect configurable systems that enable SDV features to be unlocked on demand, unified infrastructures that optimize efficiency, and the integration of software and hardware teams at organizations. Together, these changes signal a fundamental redefinition of what it means to build and operate a vehicle in the era of software-driven mobility.
1. Cost of sluggish software updates
The entire transition to SDVs was built on the premise that OEMs could continuously improve their products, deploy new features, and offer better user experience throughout the vehicle’s lifecycle, all without having to upgrade the hardware. This has created a new business model of automakers depending on software as a service to drive revenue streams. Companies like Apple have shelved plans to build a car, instead opting to control digital content within vehicles with Apple CarPlay. As automakers rely on users purchasing software to generate revenue, the frequency of software updates has risen. However, these updates introduce a new set of challenges to both vehicles and their drivers.
When over-the-air updates are slow or poorly executed, they can delay functionality in other areas of the vehicle by rendering certain features unavailable until the software update is complete. Lacking specific features not only has significant implications for a user’s convenience but also surfaces safety concerns. In other instances, drivers could experience downtime where the vehicle is unusable while updates are installed, as the process may require the car to remain parked and powered off.
Rapid reconfiguration of SDV software
Modern users will soon ditch car manufacturers that continue to deliver slow over-the-air updates that impair the use of their cars, as seamless and convenient functionality remains a priority. To stay competitive, OEMs need to upgrade their vehicle architectures with configurable platforms that grant users access to features on the fly, without friction.
Advanced semiconductor solutions will play a critical role in this transformation, by facilitating the seamless integration of sophisticated electronic systems like advanced driver-assistance systems (ADAS) and in-vehicle entertainment platforms. These technological advancements are essential for delivering enhanced functionality and connected experiences that define next-generation SDVs.
To support this shift, cutting-edge semiconductor technologies such as fully-depleted silicon-on-insulator (FD-SOI) and Fin field-effect transistor (FinFET) with magnetoresistive random access memory (MRAM) are emerging as key enablers. These innovations enable the rapid reconfiguration of SDVs, significantly reducing update times and minimizing disruption for drivers. High-speed, low-power non-volatile memory (NVM) further accelerates this progress, facilitating feature updates in a fraction of the time required by traditional flash memory. Cars that evolve as fast as smartphones, giving users access to new features instantly and painlessly, will enhance customer loyalty and open up new revenue streams for automakers (Figure 1).
Figure 1 Cars that evolve as fast as smartphones using key semiconductor technologies such as FD-SOI, FinFET, and MRAM will give users access to new features instantly and painlessly. Source: Getty Images
2. Inefficiencies of distinct automotive domains
The present design of automotive architecture also lends itself to challenges, as today’s vehicles are built around a central architecture that is split into distinct domains: motion control, ADAS, and entertainment. These domains function independently, each with their own control unit.
This current domain-based system has led to inefficiencies across the board. With domains housed in separate infrastructures, there are increased costs, weight, and energy consumption associated with computing. Especially as OEMs increasingly integrate new software and AI into the systems of SDVs, the domain architecture of cars presents the following challenges:
- Different software modules must run on the same hardware without interference.
- Software portability across different hardware in automotive systems is often limited.
- AI is the least hardware-agnostic component in automotive applications, complicating integration without close collaboration between hardware and software systems.
The inefficiencies of domain-based systems will continue to be amplified as SDVs become more sophisticated, with an increasing reliance on AI, connectivity, and real-time data processing, highlighting the need for upgrades to the architecture.
Optimizing a centralized architecture
OEMs are already trending toward a more unified hardware structure by moving from distinct silos to an optimized central architecture under a single roof, and I anticipate a stronger shift toward this trend in the coming years. By sharing infrastructure like cooling systems, power supplies, and communication networks, this shift is accompanied by greater efficiency, both lowering costs and improving performance.
As we look to the future, the next logical step in automotive innovation will be to merge domains into a single system-on-chip (SoC) to easily port software between engines, reducing R&D costs and driving further innovation. In addition, chiplet technology ensures the functional safety of automotive systems by maintaining freedom from interference, while also enabling the integration of various AI engines into SDVs, paving the way for more agile innovation without overhauling entire vehicles (Figure 2).
Figure 2 Merging multiple domains into a singular, central SoC is key to realizing SDVs. This architectural shift inherently relies upon chiplet technology to ensure the functional safety of automotive systems. Source: Getty Images
3. The reorganization companies must face
Many of these software and hardware architectural challenges stem from the current organization of companies in the industry. Historically, automotive companies have operated in silos, with hardware and software development functioning as distinct, and often disconnected entities. This legacy approach is increasingly incompatible with the demands of SDVs.
Bringing software to the forefront
Moving forward, automakers must shift their focus from being hardware-centric manufacturers to becoming software-first innovators. Similar to technology companies, automakers must adopt new business models that allow for continuous improvement and rapid iteration. This involves restructuring organizations to promote cross-functional collaboration, bringing traditionally isolated departments together to ensure seamless integration between hardware and software components.
While restructuring any business requires significant effort, this transformation will also reap meaningful benefits. By prioritizing software first, automakers will be able to deliver vehicles with scalable, future-proofed architectures while also keeping customers satisfied as seamless over-the-air updates remain a defining factor of the SDV experience.
Semiconductors: The future of SDV architecture
The SDV revolution stands at a crossroads; while the in-cabin experience has made leaps in advancements, the architecture of vehicles must evolve to meet future consumer demands. Semiconductors will play an essential role in the future of SDV architecture, enabling seamless software updates without disruption, centralizing domains to maximize efficiency, and driving synergy between software and hardware teams.
Sudipto Bose is Senior Director of the Automotive Business Unit at GlobalFoundries.
Related Content
- CES 2025: Wirelessly upgrading SDVs
- CES 2025: Moving toward software-defined vehicles
- Software-defined vehicle (SDV): A technology to watch in 2025
- Will open-source software come to SDV rescue?
The post Architectural opportunities propel software-defined vehicles forward appeared first on EDN.
Why optical technologies matter in machine vision systems

Machine vision systems are becoming increasingly common across multiple industries. Manufacturers use them to streamline quality control, self-driving vehicles implement them to navigate, and robots rely on them to work safely alongside humans. Amid these rising use cases, design engineers must focus on the importance of reliable and cost-effective optical technologies.
While artificial intelligence (AI) algorithms may take most of the spotlight in machine vision, optical systems providing the data these models analyze are crucial, too. Therefore, by designing better camera and sensor arrays, design engineers can foster optimal machine vision on several fronts.
Optical systems are central to machine vision accuracy before the underlying AI model starts working. These algorithms are only effective when they have sufficient relevant data for training, and this data requires cameras to capture it.
Some organizations have turned to using AI-generated synthetic data in training, but this is not a perfect solution. These images may contain errors and hallucinations, hindering the model’s accuracy. Consequently, they often require real-world information to complement them, which must come from high-quality sources.
Developing high-resolution camera technologies with large dynamic ranges gives AI teams the tools necessary to capture detailed images of real-world objects. As a result, it becomes easier to train more reliable machine vision models.
Expanding machine vision applications
Machine vision algorithms need high-definition visual inputs during deployment. Even the most accurate model can produce inconsistent results if the images it analyzes aren’t clear or consistent enough.
External factors like lighting can limit measurement accuracy, so designers must pay attention to these considerations in their optical systems, not just the cameras themselves. Sufficient light from the right angles to minimize shadows and sensors to adjust the focus accordingly can impact reliability.
Next, video data and still images are not the only optical inputs to consider in a machine vision system. Design engineers can also explore a variety of technologies to complement conventional visual data.
For instance, lidar is an increasingly popular choice. More than half of all new cars today come with at least one radar sensor to enable functions like lane departure warnings. So, lidar is following a similar trajectory as self-driving features grow.
Complementing a camera with lidar sensors can provide these machine vision systems with a broader range of data. More input diversity makes errors less likely, especially when operating conditions may vary. Laser measurements and infrared cameras could likewise expand the roles machine vision serves.
The demand for high-quality inputs means the optical technologies in a machine vision system are often some of its most expensive components. By focusing on developing lower-cost solutions that maintain acceptable quality levels, designers can make them more accessible.
It’s worth noting that advances in camera technology have already brought the cost of such a solution from $1 million to $100,000 on the high end. Further innovation could have a similar effect.
Machine vision needs reliable optical technologies
AI is only as accurate as its input data. So, machine vision needs advanced optical technologies to reach its full potential. Design engineers hoping to capitalize on this field should focus on optical components to push the industry forward.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- What Is Machine Vision All About?
- Know Your Machine Vision Components
- Video Cameras and Machine Vision: A Technology Overview
- How Advancements in Machine Vision Propel Factory Revolution
- Machine Vision Approach Addresses Limitations of Standard 3D Sensing Technologies
The post Why optical technologies matter in machine vision systems appeared first on EDN.
Automotive chips improve ADAS reliability

TI has expanded its automotive portfolio with a high-speed lidar laser driver, BAW-based clocks, and a mmWave radar sensor. These devices support the development of adaptable ADAS for safer, more automated driving.
The LMH13000 is claimed to be the first laser driver with an ultra-fast 800-ps rise time, enabling up to 30% longer distance measurements than discrete implementations and enhancing real-time decision making. It integrates LVDS, CMOS, and TTL control signals, eliminating the need for large capacitors or additional external circuitry. The device delivers up to 5 A of adjustable output current with just 2% variation across an ambient temperature range of -40°C to +125°C.
By leveraging bulk acoustic wave (BAW) technology, the CDC6C-Q1 oscillator and the LMK3H0102-Q1 and LMK3C0105-Q1 clock generators provide 100× greater reliability than quartz-based clocks, with a failure-in-time (FIT) rate as low as 0.3. These devices improve clocking precision in next-generation vehicle subsystems.
TI’s AWR2944P front and corner radar sensor builds on the AWR2944 platform, offering a higher signal-to-noise ratio, enhanced compute performance, expanded memory, and an integrated radar hardware accelerator. The accelerator enables the system’s MCU and DSP to perform machine learning tasks for edge AI applications.
Preproduction quantities of the LMH13000, CDC6C-Q1, LMK3H0102-Q1, LMK3C0105-Q1, and AWR2944P are available now on TI.com. Additional output current options and an automotive-qualified version of the LMH13000 are expected in 2026.
The post Automotive chips improve ADAS reliability appeared first on EDN.
PMIC fine-tunes power for MPUs and FPGAs

Designed for high-end MPU and FPGA systems, the Microchip MCP16701 PMIC integrates eight 1.5-A buck converters that can be paralleled and are duty cycle-capable. It also includes four 300-mA LDO regulators and a controller to drive external MOSFETs.
The MCP16701 enables dynamic VOUT adjustment across all converters, from 0.6 V to 1.6 V in 12.5-mV steps and from 1.6 V to 3.8 V in 25-mV steps. This flexibility allows precise power tuning for specific requirements in industrial computing, data servers, and edge AI, enhancing overall system efficiency.
Housed in a compact 8×8-mm VQFN package, the PMIC reduces board area by 48% and lowers component count to less than 60% compared to discrete designs. It supports Microchip’s PIC64-GX MPU and PolarFire FPGAs with a configurable feature set and operates from -40°C to +105°C. An I2C interface facilitates communication with other system components.
The MCP16701 costs $3 each in lots of 10,000 units.
The post PMIC fine-tunes power for MPUs and FPGAs appeared first on EDN.
PXI testbench strengthens chip security testing

The DS1050A Embedded Security Testbench from Keysight is a scalable PXI-based platform for advanced side-channel analysis (SCA) and fault injection (FI) testing. Designed for modern chips and embedded devices, it builds on the Device Vulnerability Analysis product line, offering up to 10× higher test effectiveness to help identify and mitigate hardware-level security threats.
This modular platform combines three core components—the M9046A PXIe chassis, M9038A PXIe embedded controller, and Inspector software. It integrates key tools, including oscilloscopes, interface equipment, amplifiers, and trigger generators, into a single chassis, reducing cabling and improving inter-module communication speed.
The 18-slot M9046A PXIe chassis delivers up to 1675 W of power and supports 85 W of cooling per slot, accommodating both Keysight and third-party test modules. Powered by an Intel Core i7-9850HE processor, the M9038A embedded controller provides the computing performance required for complex tests. Inspector software simulates diverse fault conditions, supports data acquisition, and enables advanced cryptanalysis across embedded devices, chips, and smart cards.
For more information on the DS1050A Embedded Security Testbench, or to request a price quote, click the product page link below.
The post PXI testbench strengthens chip security testing appeared first on EDN.
Sensor brings cinematic HDR video to smartphones

Omnivision’s OV50X CMOS image sensor delivers movie-grade video capture with ultra-high dynamic range (HDR) for premium smartphones. Based on the company’s TheiaCel and dual conversion gain (DCG) technologies, the color sensor achieves single-exposure HDR approaching 110 dB—reportedly the highest available in smartphones.
The OV50X is a 50-Mpixel sensor with a 1.6-µm pixel pitch and an 8192×6144 active array in a 1-in. optical format. It supports 4-cell binning, providing 12.5-Mpixel output at up to 180 frames/s, or 60 frames/s with three-exposure HDR. The sensor also enables 8K video with dual analog gain HDR and on-sensor crop-zoom capability.
TheiaCel employs lateral overflow integration capacitor (LOFIC) technology in combination with Omnivision’s proprietary DCG HDR to capture high-quality images and video in difficult lighting conditions. Quad phase detection (QPD) with 100% sensor coverage enables fast, precise autofocus across the entire frame—even in low light.
The OV50X image sensor is currently sampling, with mass production slated for Q3 2025.
The post Sensor brings cinematic HDR video to smartphones appeared first on EDN.
GaN transistors integrate Schottky diode

Medium-voltage CoolGaN G5 transistors from Infineon include a built-in Schottky diode to minimize dead-time losses and enhance system efficiency. The integrated diode also streamlines power stage design and helps reduce BOM cost.
In hard-switching designs, GaN devices can suffer from higher power losses due to body diode behavior, especially with long controller dead times. CoolGaN G5 transistors address this by integrating a Schottky diode, improving efficiency across applications such as telecom IBCs, DC/DC converters, USB-C chargers, power supplies, and motor drives.
GaN transistor reverse conduction voltage (VRC) depends on the threshold voltage (VTH) and OFF-state gate bias (VGS), as there is no body diode. Since VTH is typically higher than the turn-on voltage of silicon diodes, reverse conduction losses increase in third-quadrant operation. The CoolGaN transistor reduces these losses, improves compatibility with high-side gate drivers, and allows broader controller compatibility due to relaxed dead-time.
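To put rough numbers on that third-quadrant penalty, here’s a back-of-the-envelope C estimate of dead-time conduction loss. The P = VRC × Iload × tdead × fsw relationship is generic, and every numeric value below is a hypothetical placeholder rather than a CoolGaN specification.

```c
#include <stdio.h>

/* Rough dead-time (third-quadrant) conduction loss for one switch:
 * P = Vrc * Iload * (total dead time per period) * Fsw
 * A lower reverse-conduction voltage (e.g., via an integrated Schottky
 * diode) reduces this loss proportionally.
 */
static double dead_time_loss(double vrc, double i_load,
                             double t_dead_total_s, double fsw_hz)
{
    return vrc * i_load * t_dead_total_s * fsw_hz;
}

int main(void)
{
    const double i_load = 20.0;      /* A, hypothetical load current         */
    const double t_dead = 2 * 20e-9; /* s, two 20-ns dead times per period   */
    const double fsw    = 1.0e6;     /* Hz, hypothetical switching frequency */

    /* Hypothetical reverse-conduction voltages, without and with the diode */
    printf("No Schottky   (Vrc ~ 4 V): %.2f W\n",
           dead_time_loss(4.0, i_load, t_dead, fsw));
    printf("With Schottky (Vrc ~ 1 V): %.2f W\n",
           dead_time_loss(1.0, i_load, t_dead, fsw));
    return 0;
}
```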
The first device in the CoolGaN G5 series with an integrated Schottky diode is a 100-V, 1.5-mΩ transistor in a 3×5-mm PQFN package. Engineering samples and a target datasheet are available upon request.
The post GaN transistors integrate Schottky diode appeared first on EDN.
Shoot-through

This phenomenon has nothing to do with “Gunsmoke” or with “Have Gun, Will Travel”. (Do you remember those old TV shows?) The phrase “shoot-through” describes unwanted and possibly destructive pulses of current flowing through power semiconductors in certain power supply designs.
In half-bridge and full-bridge power inverters, we have one pair (half-bridge) or two pairs (full-bridge) of power switching devices connected in series from a voltage rail to that rail’s return. Those devices could be power MOSFETs, IGBTs, or whatever, but the requirement in each case is the same: the two devices in each pair must turn on and off in alternating fashion. If the upper one is on, the lower one is off. If the upper one is off, the lower one is on.
The circuit board seen in Figure 1 was one such design based on a full-bridge power inverter, and it had a shoot-through issue.
Figure 1 A full-bridge circuit board with a shoot-through issue and the test arrangement used to assess it.
A super simplified SPICE simulation shows conceptually what was going amiss with that circuit board, Figure 2.
Figure 2 A SPICE simulation that conceptually walks through the shoot-through problem occurring on the circuit in Figure 1.
S1 represents the board’s Q1 and Q2 upper switches and S2 represents the board’s Q4 and Q3 lower switches. At each switching transition, there was a brief moment when one switch had not quite turned off by the time its corresponding switch had turned on. With both switching devices on at the same time, however brief that “same” time was, there would be a pulse of current flowing from the board’s rail through the two switches and into the board’s rail return. That current pulse would be of essentially unlimited magnitude, and the two switching devices could and would suffer damage.
Electromagnetic interference issues arose as well, but that’s a separate discussion.
Old hands will undoubtedly recognize the following, but let’s take a look at the remedy shown in Figure 3.
Figure 3 Shoot-through problem solved by introducing two diodes to speed up the switches’ turn-off times.
The capacitors C1 and C2 represent the input gate capacitances of the power MOSFETs that served as the switches. The shoot-through issue would arise when one of those capacitances was not fully discharged before the other capacitance got raised to its own full charge. Adding two diodes sped up the capacitance discharge times so that essentially full discharge was achieved for each FET before the other one could turn on.
Having thus prevented simultaneous turn-ons, the troublesome current pulses on that circuit board were eliminated.
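For a feel for why the diodes matter, here’s a back-of-the-envelope C calculation comparing gate-discharge time through a gate resistor versus through a diode path that bypasses most of that resistance at turn-off. The first-order RC model and every component value are my own illustrative assumptions, not measurements from the Figure 1 board.

```c
#include <stdio.h>
#include <math.h>

/* First-order estimate of how long a MOSFET gate (input capacitance C)
 * takes to fall from the drive voltage down to the threshold voltage
 * through a resistance R: t = R * C * ln(Vdrive / Vth).
 */
static double discharge_time(double r_ohms, double c_farads,
                             double v_drive, double v_th)
{
    return r_ohms * c_farads * log(v_drive / v_th);
}

int main(void)
{
    const double c_iss   = 4e-9;  /* F, hypothetical gate input capacitance */
    const double v_drive = 12.0;  /* V, assumed gate drive amplitude        */
    const double v_th    = 3.0;   /* V, assumed gate threshold              */

    double t_slow = discharge_time(47.0, c_iss, v_drive, v_th); /* resistor */
    double t_fast = discharge_time(4.7,  c_iss, v_drive, v_th); /* diode    */

    printf("Turn-off via 47-ohm gate resistor:    %.0f ns\n", t_slow * 1e9);
    printf("Turn-off via diode bypass (~4.7 ohm): %.0f ns\n", t_fast * 1e9);
    return 0;
}
```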
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Shoot-thru suppression
- Tip of the Week: How to best implement a synchronous buck converter
- MOSFET Qrr: Ignore at your peril in the pursuit of power efficiency
- EMI and circuit components: Where the rubber meets the road
The post Shoot-through appeared first on EDN.
Addressing hardware failures and silent data corruption in AI chips

Meta trained one of its AI models, called Llama 3, in 2024 and published the results in a widely covered paper. During a 54-day period of pre-training, Llama 3 experienced 466 job interruptions, 419 of which were unexpected. Upon further investigation, Meta learned 78% of those hiccups were caused by hardware issues such as GPU and host component failures.
Hardware issues like these don’t just cause job interruptions. They can also lead to silent data corruption (SDC), causing unwanted data loss or inaccuracies that often go undetected for extended periods.
While Meta’s pre-training interruptions were unexpected, they shouldn’t be entirely surprising. AI models like Llama 3 have massive processing demands that require colossal computing clusters. For training alone, AI workloads can require hundreds of thousands of nodes and associated GPUs working in unison for weeks or months at a time.
The intensity and scale of AI processing and switching create a tremendous amount of heat, voltage fluctuations and noise, all of which place unprecedented stress on computational hardware. The GPUs and underlying silicon can degrade more rapidly than they would under normal (or what used to be normal) conditions. Performance and reliability wane accordingly.
This is especially true for sub-5 nm process technologies, where silicon degradation and faulty behavior are observed upon manufacturing and in the field.
But what can be done about it? How can unanticipated interruptions and SDC be mitigated? And how can chip design teams ensure optimal performance and reliability as the industry pushes forward with newer, bigger AI workloads that demand even more processing capacity and scale?
Ensuring silicon reliability, availability and serviceability (RAS)
Certain AI players like Meta have established monitoring and diagnostics capabilities to improve the availability and reliability of their computing environments. But with processing demands, hardware failures and SDC issues on the rise, there is a distinct need for test and telemetry capabilities at deeper levels—all the way down to the silicon and multi-die packages within each XPU/GPU as well as the interconnects that bring them together.
The key is silicon lifecycle management (SLM) solutions that help ensure end-to-end RAS, from design and manufacturing to bring-up and in-field operation.
With better visibility, monitoring, and diagnostics at the silicon level, design teams can:
- Gain telemetry-based insights into why chips are failing or why SDC is occurring.
- Identify voltage or timing degradation, overheating, and mechanical failures in silicon components, multi-die packages, and high-speed interconnects.
- Conduct more precise thermal and power characterization for AI workloads.
- Detect, characterize, and resolve radiation, voltage noise, and mechanism failures that can lead to undetected bit flips and SDC.
- Improve silicon yield, quality, and in-field RAS.
- Implement reliability-focused techniques—like triple modular redundancy and dual core lock step—during the register-transfer level (RTL) design phase to mitigate SDC.
- Establish an accurate pre-silicon aging simulation methodology to detect sensitive or vulnerable circuits and replace them with aging-resilient circuits.
- Improve outlier detection on reliability models, which helps minimize in-field SDC.
Silicon lifecycle management (SLM) solutions help ensure end-to-end reliability, availability, and serviceability. Source: Synopsys
An SLM design example
SLM IP and analytics solutions help improve silicon health and provide operational metrics at each phase of the system lifecycle. This includes environmental monitoring for understanding and optimizing silicon performance based on the operating environment of the device; structural monitoring to identify performance variations from design to in-field operation; and functional monitoring to track the health and anomalies of critical device functions.
Below are the key features and capabilities that SLM IP provides:
- Process, voltage and temperature monitors
  - Help ensure optimal operation while maximizing performance, power, and reliability.
  - Highly accurate and distributed monitoring throughout the die, enabling thermal management via frequency throttling.
- Path margin monitors
  - Measure timing margin of 1000+ synthetic and functional paths (in-test and in-field).
  - Enable silicon performance optimization based on actual margins.
  - Automated path selection, IP insertion, and scan generation.
- Clock and delay monitors
  - Measure the delay between the edges of one or more signals.
  - Check the quality of the clock duty cycle.
  - Track memory read access time using built-in self-test (BIST).
  - Characterize digital delay lines.
- UCIe monitor, test and repair
  - Monitor signal integrity of die-to-die UCIe lane(s).
  - Generate algorithmic BIST patterns to detect interconnect fault types, including lane-to-lane crosstalk.
  - Perform cumulative lane repair with redundancy allocation (at manufacturing and in the field).
- High-speed access and test
  - Enable testing over functional interfaces (PCIe, USB and SPI).
  - Apply to in-field operation as well as wafer sort, final test, and system-level test.
  - Can be used in conjunction with automated test equipment.
  - Help conduct in-field remote diagnoses and lower-cost test via reduced pin count.
- HBM external test and repair
  - Comprehensive, silicon-proven DRAM stack test, repair, and diagnostics engine.
  - Support third-party HBM DRAM stack providers.
  - Provide high-performance die-to-die interconnect test and repair support.
  - Operate in conjunction with HBM PHY and support a range of HBM protocols and configurations.
- SLM hierarchical subsystem
  - Automated hierarchical SLM and test manageability solution for system-on-chips (SoCs).
  - Automated integration and access of all IP/cores with in-system scheduling.
  - Pre-validated, ready ATE patterns with pattern porting.
Silicon test and telemetry in the age of AI
With the scale and processing demands of AI devices and workloads on the rise, system reliability, silicon health and SDC issues are becoming more widespread. While there is no single solution or antidote for avoiding these issues, deeper and more comprehensive test, repair, and telemetry—at the silicon level—can help mitigate them. The ability to detect or predict in-field chip degradation is particularly valuable, enabling corrective action before sudden or catastrophic system failures occur.
Delivering end-to-end visibility through RAS, silicon test, repair, and telemetry will be increasingly important as we move toward the age of AI.
Shankar Krishnamoorthy is chief product development officer at Synopsys.
Krishna Adusumalli is an R&D engineer at Synopsys.
Jyotika Athavale is an architecture engineering director at Synopsys.
Yervant Zorian is chief architect at Synopsys.
Related Content
- Uncovering Silent Data Errors with AI
- 11 steps to successful hardware troubleshooting
- Self-testing in embedded systems: Hardware failure
- Understanding and combating silent data corruption
- Test solutions to confront silent data corruption in ICs
The post Addressing hardware failures and silent data corruption in AI chips appeared first on EDN.
Photo tachometer sensor accommodates ambient light

Tachometry, the measurement of the rotational speed of spinning objects, is a common application. Some of those objects, however, have quirky aspects that make them extra interesting, even scary. One such category includes outdoor noncontact sensing of large, fast, and potentially hazardous objects like windmills, waterwheels, and aircraft propellers. The tachometer peripheral illustrated in Figure 1 implements optical sensing using available ambient light, provides a logic-level signal to a microcontroller digital input, and is easily adaptable to different light levels and mechanical contexts.
Figure 1 Logarithmic contrast detection accommodates several decades of variability in available illumination.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Safe sensing of large rotating objects is best done from a safe (large) distance, and passive available-light optical methods are the obvious solution. Unless elaborate lens systems are used in front of the detector, the optical signal is apt to have a relatively low amplitude due to the tendency of the rotating object (propeller blade, etc.) to fill only a small fraction of the field of view of a simple detector. This tachometer (Figure 1) makes do with an uncomplicated detector (phototransistor Q1 with a simple light shield) by following the detector with a high-gain, AC-coupled, logarithmic threshold detector.
Q1’s photocurrent produces a signal across Q2 and Q3 that varies by ~500 µV pp for every 1% change in incident light intensity and is given, roughly (neglecting various tempcos), by:
V ~ 0.12 log10(Iq1/Io)
Io ~ 10 fA
This approximate log relationship holds over a range of nanoamps to milliamps of photocurrent and therefore provides reliable circuit operation despite several orders of magnitude of variation in available light intensity. A1 and the surrounding discrete components provide high-gain (80 dB) amplification that presents a 5-Vpp square wave to the attached microcontroller DIO pin.
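As a quick numeric check of that relationship, the short Python sketch below (the function name and the example photocurrents are mine, purely illustrative) evaluates the formula above and confirms that a 1% change in light works out to roughly 500 µV regardless of the absolute photocurrent.

```python
# Quick numeric check of the logarithmic detector response described above.
# The 0.12*log10(I/Io) relationship and Io ~ 10 fA come from the article; the
# photocurrent values below are arbitrary illustrations.
import math

IO = 10e-15  # ~10 fA

def v_log(i_photo):
    """Approximate detector voltage across Q2/Q3 for a given photocurrent."""
    return 0.12 * math.log10(i_photo / IO)

for i in (1e-9, 1e-6, 1e-3):          # 1 nA, 1 uA, 1 mA of photocurrent
    dv = v_log(i * 1.01) - v_log(i)   # effect of a 1% change in light
    print(f"I = {i:8.0e} A  V = {v_log(i):5.3f} V  dV per 1%: {dv*1e6:5.0f} uV")
```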
Programming of the I/O pin internal logic for pulse counting allows a simple software routine to divide the accumulated count by the associated time interval and by the number of counted optical features of the rotating object (e.g., number of blades on the propeller) to produce an accurate RPM reading.
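Here is a minimal sketch of that count-to-RPM arithmetic; the function name, the two-second gate interval, and the three-blade example are illustrative assumptions, not code from any particular firmware.

```python
# Minimal sketch of the RPM arithmetic described above; names and numbers are
# illustrative, not taken from any particular firmware.
def rpm_from_counts(pulse_count, interval_s, features_per_rev):
    """Convert an accumulated pulse count into revolutions per minute."""
    revs = pulse_count / features_per_rev     # optical features seen per revolution
    return revs / interval_s * 60.0           # revolutions per second -> per minute

# Example: 300 pulses in 2 s from a 3-blade propeller -> 3000 RPM
print(rpm_from_counts(300, 2.0, 3))
```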
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Analyze mechanical measurements with digitizers and software
- Monitoring, control, and protection options in DC fans for air cooling
- Motor controller operates without tachometer feedback
- Small Tachometer
- Sparkplug Wire Sensor & Digital Tachometer – Getting Started
The post Photo tachometer sensor accommodates ambient light appeared first on EDN.
How NoC architecture solves MCU design challenges

Microcontrollers (MCUs) have undergone a remarkable transformation, evolving from basic controllers into specialized processing units capable of handling increasingly complex tasks. Once confined to simple command execution, they now support diverse functions that require rapid decision-making, heightened security, and low-power operation.
Their role has expanded across industries, from managing complex control systems in industrial automation to supporting safety-critical vehicle applications and power-efficient operations in connected devices.
As MCUs take on greater workloads, the conventional bus-based interconnects that once sufficed now limit performance and scalability. Adding artificial intelligence (AI) accelerators, machine learning technology, reconfigurable logic, and secure processing elements demands a more advanced on-chip communication infrastructure.
To meet these needs, designers are adopting network-on-chip (NoC) architectures, which provide a structured approach to data movement, alleviating congestion and optimizing power efficiency. Compared to traditional crossbar-based interconnects, NoCs reduce routing congestion through packetization and serialization, enabling more efficient data flow while reducing wire count.
This is how efficient packetization works in network-on-chip (NoC) communications. Source: Arteris
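As a toy illustration of the packetization idea (not any vendor’s actual flit format), the sketch below splits one wide write transaction into a header flit plus narrow payload flits that can be serialized over a shared 32-bit link; the link width, field layout, and function name are all assumptions.

```python
# Toy illustration of NoC-style packetization/serialization (not any vendor's
# format): a wide write transaction is split into narrow "flits" that share a
# few physical wires instead of requiring a dedicated wide bus.
FLIT_BITS = 32   # assumed link width

def packetize(address, data, addr_bits=32, data_bits=128):
    """Split one write transaction into a header flit plus payload flits."""
    header = address & ((1 << addr_bits) - 1)
    payload = [(data >> shift) & ((1 << FLIT_BITS) - 1)
               for shift in range(0, data_bits, FLIT_BITS)]
    return [header] + payload

flits = packetize(0x8000_0000, 0x1122_3344_5566_7788_99AA_BBCC_DDEE_FF00)
print([hex(f) for f in flits])   # 1 header + 4 payload flits over a 32-bit link
```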
MCU vendors adopt NoC interconnect
Many MCU vendors relied on proprietary interconnect solutions for years, evolving from basic crossbars to custom in-house NoC implementations. However, increasing design complexity encompassing AI/ML integration, security requirements, and real-time processing has made these solutions costly and challenging to maintain.
Moreover, as advanced packaging techniques and die-to-die interconnects become more common, maintaining in-house interconnects has grown increasingly complex, requiring constant updates for new communication protocols and power management strategies.
To address these challenges, many vendors are transitioning to commercial NoC solutions that offer pre-validated scalability and significantly reduce development overhead. For an engineer designing an AI-driven MCU, an NoC’s ability to streamline communication between accelerators and memory can dramatically impact system efficiency.
Another major driver of this transition is power efficiency. Unlike general-purpose systems-on-chip (SoCs), many MCUs must function within strict power constraints. Advanced NoC architectures enable fine-grained power control through power domain partitioning, clock gating, and dynamic voltage and frequency scaling (DVFS), optimizing energy use while maintaining real-time processing capabilities.
Optimizing performance with NoC architectures
The growing number of heterogeneous processing elements has placed unprecedented demands on interconnect architectures. NoC technology addresses these challenges by offering a scalable, high-performance alternative that reduces routing congestion, optimizes power consumption, and enhances data flow management. NoC enables efficient packetized communication, minimizes wire count, and simplifies integration with diverse processing cores, making it well-suited for today’s MCU requirements.
By structuring data movement efficiently, NoCs eliminate interconnect bottlenecks, improving responsiveness and reducing die area. NoC-based designs can achieve up to 30% higher bandwidth efficiency than traditional bus-based architectures, improving overall performance in real-time systems. This enables MCU designers to simplify integration and keep their architectures adaptable for advanced applications in automotive, industrial, and enterprise computing markets.
Beyond enhancing interconnect efficiency, NoC architectures support multiple topologies, such as mesh and tree configurations, to ensure low-latency communication across specialized processing cores. Their scalable design optimizes interconnect density while minimizing congestion, allowing MCUs to handle increasingly complex workloads. NoCs also improve power efficiency through modularity, dynamic bandwidth allocation, and serialization techniques that reduce wire count.
By implementing advanced serialization, NoC architectures can reduce the number of interconnect wires by nearly 50%, as shown in the above figure, lowering overall die area and reducing power consumption without sacrificing performance. These capabilities enable MCUs to sustain high performance while balancing power constraints and minimizing die area, making NoC solutions essential for next-generation designs requiring real-time processing and efficient data flow.
In addition to improving scalability, NoCs enhance safety with features that help achieve ISO 26262 and IEC 61508 compliance. They provide deterministic communication, automated bandwidth and latency adjustments, and built-in deadlock avoidance mechanisms. This reduces the need for extensive manual configuration while ensuring reliable data flow in safety-critical applications.
Interconnects for next-generation MCUs
As MCU workloads grow in complexity, NoC architectures have become essential for managing high-bandwidth, real-time automation, and AI inference-driven applications. Beyond improving data transfer efficiency, NoCs address power management, deterministic communication, and compliance with functional safety standards, making them a crucial component in next-generation MCUs.
To meet increasing integration demands, ranging from AI acceleration to stringent power and reliability constraints, MCU vendors are shifting toward commercial NoC solutions that streamline system design. Automated pipelining, congestion-aware routing, and configurable interconnect frameworks are now key to reducing design complexity while ensuring scalability and long-term adaptability.
Today’s NoC architectures optimize timing closure, minimize wire count, and reduce die area while supporting high-bandwidth, low-latency communication. These NoCs offer a flexible approach, ensuring that next-generation architectures can efficiently handle new workloads and comply with evolving industry standards.
Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.
Related Content
- SoC Interconnect: Don’t DIY!
- What is the future for Network-on-Chip?
- Why verification matters in network-on-chip (NoC) design
- SoC design: When is a network-on-chip (NoC) not enough
- Network-on-chip (NoC) interconnect topologies explained
The post How NoC architecture solves MCU design challenges appeared first on EDN.
Aftermarket drone remote ID: Let’s see what’s inside thee

The term “aftermarket” finds most frequent use, in my experience, in describing hardware bought by owners to upgrade vehicles after they initially leave the dealer lot: audio system enhancements, for example, or more powerful headlights. But does it apply equally to drone accessories? Sure (IMHO, of course). For what purposes? Here’s what I wrote last October:
Regardless of whether you fly recreationally or not, you also often (but not always) need to register your drone(s), at $5 per three-year timespan (per-drone for commercial operators, or as a lump sum for your entire drone fleet for recreational flyers). You’ll receive an ID number which you then need to print out and attach to the drone(s) in a visible location. And, as of mid-September 2023, each drone also needs to (again, often but not always) support broadcast of that ID for remote reception purposes…
DJI, for example, firmware-retrofitted many (but not all) of its existing drones with Remote ID broadcast capabilities, along with including Remote ID support in all (relevant; hold that thought for next time) new drones. Unfortunately, my first-generation Mavic Air wasn’t capable of a Remote ID retrofit, or maybe DJI just didn’t bother with it. Instead, I needed to add support myself via a distinct attached (often via an included Velcro strip) Remote ID broadcast module.
I’ll let you go back and read the original writeup to discern the details behind my multiple “often but not always” qualifiers in the previous two paragraphs, which factor into one of this month’s planned blog posts. But, as I also mentioned there, I ended up purchasing Remote ID broadcast modules from two popular device manufacturers (since “embedded batteries don’t last forever, don’cha know”), Holy Stone and Ruko. And…
I also got a second Holy Stone module (since this seems to be the more popular of the two options) for future-teardown purposes.
The future is now; here’s a “stock” photo of the device we’ll be dissecting today, with dimensions of 1.54” x 1.18” x 0.51”/3.9 x 3 x 1.3 cm and a weight of 13.9 grams (14.2 grams total, including Velcro mounting strips) and a model number variously reported as 230218 and HSRID01:
Some outer box shots to start (I’ve saved you from boring photos of the blank sides):
And opening the box, its contents, with our victim in the middle, within a cushioned envelope:
At bottom is the user manual; I can’t find a digital copy of it on the Holy Stone support site, but Manuals+ hosts it in both HTML and PDF formats. You can also find this documentation (among other interesting info) on the FCC website; the FCC ID, believe it or not, is 2AJ55HOLYSTONEBM. At top is the Velcro mounting pair, also initially cushion-packaged (for unknown reasons):
And now, fully freed from its prior captivity, is our patient, as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (once again, I’ve intentionally saved you from exposure to boring blank-side shots):
A note on this next one; the USB-C port shown is used to recharge the embedded battery:
Prior to disassembly, I plugged the device into my Google Pixel Buds Pro earbuds charging cable (which has USB-C connectors on both ends) to test charge functionality, but the left-side battery indicator LED on the front panel remained un-illuminated. That said, when I punched the device’s front panel power switch, it came to life. The result wasn’t definitive; the battery could have been precharged on the assembly line, with the charging circuitry inside still inoperable.
But, on a hunch, I then instead plugged it into the power cable for my Google Chromecast with Google TV, which has USB-A on the power-source end, and the charge-status LED lit up and began blinking, indicative of charging in progress. What’s with Chinese-sourced gear and its non-cognizance of USB Power Delivery negotiation protocols? The user manual shows and discusses an “original charging cable” with USB-A on one end which, had it actually been included as implied, would have constrained the possible charging-source options. Just sayin’.
Speaking of “circuitry inside,” note the visible screw head at the bottom of this next shot:
That’s, I suspect, our pathway inside. Before we dive in, however, what should we expect to see there, circuitry-wise? Obviously there’s a battery, likely Li-ion in formulation, along with the aforementioned associated charging circuitry for it. There’s also bound to be some sort of system SoC, plus both volatile (RAM) and nonvolatile memory, the latter holding both the program code and user-programmable FAA-assigned Remote ID. Broadcast of that ID can occur over Bluetooth, Wi-Fi or both, via an accompanying antenna. And for geolocation purposes, there’ll need to be a GPS subsystem, comprising both another antenna and a receiver.
Now that the stage is set, let’s get inside, after both removing the previously shown screw and slicing through the serial number sticker on one side:
Voila:
The wire in the lower right corner is, I suspect, the wireless communications antenna. Given its elementary nature, along with the lack of mention of Wi-Fi in the product documentation, I’m guessing it’s Bluetooth-only. To its left is the square mostly-tan GPS antenna. In the middle is the multifunction switch (power cycling and user (re)configuration). Above it are the two LEDs, for power/charging status (left) and current operating mode (right).
And on both sides of it are Faraday cages, the lids of which we’ll need to rip off (hold that thought) before we can further investigate their contents.
The PCB subsequently lifts right out of the other (back) case half:
revealing the “pouch” battery adhesive-attached to the PCB’s other side:
Peel the battery away (revealing a near-blank PCB underneath).
Peel off the tape, and the battery specs (3.7V, 150mAh, 0.55Wh…why do battery manufacturers frequently feel the need to redundantly provide both of the latter two? Can’t folks multiply anymore?) come into view:
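For the record, the arithmetic checks out: 3.7 V × 0.150 Ah = 0.555 Wh, which the label rounds to the printed 0.55 Wh.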
Back to the front of the PCB, post-removal of the two Faraday cages’ tops, as foreshadowed previously:
Now fully visible is the USB-C connector, alongside a rubberized ring that had been around it when fully assembled. As for what’s inside those now-mangled Faraday cages, let’s zoom in:
The landscape-dominant IC within the left-located Faraday cage, unsurprisingly given its GPS antenna proximity, is Beken’s BK1661, a “fully integrated single-chip L1 GNSS [author note: Global Navigation Satellite System] solution” that, as the acronym implies, supports not only GPS L1 but “Beidou B1, Galileo E1, QZSS L1, and GLONASS G1,” for worldwide usage.
The one to the right, on the other hand, was a mystery (although, given its antenna proximity, I suspected it handled Bluetooth transceiver functionality, among other things) until I came across an enlightening Reddit discussion. The company logo mark on the top of the chip is a combination of the letters J and L. And the part number underneath it is:
BP0E950-21A4
Here’s an excerpt of the initial post in the Reddit discussion thread, titled “How to identify JieLi (JL/π) bluetooth chips”:
If you like to open things, particularly bluetooth audio devices, you may have seen chips from manufacturers like Qualcomm, Bestechnic (BES), Airoha, Vimicro WX, Beken, etc.; but cheaper devices have those mysterious chips marked with A3 or AB (from Bluetrum), or those with the JL or “pi” logo (from JieLi).
Bluetrum and JieLi chips have a printed code (like most IC chips), but those codes don’t match any results on Google or the manufacturer’s websites. Why does this happen? Well, it looks like the label on those chips is specific to the firmware they’re running, and there’s no way to know which chip it is exactly (unless the manufacturer of your bluetooth device displays that information somewhere on the package).
I was recently looking at the datasheet for some JieLi chips I have lying around, and noticed something interesting: on each chip the label is formatted like “abxxxxxxx-YYY”, “acxxxxx-YYYY” or similar, and the characters after the “-” look like they indicate part of the model number of the IC.
…
In conclusion, if you find a JL chip inside your device and the label does not show any results, use the last characters (the ones after the “-“) and add ac69 or ac63 at the beginning (those are the series of the chip, like AC69xx or AC63xx. There are more series that I don’t remember, so if those codes don’t work for you, try searching for others).
…
Also, if you find a chip with only one number before the letter in the character group after the “-“, add a 0 before it and then add a series code at the beginning. (For example: 5A8 -> 05A8 -> AC6905A)
By doing so you will probably find the pinout and datasheet of your bluetooth IC.
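For the curious, here is a hedged Python sketch of that labeling heuristic as the Reddit post describes it. The series prefixes are only the two quoted above, the function name is mine, and the output is a list of candidate part numbers to search for, not a definitive identification.

```python
# Hedged sketch of the Reddit-described labeling heuristic; the series prefixes
# and the example label are the ones quoted above, nothing more authoritative.
KNOWN_SERIES = ["AC69", "AC63"]   # series mentioned in the post; others exist

def jieli_candidates(label):
    """Return possible JieLi part numbers for a firmware-specific chip label."""
    suffix = label.split("-")[-1]   # characters after the dash, e.g. "21A4"
    # If only one digit precedes the first letter, pad with a leading zero
    # (the post's "5A8 -> 05A8" rule).
    first_alpha = next((i for i, ch in enumerate(suffix) if ch.isalpha()), None)
    if first_alpha == 1:
        suffix = "0" + suffix
    return [series + suffix for series in KNOWN_SERIES]

print(jieli_candidates("BP0E950-21A4"))   # -> ['AC6921A4', 'AC6321A4']
```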
Based on the above, what I think we have here is the AC321A4 RISC-based microcontroller with Bluetooth support from Chinese company ZhuHai JieLi Technology. To give you an idea of how much (or, perhaps more accurately, little) it costs, consider the headline of an article I came across on a similar product from the same company, “JieLi Tech AC6329C4 is Another Low Cost MCU but with Bluetooth 5.0 Support.” Check out the price tag in the associated graphic:
That said, an AC6921A also exists from the company, although it seems to be primarily intended for stereo audio Bluetooth, so…
That’s what I’ve got for today, folks. Sound off in the comments with your thoughts!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The (more) modern drone: Which one(s) do I now own?
- LED headlights: Thank goodness for the bright(nes)s
- Drone regulation and electronic augmentation
- Google’s Chromecast with Google TV: Dissecting the HD edition
The post Aftermarket drone remote ID: Let’s see what’s inside thee appeared first on EDN.
Building a low-cost, precision digital oscilloscope – Part 2

Editor’s Note:
In this DI, high school student Tommy Liu modifies a popular low-cost DIY oscilloscope, enhancing its input noise rejection with anti-aliasing filtering and reducing its ADC noise with IIR filtering.
Part 1 introduces the oscilloscope design and simulation.
This part (Part 2) shows the experimental results of this oscilloscope.
Experimental Results
Three experiments were conducted to evaluate the performance of our precision-enhanced oscilloscope using both analog and digital signal processing techniques.
First, we test the effect of the new anti-aliasing filter described in Part 1. For this purpose, a 2-kHz sinusoidal signal is amplitude modulated (AM) with a 961-kHz sinusoidal waveform by a Rigol DG1022Z signal generator (Rigol Technologies, Inc., 2016) and is used as the analog input to the oscilloscope.
In this scenario, the low-frequency (2 kHz) sinusoidal waveform is our signal, while the high-frequency tones caused by modulation with 961 kHz sinusoidal represent high frequency noises at the signal source. In the experiment, a 10% modulation depth is used to make the high frequency noise easily identifiable by sight. The time division is set at 20 µs with the ADC sampling frequency of 500 KSPS.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Results of anti-aliasing filter
The original DSO138-mini lacks anti-aliasing filter capability because its analog front-end’s -3-dB cut-off frequency (around 500 kHz to 800 kHz) sits well above the Nyquist frequency of the 500-KSPS sampling. As a result, the high-frequency noise tones caused by the modulation pass through the analog front-end without much attenuation and are sampled by the ADC at 500 KSPS. This creates aliasing noise tones at the ADC output, which can be clearly seen in the displayed waveform on the DSO138-mini (Figure 1).
Figure 1 The aliasing noise tones at the ADC output on the DSO138-mini.
Our new anti-aliasing filter provides a significant lower -3-dB cut-off frequency of around 100 kHz, and effectively filters away most of the out-of-band high frequency noises, in this case, the noise tones caused by the signal modulation with 961 kHz sinusoidal. Figure 2 is a screenshot with the new anti-aliasing filter, indicating a significant reduction in the aliasing noise.
Figure 2 Reduction of the aliasing noise with the new anti-aliasing filter.
Detailed analysis on the captured data with the new anti-aliasing filter indicates a 10 dB to 15 dB (3.2x to 5.6x) improvement over the original DSO138-mini on noise rejection at frequencies higher than the oscilloscope’s signal bandwidth.
In practical applications, high-frequency noise with a magnitude of a few millivolts RMS is not uncommon. A 5-mV RMS noise at near 900 kHz is attenuated to 0.73 mV RMS with our new anti-aliasing filter, versus 2.48 mV RMS with the original DSO138-mini. With an ADC full-scale input range of 3.3 V, 0.73 mV RMS corresponds to an effective resolution well above 10 bits (ENOB). With the original DSO138-mini, the ENOB would be at only an 8-bit level.
Results of digital post-processing filter
The second test evaluates the performance of the digital post-processing filter. As explained in Part 1, besides the noise at the analog input, other noise sources in the oscilloscope, such as noise from the ADC inside the MCU, degrade measurement precision. This is evident in Figure 3, a screenshot of the DSO138-mini with its Self-Test mode turned on. In Self-Test mode, an internally generated pulse signal, less susceptible to noise from an external signal source, is used to test and fine-tune the oscilloscope. We can see that there are still ripple noises on the pulse waveform.
Figure 3 Ripples on internally generated pulse signal during self-test mode on the DSO138-mini.
It is not easy to identify the magnitude of these ripples due to the limited pixel resolution of the DSO138-mini’s LCD display (320 x 240). We transferred the captured data to a PC via DSO138-mini’s UART-USB link for precise data analysis. Figure 4 shows the waveform of the captured self-test pulses on a PC. The ripple noises are calculated and shown in Figure 5.
Figure 4 Captured self-test pulse signal waveform on PC for more precision data analysis.
Figure 5 Magnitude of noises on self-test pulse with no digital post-processing.
Considering the voltage division setting (1 V, -20 dB on Input) and attenuation setting (x1), the ripple on the self-test pulse has a peak-peak magnitude of 8 mV. This error is about 10 LSB and the calculated RMS value is about 3 mV, yielding an effective resolution of 8.3 bits. Digital post-processing can be used to suppress some of these noises.
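As a sanity check on those figures, the short sketch below recomputes them, assuming the 3.3-V full-scale, 12-bit ADC cited elsewhere in this article; the function name is mine.

```python
# Sanity check of the effective-resolution numbers quoted above, assuming a
# 3.3-V full-scale, 12-bit ADC (as cited elsewhere in this article).
import math

FULL_SCALE = 3.3            # volts
LSB = FULL_SCALE / 2**12    # ~0.81 mV

def enob_from_rms_noise(v_rms):
    """Effective number of bits for a given RMS noise on a 3.3-V range."""
    # An ideal N-bit quantizer has RMS quantization noise of LSB/sqrt(12);
    # invert that relationship for the measured noise.
    return math.log2(FULL_SCALE / (v_rms * math.sqrt(12)))

print(f"8 mV p-p ripple is about {0.008 / LSB:.0f} LSB")
print(f"3.0 mV RMS noise -> {enob_from_rms_noise(3.0e-3):.1f} effective bits")
```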
Figure 6 is the waveform after first-order infinite impulse response (IIR) digital filtering (α = 0.25) is performed on the PC, and Figure 7 shows the noises on the self-test pulse.
After IIR filtering, the noise RMS value reduces to about 0.75 mV, or by a factor of 4. This brings back the effective resolution from 8.3 bits to 10.4 bits. We notice that the rise and fall transition edges of the pulse look a bit less sharp than the signal before post-processing.
This is due to the low-pass nature of the IIR filter. With α=0.25, the passband (-3 dB) is at around 23 kHz, covering an input bandwidth up to audio frequencies (20 kHz). For tracking faster signals, such as fast transition edges of a pulse signal, we can relax α to a higher value allowing for more input bandwidth.
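For readers who want to try this on their own captures, here is a minimal sketch of one common first-order IIR form, exponential smoothing, whose α = 0.25 response at the 500-kSPS rate used in these tests lands at the ~23-kHz corner quoted above. The variable names are mine, and the author’s PC-side implementation may differ in detail.

```python
# Minimal sketch of a first-order IIR smoothing filter of the form
# y[n] = a*x[n] + (1-a)*y[n-1], plus its approximate -3-dB corner at 500 kSPS.
import math

def iir_first_order(samples, alpha=0.25):
    """Apply y[n] = alpha*x[n] + (1 - alpha)*y[n-1] to a sample sequence."""
    out, y = [], 0.0
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def cutoff_hz(alpha, fs):
    """Approximate -3-dB frequency of the filter above."""
    return -fs * math.log(1.0 - alpha) / (2.0 * math.pi)

smoothed = iir_first_order([0, 1, 1, 1, 1])          # step response
print([round(v, 2) for v in smoothed])               # [0.0, 0.25, 0.44, 0.58, 0.68]
print(f"alpha = 0.25 at 500 kSPS -> f(-3 dB) ~ {cutoff_hz(0.25, 500e3)/1e3:.0f} kHz")
```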
Figure 6 Self-test pulse with first-order IIR digital filter where α = 0.25.
Figure 7 Noises on self-test pulse with first-order IIR filter where RMS noise reduces to ~0.75 mV.
The effects of both filters
Finally, we test the overall effect of both the new anti-aliasing filter and the digital post-processing by feeding a 2-kHz sinusoidal signal from a signal generator into our new oscilloscope. We can see from Figure 8 that, even with the new anti-aliasing filter, there is still some noise on the waveform, due to the ADC noise inside the MCU. The RMS value of the noise is about 2.8 mV and the effective resolution is limited to below 9 bits.
Figure 8 Noises on a 2 kHz sinusoidal input waveform despite having the new anti-aliasing filter.
As shown in Figure 9, with the first-order IIR filter in effect, the waveform cleans up. The RMS noise reduces to 0.7 mV and, again, this brings up the effective resolution from below 9 bits to above 10 bits. Other input frequencies, up to 20 kHz (audio), have also been tested and an overall effective resolution of 10 bits or more was observed with the new anti-aliasing filter and the digital post-processing algorithm.
Figure 9 A 2 kHz sinusoidal input waveform after digital post-processing where the RMS noise reduces to 0.7 mV.
Low-cost oscilloscope
Many traditional low-cost DIY-type digital oscilloscopes have two major technical drawbacks, namely inadequate anti-aliasing capability and high ADC noise. As a result, these oscilloscopes can only reach an effective resolution of 8 bits or less, even though most of them are based on an MCU equipped with a built-in 12-bit ADC.
These problems keep DIY oscilloscopes out of more demanding high school projects. To address these issues, a well-designed first-order analog low-pass filter at the oscilloscope’s analog front-end, plus a programmable first-order IIR digital post-processing filter, are implemented on a popular low-cost DIY platform (DSO138-mini).
Experimental results verified that the new oscilloscope could maintain an overall effective resolution of 10 bits or above with the presence of high frequency noises at its analog input, up to an input bandwidth of 20 kHz and real-time sampling of 1 MSPS. The implementations are inexpensive—the BOM cost of the new anti-aliasing filter is just the cost of a ceramic capacitor (far less than a dollar), and the digital post-processing program is completely implemented in the PC software.
Costing less than fifty dollars, this precision digital oscilloscope can be used in many high schools, including those without the funds for pricey commercial models. It enables students to perform a wide range of tasks: from first-time electrical signal capture and observation to more demanding precision measurement and signal analysis for complex electrical and electronic projects.
Tommy Liu is currently a junior at Monta Vista High School (MVHS) with a passion for electronics. A dedicated hobbyist since middle school, Tommy has designed and built various projects ranging from FM radios to simple oscilloscopes and signal generators for school use. He aims to pursue Electrical Engineering in college and aspires to become a professional engineer, continuing his exploration in the field of electronics.
Related Content
- Building a low-cost, precision digital oscilloscope—Part 1
- Build your own oscilloscope probes for power measurements (part 1)
- Build your own oscilloscope probes for power measurements (part 2)
- Basic oscilloscope operation
- FFTs and oscilloscopes: A practical guide
The post Building a low-cost, precision digital oscilloscope – Part 2 appeared first on EDN.
The advent of AI-empowered fab-in-a-box

What’s a fab-in-a-box, and why is it far more efficient in terms of cost, space, and chip manufacturing operations? Alan Patterson speaks to the CEOs of Nanotronics and Pragmatic to dig deeper into how these $30 million fabs work while using AI to boost yields and make these mini-fabs more cost-competitive. These “cubefabs” are also worth attention because many markets, including the United States, aim to bolster local chip manufacturing.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Semiconductor Industry Faces a Seismic Shift
- Semiconductor Capacity Is Up, But Mind the Talent Gap
- Building Semiconductor Capacity for a Hotter, Drier World
- Tapping AI for Leaner, Greener Semiconductor Fab Operations
- SEMICON Europa: Building a Sustainable US$1 Trillion Semiconductor Industry
The post The advent of AI-empowered fab-in-a-box appeared first on EDN.
Single sideband generation

In radio communications, one way to generate single sideband (SSB) signals is to feed a carrier and a modulating signal into a balanced modulator to create a double sideband (DSB) signal and then filter out one of the two resulting sidebands.
If you filter out the lower sideband, you’re left with the upper sideband and if you filter out the upper sideband, you’re left with the lower sideband. However, another way to generate SSB without that filtering has been called “the phasing method.”
Let’s look at that in the following sketch in Figure 1.
Figure 1 Phasing method of generating an SSB signal where the outputs of Fc and Fm are 90° apart with respect to each other
The outputs of the carrier (Fc) quadrature phase shifter and the modulating signal (Fm) quadrature phase shifter need only be 90° apart with respect to each other. The phase relationships to their respective inputs are irrelevant.
Four cases of SSB generation
In the following equations, those two unimportant phase shifts are called “phi” and “chi” for no particular reason other than their pronunciations happen to rhyme. Mathematically, we examine four cases of SSB generation.
Case 1, where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions (Figure 2). Case 2, where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions (Figure 3).
Figure 2 Mathematically solving for upper and lower side bands where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions.
Figure 3 Mathematically solving for upper and lower side bands where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions.
Case 3, where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions (Figure 4). Case 4, where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions (Figure 5).
Figure 4 Mathematically solving for upper and lower side bands where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions.
Figure 5 Mathematically solving for upper and lower side bands where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions.
The quadrature phase shifter for the carrier signal only needs to operate at one frequency, which is that of the carrier itself and which we have called “Fc”. The quadrature phase shifter for the modulating signal however has to operate over a range of frequencies. That device has to develop 90° phase shifts for all the frequency components of that modulating signal and therein lies a challenge.
90° phase shifts for all frequency components
There is a mathematical operator called the Hilbert transform which is described here. There, we find an illustration of the Hilbert transformation of a square wave. From that page, we present the sketch in Figure 6.
Figure 6 A square wave and its Hilbert transform, bringing about a 90° phase shift of each frequency component of the input signal in its own time base.
The underlying mathematics of the Hilbert transform is described in terms of a convolution integral, but in another sense, you can look at the result as bringing about a 90° phase shift of each frequency component of the input signal in its own time base (in the above case, a square wave). This phase-shift property is the very thing we want for our modulating signal in SSB generation.
In the case of Figure 7, I took each frequency component of a square wave—by which I mean the fundamental frequency plus a large number of properly scaled odd harmonics—and phase shifted each of them by 90° in their respective time frames. I then added up those phase-shifted terms.
Figure 7 A square wave and the result of 90° phase shifts of each harmonic component in that square wave.
Please compare Figure 6 to the result in Figure 7. They look very much the same. The finite number of 90° phase-shift and summing steps very nicely approximates the Hilbert transform.
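Here is a small numerical sketch of that Figure 7 construction (the harmonic count and time base are arbitrary choices of mine): sum the odd harmonics of a square wave with each one shifted 90° in its own period, and note how much taller the peaks of the shifted sum are, which previews the “spiky signals” point made later.

```python
# Reproducing the Figure 7 construction numerically: sum the odd harmonics of a
# square wave, each shifted by 90 degrees in its own period. Harmonic count and
# time base are arbitrary choices.
import numpy as np

t = np.linspace(0, 2, 4000, endpoint=False)   # two periods of a 1-Hz square wave
n_harmonics = 100

square = np.zeros_like(t)
shifted = np.zeros_like(t)
for k in range(n_harmonics):
    n = 2 * k + 1                                              # odd harmonics only
    square  += (4 / np.pi) * np.sin(2 * np.pi * n * t) / n     # Fourier square wave
    shifted += (4 / np.pi) * -np.cos(2 * np.pi * n * t) / n    # each term shifted 90 deg

print(f"square-wave peak ~ {square.max():.2f}, shifted-sum peak ~ {shifted.max():.2f}")
```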
The ideal case for SSB generation can be expressed as follows: starting with a carrier signal, you create a second carrier signal at the same frequency as the first, but phase shifted by 90°. Putting this another way, the first carrier signal and the second carrier signal are in quadrature with respect to one another.
You then take your modulating signal and generate its Hilbert transform. You now have two modulating signals in which each frequency component of the one is in quadrature with the corresponding frequency component of the other.
Using two balanced modulators, you apply one carrier and one modulating signal to one balanced modulator and apply the other carrier and the other modulating signal to the other balanced modulator. The outputs of the two balanced modulators are then either added to each other or subtracted from each other. Based on the four mathematical examples above, you end up with either an upper sideband SSB signal or a lower sideband SSB signal.
This offers high performance and thus the costly filters described in the first paragraph above are not needed.
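To see the sideband selection play out numerically, here is a hedged sketch of the phasing method using SciPy’s hilbert() for the modulating-signal 90° network. The carrier and tone frequencies are arbitrary illustrations, and the sum/difference signs follow the common convention rather than any specific figure above.

```python
# Numeric sketch of the phasing method, using SciPy's Hilbert transformer for
# the modulating-signal 90-degree network. Frequencies are arbitrary examples.
import numpy as np
from scipy.signal import hilbert

fs = 48_000                      # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)   # 50 ms of signal
fc, fm = 10_000, 1_000           # carrier and (single-tone) modulating frequencies

m = np.cos(2 * np.pi * fm * t)           # modulating signal
m_hat = np.imag(hilbert(m))              # its Hilbert transform (90-degree shifted)

# Two balanced modulators, outputs summed or differenced:
usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)
lsb = m * np.cos(2 * np.pi * fc * t) + m_hat * np.sin(2 * np.pi * fc * t)

for name, sig in (("USB", usb), ("LSB", lsb)):
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    peak = np.fft.rfftfreq(len(sig), 1 / fs)[np.argmax(spectrum)]
    print(f"{name} energy peaks near {peak:.0f} Hz")   # ~11 kHz and ~9 kHz
```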
Practically applying a Hilbert transform
As a practical matter, however, instead of actually making a true Hilbert transformer (I have no idea how, or even if, that could be done), we can make a variety of different circuits which will give us the 90° phase shifts we need for our modulating signals over some range of operating frequencies, with each frequency component 90° shifted in its own time frame.
One of the earliest purchasable devices for doing this over the range of speech frequencies was a resistor-capacitor network called the 2Q4 which was made by a company called Barker and Williamson. The 2Q4 came in a metal can with a vacuum-tube-like octal base. Its dimensions were very close to that of a 6J5 vacuum tube, but the can of the 2Q4 was painted grey instead of black. (Yes, I know that I’m getting old.)
Another approach to obtaining the needed 90° phase relationships of the modulating signals is by using cascaded sets of all-pass filters. That technique is described in “All-pass filter phase shifters.”
One thing to note is that the Hilbert transformation itself and our approximation of it can lead to some really spiky signals. The spikiness we see for the square wave arises for speech waveforms too. This fact has an important practical implication.
SSB transmitters tend to have high peak output powers versus their average output power levels. This is why in amateur radio, while there is an FCC-imposed operating power limit of 1000 watts, the limit for SSB transmission is 2000 watts peak power.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- All-pass filter phase shifters
- Spectral analysis and modulation, part 5: Phase shift keying
- Single-sideband demodulator covers the HF band
- SSB modulator covers HF band
- Impact of phase noise in signal generators
- Choosing a waveform generator: The devil is in the details
- Modulation basics, part 1: Amplitude and frequency modulation
The post Single sideband generation appeared first on EDN.