Walmart’s onn. full HD streaming device: Still not thick, just don’t call it a stick

A month back, I tore down Walmart’s onn. 4K Streaming Box, the Google TV-based successor to the company’s initial Android TV-based UHD Streaming Device that I’d dissected mid-last year. And as promised in last month’s coverage, this time I’ll be taking a look at the guts of its “stick” form factor sibling, the Google TV-based Full HD Streaming Device, the successor to the Android TV-based FHD streaming stick predecessor that went “under the knife” last December.
Device, stick, or box?
Read through those previous two sentences again and you might catch the meaning behind the “just don’t call it a stick” bit in this writeup’s title; similarly, you might get why last month I wrote:
Also, it’s now called a “box”, versus a “device”. Hold that latter thought until next month…
The word “device” seems to have inconsistent form factor association within Walmart. In the first-generation onn. product line, it referred to the “box”, with the rectangular form factor explicitly called a “stick”. This time around, the “stick” is the “device”, with the square form factor referred to as a “box” instead. Then again, as I mentioned last month, the first generation “box’s” UHD maximum output resolution is now instead referred to as “4K”, and similarly, the “stick” form factor has transitioned from “2K FHD” to “Full HD” in the product name, so…
Anyway…in last month’s piece, I pointed out the surprising-to-me commonality between the hardware in the two “box” generations’ designs. Will the same be the case with the two generations of “stick” devices? And as with Walmart’s “box” devices in comparison to the TiVo RA2400 Stream 4K, will I also encounter commonality between Walmart’s “sticks” and other manufacturers’ devices? There’s only one way to find out…let’s begin with a “stock” shot:
Now for the actual packaging of today’s patient, which set me back $14.88 in November 2023.
The joke never seems to get old, at least for me…you might disagree…
It’s a box-within-a-box!
Flip open the top flap, and we get our first glimpse of the still-protected-by-plastic device inside, along with a sliver of literature (PDF here).
Here they are now freed from their cardboard captivity, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
Underneath are the AC power adapter, an HDMI extension cable, the remote control and a set of batteries for the latter:
Here’s a close-up of the AC power adapter’s micro-USB connector:
and its markings; interestingly, the max input current is higher than that for last month’s “box” PSU (0.25 A vs 0.2 A), although the output current specs are the same (1 A). I suspect that the input current variance is just efficiency-reflective of the sourcing deviation between the two PSUs, not of the respective systems’ actual power requirements. In fact, I’m expecting a lower-power-consumption SoC inside this time, along with decreased memory and the like.
Here are the “male” and “female” ends of the HDMI extension cable:
And here’s the battery compartment-exposed backside of the remote control, which appears to be identical to last month’s “box” remote:
Now for our patient, with dimensions of 3.54 x 1.18 x 0.51 inches (90.5 x 30 x 13 mm), quite close to those of its Android TV-based precursor (3.81 x 1.39 x 0.61 inches). That said, there are some physical design variations between them:
- No passive airflow vents either top or bottom this time, and
- Last time there was no status LED included in the design, and the recessed reset switch and micro-USB power input were on opposite sides of the device. This time, the micro-USB power input is on one end (with the HDMI connector again on the other), and a status LED has been added, next to the reset switch.
A closeup of that last shot reveals, among other things, the FCC ID (2AYYS-8822K2VTG, and no, reminiscent of what I also said last month, I don’t know why there are 21 different FCC documents posted for this ID, either!).
Applying a spudger to the gap between the two case halves gets them apart, with damage to only one of the plastic tabs.
For orientation purposes, we’re looking at the inside of the top half of the device case, along with the top of the PCB (“top” and “bottom” being somewhat meaningless with a “stick” form factor, as I’ve noted before, but I’m going by where the brand logo is stamped on the case):
The PCB then lifts easily out of the remaining bottom case half.
Here’s the inside of the bottom half of the case, once again accompanied by the top of the PCB:
and now with the PCB flipped over to reveal its bottom side. Note, for example, the light guide (aka, light pipe, light tube) that, as with the one we saw last month, routes the output of the LED on the PCB (at bottom, to the right of the Faraday cage) to the outside world.
Speaking of Faraday cages, let’s flip back to the PCB topside and begin our disassembly. En route to that destination, here are snapshots of both sides:
The heat sink on top clung stubbornly to the Faraday cage below it but finally relented in the face of my intense spudger attention.
The Faraday cage itself was much less removal-resistant:
Focusing in proved to be…interesting, and initially frustrating.
The IC on the left was easy to ID, although the marking was faint (stay tuned for another photo where it’s clearer, courtesy of augmented lighting). It’s Amlogic’s S805X2, another in a long line of examples of onn. devices based on application processors from this supplier. The S805X2 was introduced in Q2 2020, and Wikipedia lumps it into the company’s fourth-generation product line, in seeming contrast to the “2” at the end of its part number. The “X”, as I explained last month and versus the “Y” version seen in that teardown, refers to its integration of wired Ethernet support, which is a bit curious, particularly for a “stick” form factor device, albeit not unique (note, for example, Ethernet over micro-USB on the Chromecast Ultra).
Versus the Amlogic S805Y-B seen in the Android TV-based “stick” predecessor, the S805X2 bumps up the quad-core Arm Cortex-A35 processor cluster’s clock speed from 1.5 GHz to 1.8 GHz (vs 2 GHz in the Amlogic S905Y4 seen last month, however), upgrades the GPU from the Mali-450MP to the Mali-G31 MP2, and (like last month’s S905Y4) adds decoding support for the AV1 video codec. And speaking of Chromecasts, I need to give credit where it’s due (the Reddit crowd) on this one; it’s essentially-to-exactly the same SoC found in the “HD” variant of Google’s Chromecast with Google TV. The only variance, for which I can’t find clarifying documentation, is that in this case it’s marked “S805X2-B” whereas the one in Google’s design is the “S805X2G”.
Move to the right and you’ll encounter another example of Chromecast with Google TV commonality…sort of. And this one caused me no shortage of teeth-gnashing until I eventually figured it out. Revisiting my last-December teardown of this device’s Android TV-based predecessor, you’ll find that it contains 1 GByte of system DRAM, comprised of two 4 Gbit memory devices. Last month’s “box” sibling, conversely, touts 2 GBytes of system DRAM, assembled from two 8 Gbit memories. I already knew from the product specs on Walmart’s website that this device embeds 1.5 GBytes of DRAM. And so, since I’d thought memory pretty much always is sold in binary-increment capacities (1, 2, 4, 8, 16…), I figured that as with the similarly 1.5 GByte-equipped Chromecast with Google TV HD Edition, I’d find the two-device combo of 8 Gbit and 4 Gbit memories inside.
Problem is, though, that after identifying the other two notable ICs in this design, which you’ll see next, I could only find one other chip: this one. And its markings were unlike any I’d ever seen before. Again, they’re quite faint under ambient light; I tried both a loupe and supplemental lighting to make at least the company logo clearer for both me and thee:
Here’s the four-line mark:
[COMPANY LOGO] ARTMEM
ATL4X12324
M102
325M10
Doing web searches for “ARTMEM”, “ATL4X12324” and the combination of the two got me…basically nothing. Eventually, however, I stumbled across an obscure page on MIT’s website that clued me in to the likely full company name, Artmem Technology. That website is totally in Chinese, however, which didn’t help me at all. But after searching again on the full “Artmem Technology” phrase, I came across the website of another China-based semiconductor supplier, Rayson HI-Tech, which offers an English-language option and identifies Artmem as its subsidiary.
Progress! Diving further into Rayson’s website, specifically to the “Industrial/Automotive LPDDR4/4X” product page, I indeed found a 1.5 GByte product variant (along with other non-binary increment options…3 GBytes and 6 GBytes, specifically) with the following parameters:
- Product model: RS384M32LX4D2BNR-53BT
- Bit width: x32
- Speed (presumably max, and operating voltage-dependent): 3733 Mbps
- Encapsulation mode: FBGA 200-ball
- (Operating) voltage: 1.8/1.1/0.6V
- (Operating) temperature: 25-85°C
I’m guessing this is our chip, with alternate (subsidiary) supplier branding. Is there an atypical 12 Gbit monolithic memory die inside that package? Or did the company combine more common 8 Gbit and 4 Gbit die side-by-side under a single package “lid”? Or was it a three-die 4 Gbit “stack”? Or did the supplier just “down-bin” a 16 Gbit die to come up with the 12 Gbit guaranteed capacity? I ran this mystery by my long-time colleague Jim Handy, semiconductor memory expert at market analyst firm Objective Analysis, and he had several insights:
- Non-binary packaged unit capacities are more common than I’d realized, especially for LPDDR DRAM variants (which are also commonly spec’d in GByte vs Gbit densities)
- His guess is that there’s a three-die “sandwich” inside, with each die 4 Gbit in capacity, likely sourced from CXMT and/or JHICC, the two major DRAM makers in China, and
- The built-in translation support offered by Google’s Chrome browser works pretty well, judging from the screenshots of Artmem Technology’s English language auto-converted website that he sent me (I’m normally a Mozilla Firefox guy).
Please respond in the comments, readers, if you have additional informed insights on this!
The other notable IC—wireless module, to be precise, as you’ve probably already guessed from its antennas’ proximity—on this side of the PCB and to the right of the mystery DRAM, is much easier to ID. Like its predecessor in last December’s teardown, and unlike its sibling in last month’s teardown, it’s clearly marked on top. This is the 6222B-SRC from Fn-Link, containing a Realtek RTL8822CS Bluetooth-plus-Wi-Fi transceiver (which you can see in the internal photos on the FCC website). There was no separate (PCB-embedded or otherwise) Bluetooth antenna that I could see in this particular design, and Fn-Link’s documentation subsequently confirmed my suspicion that the module optionally supports multiplexing the 2.4-GHz Bluetooth and Wi-Fi functions on the same antenna:
Speaking of which, here are some closeups of those antennas:
Last, but not least, let’s flip the PCB back over again and see what’s underneath that bottom-side Faraday cage we earlier glimpsed:
It’s the nonvolatile memory counterpart to the earlier volatile DRAM; a FORESEE FEMDNN008G-08A39 8 GByte eMMC NAND flash memory module. FORESEE is one of the brand names of a Chinese company called Longsys, who had also acquired the Lexar brand from Micron Technology back in 2017. And speaking of “see”, I think that’s all to see today, at least from me. Let me know what I might have overlooked in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost
- The Google Chromecast Gen 3: Gluey and screwy
- The Google Chromecast Gen 2 (2015): A form factor redesign with beefier Wi-Fi, too
- Google’s Chromecast with Google TV: Dissecting the HD edition
Intel ups the advanced packaging ante with EMIB-T

Embedded Multi-die Interconnect Bridge-T (EMIB-T) was a prominent highlight of the Intel Foundry Direct Connect event. Intel is promoting this advanced packaging technology as a key building block for high-speed chiplet designs and has partnered with major EDA and IP houses to accelerate implementations around EMIB-T technology.
As the nomenclature shows, EMIB-T is built around the Embedded Multi-die Interconnect Bridge (EMIB) technology, a high-bandwidth, low-latency, and low-power interconnect for multi-die silicon. EMIB-T stands for EMIB-TSV, and it supports high-bandwidth interfaces like HBM4 and Universal Chiplet Interconnect Express (UCIe). In other words, it’s an EMIB implementation that uses through-silicon vias (TSVs) to send the signal through the bridge instead of wrapping it around the bridge.
Figure 1 EMIB-T, which adds TSVs to the bridge, can ease the enablement of IP integration from other packaging designs. Source: Intel
Another way to see EMIB-T is as the combination of EMIB 2.5D and Foveros 3D packaging technologies, enabling high interconnect densities at die sizes beyond the reticle limit. Foveros is a 3D chip-stacking technology that significantly reduces bump pitches, increasing interconnect density.
All three major EDA powerhouses have joined the Intel Foundry Chiplet Alliance Program, which is intrinsically linked to EMIB-T technology. So, all three are working closely with Intel Foundry to develop advanced packaging workflows for EMIB-T. Start with Cadence’s solution, which helps streamline the integration of complex multi-chiplet architectures.
Next, Siemens EDA has announced the certification of a TSV-based reference workflow for EMIB-T. It supports detailed implementations and thermal analysis of the die, EMIB-T and package substrate, signal and power integrity analysis, and package assembly design kit (PADK)-driven verification.
Synopsys is also collaborating with Intel Foundry to develop an EDA workflow for EMIB-T advanced packaging technology using its 3DIC Compiler. In addition to the EDA trio, Intel Foundry has engaged other players for EMIB-T support. For instance, Keysight EDA is working closely with Intel Foundry to bolster the chiplet interoperability ecosystem.
Figure 2 The EMIB-T advanced packaging technology promises power, performance, and area (PPA) advantages for multi-die chiplet designs. Source: Intel
The EMIB-T silicon bridge technology is a major step toward harnessing advanced packaging for the rapidly emerging chiplets world. Intel Foundry Direct Connect highlighted how the Santa Clara, California-based chipmaker sees this advanced packaging technology in its future roadmaps. More technical details about EMIB-T are likely to emerge later in 2025.
Related Content
- Intel’s Embarrassment of Riches: Advanced Packaging
- EDA powerhouses align offerings with Intel’s 18A node
- Heterogeneous Integration Needs Tools, Business Models
- Intel bolsters EMIB packaging with EDA tools enablement
- Intel Foundry: We Are Listening and Learning from Our Customers
Single sideband generation, Part 2

The generation of single sideband (SSB) signals first came to my attention via ham radio back in the early 1960s. My call was then and still is WA2IBH. The best phonetic I had for that call sign was “WA2 I’ve Been Had” but that’s merely a side note.
Most voice communication through ham radio back then was done by amplitude modulation or AM signals. When you heard someone on the air with an AM signal, the voice quality was usually pretty good. As I recall, the E.F. Johnson Viking Ranger transmitter was thought of as having the very best audio quality. Of course, when you had many signals on the air at the same time with different carrier frequencies, heterodyne squeals were an unpleasant fact of life which often degraded the intelligibility of the person whom you wanted to hear.
Enter into service, SSB.
To demodulate an SSB signal, a receiver needs to reinsert a carrier signal to replace the carrier signal that the sender is NOT transmitting. The resultant sound is intelligible, but the idea of audio quality is a lost cause. A human voice in a demodulated SSB transmission is difficult to linguistically describe. Perhaps it might be thought of as listening to a cross between Donald Duck and Mickey Mouse. A big improvement, though, is that there are no heterodyne squeals. All you hear from multiple signals coming through at the same time are distorted but intelligible voices. This is a MAJOR improvement. However, the acceptance of SSB in ham radio was not universally enthusiastic.
Short-wave receivers produced up through the 1950s would have automatic gain control (AGC) built in, but the response times of the AGC function were not well suited to SSB service. Modern AGC designs have “fast attack and slow decay,” meaning that the receiver gain is reduced very quickly upon arrival of an overly strong signal and that receiver gain is subsequently restored slowly. Since SSB signals have amplitudes that are “spiky,” meaning high peak amplitude to average amplitude ratios, the AGC circuits of these older receivers could be “pumped” by SSB signals, even if the receiver were not tuned exactly to the SSB signal’s exact frequency. Reception of pretty much anything else could and often was very badly affected. Modern AGC control is much better.
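To make “fast attack and slow decay” concrete, here’s a minimal Python sketch of the behavior; the smoothing coefficients and the synthetic “spiky” envelope are illustrative assumptions, not values from any particular receiver:

import numpy as np

def agc_gain(envelope, attack=0.5, decay=0.0005, target=1.0):
    # Fast-attack, slow-decay AGC: the tracked signal level jumps up
    # quickly when a strong signal arrives and bleeds back down slowly.
    level = 1e-9
    gain = np.empty_like(envelope)
    for n, e in enumerate(envelope):
        coeff = attack if e > level else decay   # fast up, slow down
        level += coeff * (e - level)
        gain[n] = target / max(level, 1e-9)      # receiver gain tracks the inverse
    return gain

# A spiky, SSB-like envelope (high peak-to-average ratio) drives the AGC:
spiky = np.abs(np.random.default_rng(1).laplace(0.0, 0.3, 2000))
g = agc_gain(spiky)

Experimenting with the two coefficients shows how poorly chosen time constants let a spiky signal drag the gain up and down—the “pumping” described above.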
Many non-SSB users confronted by AGC pumping incorrectly assumed that SSB users were guilty of “splatter,” the descriptive term for the spectral spread of an overmodulated (> 100%) AM transmission. Derogatory terms such as “splatter sideband” and “silly sideband” were in common use.
Today, ham radio voice communication is dominated by SSB.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Single sideband generation
- SSB modulator covers HF band
- Ham radio in the 21st century
- Amateur and ham radio
Gate driver enables flexible EV inverter design

The STGAP4S galvanically isolated automotive gate driver from ST connects to an external MOSFET-based push-pull buffer to scale gate current capability. This architecture enables control of inverters with varying power ratings, including high-power designs with multiple parallel power switches.
The STGAP4S can deliver gate drive currents in the tens of amperes using small external MOSFETs and handles operating voltages up to 1200 V. It integrates an ADC, a flyback controller, programmable protections, and comprehensive diagnostics. The device is AEC-Q100 and ISO 26262 qualified, supporting system designs up to ASIL D.
Advanced diagnostics in the STGAP4S include self-checks for connections, gate-drive voltages, and internal circuitry such as desaturation and overcurrent detection. Faults are reported via SPI and two diagnostic pins. Protections like active Miller clamping, UVLO, OVLO, desaturation, overcurrent, and over-temperature detection ensure robust designs. Configurable thresholds, deadtime, and deglitch filters—programmable through SPI—enable flexibility while meeting ISO 26262 up to ASIL D.
Now in production, the STGAP4S is available in an SO-36W wide-body package, priced from $4.66 each in lots of 1000 units.
Why is the 2N3904 transistor still up after 60 years?

In the ever-dynamic and fast-moving world of semiconductors, why do some old transistors like the 2N3904 keep on going for decades? Bill Schweber takes a closer look at this remarkable longevity while analyzing why design engineers still prefer these tried-and-tested devices to reduce risk, cost, and sourcing hassles.
Read the full story at EDN’s sister publication, Planet Analog.
Related Content
- Goodbye, FR-4, we’re going to miss you
- Just give me a decent data sheet, please
- Can Analog’s Reality Chill Today’s Mega-Hype?
- Old vs. New Transistor Radio Exemplifies Advances
Nexperia shrinks Schottky footprint with CFP2-HP

Sixteen planar Schottky diodes for automotive and industrial use are now available from Nexperia in compact CFP2-HP packages. These clip-bonded FlatPower (CFP) packages offer a smaller, high-performance alternative to legacy SMA, SMB, and SMC packages, delivering improved heat dissipation while maintaining a compact 3.45-mm² footprint—a benefit particularly valuable in space-constrained automotive designs.
This portfolio extension includes eight industrial-grade parts, such as the PMEG6010EXD, and eight AEC-Q101 qualified automotive-grade parts, such as the PMEG4010EXD-Q. The Schottky diodes provide reverse voltages ranging from 20 V to 60 V and average forward currents of 1 A and 2 A.
Rated for junction temperatures up to 175°C, the CFP2-HP package combines an exposed heatsink and copper clip to enhance thermal performance in a small 2.65×1.3×0.68-mm (including leads) form factor. An optimized lead design ensures consistent solder joints suitable for automated optical inspection.
To learn more about Nexperia’s planar Schottky diodes in CFP2-HP packaging, click here.
SiC MOSFETs trim on-resistance and gate losses

Infineon’s 750-V CoolSiC G2 MOSFETs enhance system efficiency and power density in automotive and industrial power conversion. The second-generation G2 technology provides typical on-resistance values spanning 4 mΩ to 60 mΩ, supporting a wide range of applications such as onboard chargers, DC/DC converters, xEV auxiliaries, and solar inverters. The best-in-class RDS(on) of 4 mΩ is available in the top-side cooled Q-DPAK package, which delivers strong thermal performance and reliability.
G2 technology also offers low RDS(on) × Qoss and RDS(on) × Qfr values, reducing switching losses in both hard- and soft-switching topologies, with strong efficiency in hard-switching use cases. Lower gate charge enables faster switching and reduces gate drive losses, improving performance in high-frequency applications.
The 750-V MOSFETs provide a high VGS(th) of 4.5 V and a low QGD/QGS ratio, enhancing protection against parasitic turn-on. They also support gate voltages down to -11 V, offering extended design margins and improved compatibility with other devices.
Samples of the 750-V CoolSiC G2 MOSFETs in Q-DPAK packages, with RDS(on) values of 4 mΩ, 7 mΩ, 16 mΩ, 25 mΩ, and 60 mΩ, are now available for order. For more information, click here.
Module combines triband Wi-Fi 6E with BLE

Murata has begun mass production of the Type 2FY combo module featuring 2.4-GHz, 5-GHz, and 6-GHz Wi-Fi 6E and Bluetooth LE 5.4. Built on Infineon’s CYW55513 combo chipset, the Type 2FY dual-radio module combines a compact form factor with low power consumption to suit space-constrained IoT devices.
The Bluetooth subsystem of the Type 2FY wireless module—supporting BR, EDR, and LE—enables LE Audio, Advanced Audio Distribution Profile (A2DP), and Hands-Free Profile (HFP) for high-quality audio streaming. It delivers PHY data rates up to 3 Mbps for Bluetooth and 2 Mbps for Bluetooth LE. The WLAN subsystem complies with 802.11a/b/g/n/ac/ax standards and achieves PHY data rates up to 143 Mbps. It uses an SDIO 3.0 interface, while the Bluetooth section connects via a high-speed 4-wire UART and PCM for audio data.
Pin-compatible with Murata’s Type 1MW (CYW43455), the Type 2FY offers a drop-in upgrade that requires no hardware redesign. Its compact 7.9×7.3×1.1-mm form factor is made possible by Murata’s proprietary packaging technology. Although based on the Wi-Fi 6E standard, the module limits bandwidth to 20 MHz to reduce cost.
To learn more about the Type 2FY wireless combo module, click here.
Rectifiers meet automotive quality standards

Taiwan Semiconductor offers two series of high-voltage rectifiers, both manufactured to AEC-Q101 standards for reliable automotive performance. The fast-recovery HS1Q series provides a repetitive peak reverse voltage of 1200 V, a forward current of 1 A, and a reverse recovery time of 75 ns. The standard-recovery SxY series includes 1600-V rectifiers with forward currents of 1 A (S1Y) and 2 A (S2Y). Both series are also available in commercial-grade versions.
These devices operate within a junction temperature range of -40°C to +175°C and feature a low forward voltage drop and high surge current capability. They are well-suited for bootstrap, freewheeling, and desaturation functions in IGBT, MOSFET, and wide-bandgap gate drivers, particularly in electric vehicles and high-voltage battery systems.
The HS1Q and SxY rectifiers are available from distributors, including Mouser, Arrow Electronics, and DigiKey. Lead time for production quantities is 8 to 14 weeks. Production part approval process (PPAP) documentation is available.
EDA powerhouses align offerings with Intel’s 18A node

The EDA trio—Cadence, Siemens EDA, and Synopsys—was prominent at Intel Foundry Direct Connect 2025, lining up AI-driven analog and digital design flows for Intel’s 18A process node. The offerings also included IP ranging from SerDes to DDR5 to Universal Chiplet Interconnect Express (UCIe).
Next, these EDA outfits inked advanced packaging partnerships by offering workflows for Intel Foundry’s Embedded Multi-die Interconnect Bridge-T (EMIB-T) technology, which combines the benefits of EMIB 2.5D and Foveros 3D packaging technologies for high interconnect densities at die sizes beyond the reticle limit.
Let’s start with EDA flows.
Cadence has certified its RTL-to-GDS flow for the 18A process design kit (PDK), which includes the Cerebrus Intelligent Chip Explorer, Genus Synthesis solution, Innovus Implementation System, Quantus Extraction solution, Quantus Field Solver, Tempus Timing solution, and Pegasus Verification System.
Siemens EDA has certified its Calibre nmPlatform sign-off tool and its Solido SPICE and Analog FastSPICE (AFS) software tools for the 18A production PDK. Likewise, qualification of the Calibre nmPlatform and Solido Simulation Suite offerings for the Intel 18A-P process node is now underway. These EDA tools are also part of the Intel 14A-E process definition, with early runsets already available.
Figure 1 Synopsys unveiled an EDA and IP collaboration roadmap with Intel Foundry at the event.
IP and advanced packaging liaison
Cadence has announced a broad range of IPs for the 18A process node. That includes 112G extended long-reach SerDes, 64G MP PHY for PCIe 6.0, CXL 3.0, and 56G Ethernet, LPDDR5X/5 – 8533 Mbps with multi-standard support, and UCIe 1.0 16G for advanced packaging.
Besides IP offerings, Cadence is partnering with Intel Foundry to develop an advanced packaging workflow to leverage EMIB-T technology. This workflow will streamline the integration of complex multi-chiplet architectures while complying with standards.
Figure 2 Cadence is certifying EDA toolsets and IPs for Intel’s 18A process node.
Meanwhile, Siemens EDA has announced the certification of a reference workflow for EMIB-T technology using the through-silicon via (TSV) technique. It’s driven by the company’s Innovator3D IC solution, which provides a consolidated cockpit for constructing a digital twin. It also features a unified data model for design planning, prototyping, and predictive analysis of the complete package assembly.
Synopsys is also employing its 3DIC Compiler to facilitate a reference workflow that enables efficient EMIB-T designs with early bump and TSV planning and optimization. It also features automated UCIe and HBM routing for high quality of results and fast 3D heterogeneous integration. Here, the 3DIC Compiler facilitates feasibility and partitioning, prototyping and floorplanning, and multiphysics signoff in a single environment.
Related Content
- Intel: Gelsinger’s foundry gamble enters crunch
- Intel Financial Risks, Layoffs, Foundry Ambitions
- Intel 18A Advanced Packaging is Key to Tech Leadership
- Partners Applaud Intel Foundry’s Wider Ecosystem Approach
- Intel comes down to earth after CPUs and foundry business review
Do you use low-side current sensing?

Sensing the current going to a load is a critical and often mandatory requirement in many designs. While there are many contact and non-contact ways to accomplish this sensing—Hall-effect devices, current transformers (for AC only, of course), Rogowski coils, and fluxgate sensors, among others—the in-line resistor is among the most popular due to its small size, low cost, and overall convenience. The concept is simple: measure the voltage across an accurate, known resistor and use Ohm’s law to determine the current; this can be done with analog circuitry or digital computation.
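As a minimal sketch of the digital-computation path—with the resistor value and reading below purely hypothetical—the arithmetic is just Ohm’s law plus a dissipation check:

R_SENSE = 0.010     # ohms: hypothetical 10-milliohm sense resistor in series with the load
v_sense = 0.085     # volts: hypothetical measured drop across the resistor

i_load = v_sense / R_SENSE        # Ohm's law, I = V/R: 8.5 A here
p_diss = i_load ** 2 * R_SENSE    # power burned in the resistor: ~0.72 W

print(f"load current {i_load:.2f} A, resistor dissipation {p_diss:.2f} W")

The dissipation line is the classic tradeoff: a larger resistance yields more signal voltage to measure but wastes more power and drops more voltage across the sense element.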
Terminology
A quick terminology note: this inline resistor is almost always called a “shunt” resistor in application notes and data sheets, but that is a misnomer. The reason is that to “shunt” means to divert some of the current around the point being measured, and that was done in some current-measurement arrangements, especially for power measurement in the pre-electronics era. However, the sense resistor here is in series, so all the current flows through it.
This misleading terminology has become such an embedded part of our established verbiage that I won’t try to fight that battle. It’s similar to the constant misuse of the word “ground” for circuits that have absolutely no physical or figurative connection to Earth ground, and where “common” would be a more accurate and less confusing term.
Current sense topology
Using a sense resistor is only the first step in the current-sensing decision. The other part is topology: whether to use high-side sensing with a resistor placed between the source and the load, or low-side sensing where it is placed between the load and ground return, Figure 1.
Figure 1 The relative position of the sense resistor and the load between the power rail and ground is the only topological difference distinguishing high-side sensing (left) from low-side sensing (right), but there are significant circuit and system implications. Source: Microchip
Tradeoffs
As with so many engineering situations, designers must also consider the tradeoffs when choosing between low-side and high-side current sensing. The relative pros and cons of each topology are a good example of the ongoing challenge of engineering tradeoffs at the intersection of power-related and classic analog circuitry.
With the high-side approach, there’s good news, at least at first glance:
- The load is grounded (a major advantage and often a requirement).
- The load is not energized even if there is a short circuit at the power connection.
- The high current that flows if the load is shorted is easily detected.
On the other hand, the high-side downsides are not trivial:
- The common-mode voltage across the sense resistor can be very high (even dangerous) and requires special consideration; it may even need galvanic isolation.
- The sensed voltage across the resistor needs to be level-shifted down to the system operating voltage to be measured and used.
- In general, increased circuit complexity and cost.
Low-side sensing has its own attributes, again starting with its positive attributes:
- The voltage across the resistor is ground referenced, a major benefit.
- The common-mode voltage is low.
- It’s fairly easy to design into the circuit with a single supply.
But with the good news, there are unavoidable low-side complications:
- The load is no longer grounded, which can have serious system-level implications.
- The load can be activated by accidental short to ground.
- The sensing arrangement can cause ground loops.
- A high load current due to a short circuit will not be detected.
Designers’ choice
In looking at the analog side of schematic diagrams over the past few years (I know, it’s an unusual “hobby”), as well as seeing what others were doing in their design discussions, I assumed that most designers were opting for high-side sensing. They were doing so despite the challenges it brings with respect to common-mode voltage, the possible need for galvanic (ohmic) isolation, and other issues, especially because they wanted to keep the load grounded. Many vendors offer appropriate amplifiers, analog and digital isolation options, and subsystems, so the “pain” of using high-side sensing is greatly reduced while the benefits it offers are easily retained.
But maybe I am mistaken about designers’ choices. Perhaps the reason that there has been so much discussion of high-side sensing is not necessarily that it is more popular, but that it is more complicated and so needs more explanation of its details. In other words, had I confused the cause of all this attention with its effect?
My low-side misconception
What made me rethink the presumed absence of low-side sensing was the recent release of the TSC1801, a new amplifier from STMicroelectronics specifically targeting low-side sensing. It features high accuracy (0.5%) and high bandwidth (2.1 MHz), has a fixed gain of 20 V/V, and is suitable for bidirectional sensing, Figure 2. The accuracy and tracking of the two internal input resistors are critical to performance in this application category.
Figure 2 The block diagram of the TSC1801 low-side current-sensing amplifier is conventional, but it’s the performance that counts; the matching and tracking of the 1-kΩ input-resistor pair is critical. Source: STMicroelectronics
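To get a feel for what the fixed 20-V/V gain implies, here’s a hedged sizing sketch in Python; the ADC full-scale, peak current, and mid-supply biasing for bidirectional sensing are my assumptions, not figures from ST:

GAIN = 20.0       # V/V: the TSC1801's fixed gain, per the description above
V_ADC_FS = 3.3    # volts: hypothetical ADC full-scale
I_PEAK = 30.0     # amperes: hypothetical peak load current

# Assume bidirectional sensing biased at mid-supply, so each current
# direction gets half the ADC span (an assumption of this sketch):
r_sense = (V_ADC_FS / 2) / (GAIN * I_PEAK)
v_shunt = r_sense * I_PEAK
print(f"sense resistor ~{r_sense * 1e3:.2f} mOhm, full-scale drop ~{v_shunt * 1e3:.0f} mV")

A roughly 2.75-mΩ resistor keeps the full-scale shunt drop in the tens of millivolts, which is where the amplifier’s accuracy and matched input resistors earn their keep.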
It made me wonder: if only a few designers are choosing low-side sensing, and since it is relatively easy to implement, why would a part like this be needed when there are already many suitable amplifiers available?
The device also challenged another of my apparent misconceptions: that automotive designs won’t use low-side sensing because their loads must be grounded. If that’s the case, why does ST not only explicitly call out automotive applications in the part’s collateral (I know, application talk is easy to do) but also provide this part with the automotive AEC-Q100 qualification? Unlike marketing “talk,” that’s a relatively costly step in design and production.
So, my probably unanswerable question is this: what’s the split between use of high-side versus low-side sensing in designs? How does that split vary with end-application? Is some market-research firm willing to look into it for me?
If you want to know more about the two current-sensing options, there are many good sources available online (see References). While there is some overlap among them, as you’d expect, some offer additional interesting perspectives as well based on their products and expertise.
Have you ever had to defend your choice of one or the other in a design? What were the arguments for and against the approach you chose?
Related Content
- The self-powered current loop: Still a viable transducer-interface option
- E-fuses: warming up to higher-current applications
- Sub-milliohm resistors bring current-sense benefits but also challenges
- Use optical fiber as an isolated current sensor?
- Current-sense resistor brings two related and challenging tradeoffs
References (and there are many more!)
- All About Circuits, “Resistive Current Sensing: Low-Side vs. High-Side Sensing”
- Analog Devices, “AN-105: Current Sense Circuit Collection: Making Sense of Current”
- Microchip Technology, “High-side versus Low-side Current Sensing”
- Renesas, “Current Sensing with Low-Voltage Precision Op-Amps”
- Rohm, “Low-Side Current Sensing Circuit Design”
- Texas Instruments, “Precision, low-side current measurement”
- Texas Instruments, “An Engineer’s Guide to Current Sensing”
- Texas Instruments, “Low-Side Current Sense Circuit Integration”
Power Tips #140: Designing a data center power architecture with supply and processor rail-monitoring solutions

Machine intelligence enables a new era of productivity and is becoming an integral part of our lives and societies across many disciplines and functions. Machine intelligence relies on computing platforms that execute code, decipher data, and learn from trillions of data points in fractions of a second. The computing hardware for machine intelligence needs to be fast, extremely reliable, and powerful. Designers must combine solid design practices with self-diagnostics and continuous monitoring schemes to prevent or manage potential faults such as data corruption or communication errors in the system.
An essential element in such monitoring systems is the supervision and monitoring of power rails throughout the system. In this article, I’ll examine and describe some of the best practices for designing supply and processor rail-monitoring solutions in enterprise applications.
Understanding power architectures
Enterprise computing relies upon a complex power architecture that delivers energy from AC sources to every point of load in the system. Figure 1 is a high-level illustration of elements in a server rack.
Figure 1 High-level server rack diagram with distributed battery backup units (BBUs) and power supply units (PSUs) connected to a busbar that then distributes power throughout the rack. Source: Texas Instruments
A high-efficiency PSU—typically >91% for a titanium-grade design—converts AC power (208 V or 240 V) to 48 V, which is distributed throughout the rack. The power distribution board (PDB) then converts this DC power to various voltages, typically 12 V, 5 V, and 3.3 V, feeding subsystems including the motherboard, storage, network interface cards (NICs), switches, and system cooling. Each of these subsystems, in turn, has its own locally managed power architecture. A battery backup unit (BBU) maintains system power during any AC line disruptions.
Designing for durability
Each subsystem requires a reliable power design and monitoring. Let’s examine some of these subsystems further.
The PSU
PSUs have several types of monitoring to ensure reliable operation and delivery. They monitor the AC mains and the output voltage while also detecting internal temperature, over- and under-voltage conditions, and short circuits.
Server designs also require N+1 redundancy: “N” represents the minimum number of necessary PSUs to meet server power needs. An additional PSU (“+1”) is available if one of the other PSUs encounters a temporary or permanent fault or failure.
The PDB
As mentioned earlier, the PDB converts a 48-V input to several DC rails, including 12 V, 5 V, and 3.3 V. Although comparators with simple shunt references can be used to monitor each of these rails for overvoltage and undervoltage conditions, modern-day voltage supervisors offer a small footprint and ease of design and provide additional benefits such as hysteresis and input-sense delay for noise immunity, an adjustable output delay to avoid false triggers during power up, and higher accuracy for the highest detection reliability.
Many new voltage supervisors, such as the Texas Instruments (TI) TPS3760, are rated for voltages as high as 70 V, and can monitor 48 V and other bus voltages directly without needing a low-dropout regulator or dedicated power rail. In addition to real-time supervision, advanced monitoring integrated circuits can provide telemetry data on the most vital rail voltages to enable predictive maintenance and historical fault analysis, significantly reducing system downtime.
Another design consideration is early power failure detection. These circuits monitor specific supply rails for sudden voltage drops and alert the host or processor to take swift action in anticipation of a power loss. A high-speed and precise undervoltage supervisor performs this function. Figure 2 illustrates an example of this type of design and its timing diagram.
Figure 2 A voltage supervisor example with a timing diagram, monitoring the 0.85 to 6.0 V supply rail for sudden voltage drops to take action in the event of a power loss. Source: Texas Instruments
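One way to size such an early-warning scheme is to estimate how much reaction time the bulk capacitance buys between the warn threshold and the minimum operating voltage. This Python sketch uses entirely hypothetical numbers for a 12-V rail; none of them come from the article:

C_BULK = 2000e-6   # farads: hypothetical bulk capacitance on the rail
V_WARN = 10.8      # volts: hypothetical undervoltage warn threshold
V_MIN = 9.0        # volts: hypothetical minimum voltage the load tolerates
P_LOAD = 20.0      # watts: hypothetical load power during the emergency save

# Energy stored between the two thresholds: E = C * (V_warn^2 - V_min^2) / 2
energy = 0.5 * C_BULK * (V_WARN ** 2 - V_MIN ** 2)
t_react = energy / P_LOAD   # seconds available for the host to act
print(f"{energy * 1e3:.1f} mJ available -> {t_react * 1e3:.2f} ms to react")

Under these assumptions the host gets roughly 1.8 ms, which is why the supervisor’s detection speed and output-delay settings matter as much as its threshold accuracy.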
The motherboard
Motherboard power rails present designers with a different set of challenges, which I’ll examine in more detail in this section.
Processor rail monitoring
Modern processors are very sensitive to variations in their power supply rails. There are many reasons for this, but it is mostly because these processors operate at voltages as low as 0.7 V with reduced tolerance for voltage fluctuations and utilize features such as dynamic voltage and frequency scaling.
Consequently, the processors require high-precision window voltage supervisors. Window supervisors monitor the supply voltage for both overvoltage and undervoltage conditions. Devices targeted for these applications, such as TI’s TPS389006, have an accuracy of ±6 mV. Designers can adjust the glitch filter up to 650 ns through the I2C registers.
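To see why ±6 mV of threshold accuracy matters at sub-volt rails, consider this margin-budget sketch in Python; the nominal rail voltage and its ±3% tolerance are my assumptions:

V_NOM = 0.75   # volts: hypothetical processor core rail
TOL = 0.03     # hypothetical +/-3% rail tolerance
ACC = 0.006    # volts: +/-6 mV supervisor accuracy, per the TPS389006 figure above

uv, ov = V_NOM * (1 - TOL), V_NOM * (1 + TOL)
# The supervisor's own error band eats into the window from both sides:
usable = (ov - ACC) - (uv + ACC)
print(f"window {uv * 1e3:.1f}-{ov * 1e3:.1f} mV, usable band {usable * 1e3:.1f} mV")

Of the 45-mV window in this example, 12 mV is consumed by supervisor error alone; a garden-variety ±1.5% supervisor (about ±11 mV here) would consume half of the window.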
Another essential aspect of power-rail design is the system’s ability to maintain stability during rapid load transients. Modern processors can shift from idle to full load in microseconds, causing sharp voltage droops or overshoots if the power supply and monitoring systems are not designed with fast loop responses and the appropriate output capacitance.
Proper power-up and power-down supply sequencing is also essential for the motherboard and processor. Sequencing ensures proper system initialization—for instance, a processor may require that the memory controller be operational before executing instructions. Sequencing also prevents large inrush currents and voltage spikes during power-up. During power-down, sequencing maintains data integrity by giving memory and storage devices enough time to save data or complete operations before losing power.
Figure 3 provides a design example for the monitoring and sequencing of the supply rails.
Figure 3 Supply-rail monitoring and sequencing examples for proper system initialization. Source: Texas Instruments
Finally, managing inrush current is vital for systems with hot-swappable components to avoid tripping circuit protection or destabilizing the power bus. Hot-swap controllers equipped with integrated current limiting and fault detection ensure smooth insertion and removal without disrupting other active subsystems.
Future trends
The enterprise industry is poised to transition to a 400-VDC power-distribution system, which would increase efficiency by eliminating redundant power-conversion stages and I²R losses and reduce copper usage and costs. Such high-voltage systems will demand even more capable rail monitoring, with faster fault detection and isolation, to maintain safety and system uptime. A new generation of high-voltage monitoring solutions is emerging to address future design needs in this space.
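A back-of-the-envelope comparison shows why the higher bus voltage helps; the rack power and busbar resistance in this sketch are hypothetical:

P_RACK = 120e3   # watts: hypothetical rack power
R_BUS = 1e-3     # ohms: hypothetical busbar resistance, same conductor in both cases

for v_bus in (48.0, 400.0):
    i_bus = P_RACK / v_bus        # current the busbar must carry
    p_loss = i_bus ** 2 * R_BUS   # I^2R conduction loss
    print(f"{v_bus:5.0f} V bus: {i_bus:6.0f} A, {p_loss / 1e3:5.2f} kW conduction loss")

For the same conductor, moving from 48 V to 400 V cuts the current by 8.3x and the I²R loss by about 69x—or, equivalently, allows far less copper for the same loss.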
Compelling power architectures are essential for ensuring reliable and uninterrupted operation in enterprise systems. Combining solid power-design practices with real-time monitoring and early fault detection helps prevent unexpected failures and protects critical workloads. As system complexity grows and power architectures evolve, especially with the shift toward higher voltage distribution, careful planning and rail supervision will continue playing a role in delivering safe and efficient performance.
Masoud Beheshti leads application engineering and marketing for Linear Power at Texas Instruments. He brings extensive experience in power management, having held roles in system engineering, product line management, and marketing and applications leadership. Masoud holds a bachelor’s degree in electrical engineering from Ryerson University and an MBA with concentrations in marketing and finance from Southern Methodist University.
Related Content
- Power Tips #139: How to simplify AC/DC flyback design with a self-biased converter
- Data center power meets rising energy demands amid AI boom
- Data center next generation power supply solutions for improved efficiency
- Optimize data-center power delivery architecture
Clearing out the data clutter

I’ve been working on an article about vacuum tube triodes. Yes, they’re still being used in the manufacture of high-end audio equipment and in musical instrument amplifiers. A triode has three electrodes: a plate (in American parlance, “anode” in the UK), a control grid, and a cathode.
Figure 1 contains a typical graph of plate currents versus plate voltages for different grid voltages, with grid voltages labeled on each curve as 0, -0.5, -1.0…-5.0. All voltages are with respect to the cathode. Pretty clear, right?
Figure 1 A typical graph of triode characteristics from a manufacturer’s datasheet.
Part of the article involves measuring triode characteristics and constructing graphs in Excel which display the measured data. Figure 2 shows the first attempt to present this graphically.
Figure 2 A simple display of the acquired data; the colors shown are defaults selected by Excel.
The data for the left-most curve was entered first; the one immediately to the right next, and so on. Excel assigns curve colors in the order shown by default. There doesn’t seem to be any order to the progression of colors that might aid in scanning through the LEGEND table on the right to find a curve’s grid-voltage name.
And some of the colors are so similar that it can be challenging to find the right association. There’s also no easy way to label the LEGEND table to indicate the type of information it contains other than adding a text box to the chart. But if you reposition the chart, the text box must be moved separately.
There must be a better way to convey this information to the reader. Suppose the colors could be changed to a more recognizable progression, such as the visible-spectrum-related order of the color bands on a resistor that indicate its resistance. Furthermore, what if this reordering could be automated with a keyboard click for any chart? We’re talking Excel macros, right? We could manually make the change for one graph and record the steps as a macro. But we’d have to know how many curves a particular graph had in order to use it. Hmmm.
Ok, let’s instead create a macro using the subroutine “sub” feature in Excel’s built-in Visual Basic for Applications (VBA) code. The code should be easily able to handle a chart with any number of curves. Now, I’ve worked with VBA, but I’m no expert. So, when I come across a feature I need but I’m not familiar with, I have to do an online search, find a reference that I can understand, and apply and test it. Rinse and repeat. This is tedious. Is there a work-around for a time-crunched, lazy guy like me? Turns out the answer is yes: AI.
I asked one of these well-known beasts how I might automatically re-order the colors assigned to Excel chart curves. The code it returned in reply worked the first time and came with comments! I’ve made a few changes and added some comments of my own to produce the code listed in Appendix 1. Clicking to select the chart shown in Figure 2 and running this code produces the results seen in Figure 3.
Figure 3 The curve colors progress in the same order as the resistance color-code bands on resistors, and backgrounds were colored for better visibility of the yellow and white curves.
In addition to reordering the colors, the code has thickened the curves and added a background color of light grey for better visibility. All the code is commented, and the background and curve thicknesses can be easily modified. You’ll notice that there are eleven curves but only ten colors, so the -5.0-volt curve is the same color as the 0.0-volt curve; the colors automatically repeat.
But one of the features of the code is its ability to change what’s called the “dashstyle” of the curves each time the colors repeat. I believe that the code is adequately commented to allow a user to locate and modify or eliminate this behavior if desired.
Labeling the curves
I was happy with this until I looked back at the chart in Figure 1. Why refer to a legend on the side of the graph if I could put the grid-voltage curve names right next to the curves themselves? I went back to the AI engine to ask for help. This time, I got code that didn’t work the first time. But that didn’t stop me; when I described the problems I was seeing specifically, I got debugging help! Clicking to select the chart rendered in Figure 2, the Appendix 2 code produced the graph seen in Figure 4.
Figure 4 Each curve’s grid-voltage name is placed next to the end of the curve.
Maybe you’d like to combine effects by running the Appendix 1 code on Figure 4’s chart to produce that seen in Figure 5.
Figure 5 The Appendix 1 and Appendix 2 codes are run sequentially: first the code which appends the curve names near the ends of the curves, and then the code which reorders the curve colors.
There’s no longer any need for the legend box, so I manually deleted it after running the codes.
I found the two VBA programs presented in the first two Appendices to provide a simple, quick, and automatic means to enhance the readability of basic graphs in Excel. I’m keeping them in my Excel toolbox. For those unfamiliar with how to use VBA, Appendix 3 should prove helpful.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Tell us your Tale
- What is the biggest mistake you have made as an engineer?
- Visualizing Data with Arduino
- PUT a reset in its place
APPENDIX 1
Code to specify the colors assigned to curves on a chart. Select a chart and run the macro associated with this code.
Sub ApplySpectrumColors()
    Dim cht As Chart, series As series, i As Integer
    Dim colors_ As Variant, line_type As Variant, the_weight As Variant
    ' Define the spectrum colors as RGB values
    ' (see https://www.teoalida.com/wordpress/wp-content/uploads/Excel-colors-with-RGB-values-by-Teoalida.png)
    colors_ = Array(RGB(32, 0, 0), RGB(160, 140, 0), RGB(255, 128, 128), RGB(255, 192, 128), RGB(255, 255, 0), RGB(0, 192, 0), _
               RGB(96, 255, 255), RGB(176, 96, 255), RGB(211, 211, 211), RGB(255, 255, 255))
    ' Define the line types. See https://learn.microsoft.com/en-us/office/vba/api/office.msolinedashstyle
    line_type = Array(msoLineSolid, msoLineLongDash, msoLineDashDot, msoLineSquareDot)
    ' Define line_type weights (thicknesses)
    the_weight = Array(3, 3, 4, 4)
    ' Reference the active chart
    On Error Resume Next
    Set cht = ActiveChart
    On Error GoTo 0
    If cht Is Nothing Then
        MsgBox "Please select a chart before running this script.", vbExclamation
        Exit Sub
    End If
    ' Loop through each series and assign spectrum colors, line styles, and weights
    i = 0
    For Each series In cht.SeriesCollection
        series.Format.Line.ForeColor.RGB = colors_(i Mod (UBound(colors_) + 1))
        series.Format.Line.DashStyle = line_type(Int(i / (UBound(colors_) + 1)) Mod (UBound(line_type) + 1))
        series.Format.Line.Weight = the_weight(Int(i / (UBound(colors_) + 1)) Mod (UBound(the_weight) + 1))
        i = i + 1
    Next series
    ' Change plot area background color
    cht.PlotArea.Format.Fill.ForeColor.RGB = RGB(236, 236, 236)
    ' Change legend background color
    cht.Legend.Format.Fill.ForeColor.RGB = RGB(236, 236, 236)
    MsgBox "Spectrum colors applied successfully!", vbInformation
End Sub

APPENDIX 2
Code to place the names of each curve next to that curve on a chart. Select a chart and run the macro associated with this code.
Sub LabelCurvesWithStyle()
    Dim cht As Chart, srs As series, pt As Point, i As Integer, seriesCount As Integer
    Dim validSeriesCount As Integer, lastValue As Variant
    On Error Resume Next
    Set cht = ActiveChart ' Get the active chart
    On Error GoTo 0
    If cht Is Nothing Then ' If no chart is selected
        MsgBox "No chart is selected. Click on a chart and try again.", vbExclamation, "Error"
        Exit Sub
    End If
    seriesCount = cht.SeriesCollection.Count ' Number of series in the chart
    validSeriesCount = 0
    ' Loop through each series in the chart
    For Each srs In cht.SeriesCollection
        If srs.Points.Count > 0 Then
            i = srs.Points.Count ' Last point in the series
            lastValue = srs.Values(i) ' Get the last Y value
            ' Check if last value is numeric before labeling
            If IsNumeric(lastValue) And Not IsEmpty(lastValue) Then
                Set pt = srs.Points(i)
                ' Add a label
                pt.HasDataLabel = True
                pt.DataLabel.Text = srs.Name
                pt.DataLabel.Position = xlLabelPositionRight
                ' For other label positions, see
                ' https://learn.microsoft.com/en-us/office/vba/api/Excel.XlDataLabelPosition
                With pt.DataLabel.Font ' Set font styling
                    .Name = "Arial" ' Font type
                    .Size = 10 ' Font size
                    .Bold = True ' Make text bold
                    .Color = RGB(255, 0, 0) ' Font color (red)
                    '.Italic = True ' Uncomment for italic text
                End With
                validSeriesCount = validSeriesCount + 1
            Else
                MsgBox ("Series labeled " & srs.Name & " has non-numeric data.")
            End If
        End If
    Next srs
    If validSeriesCount < seriesCount Or validSeriesCount = 0 Then
        MsgBox "Non-numeric data found in at least one series. No labels applied."
    End If
End Sub

APPENDIX 3
For those unfamiliar with Excel’s VBA, this AI-generated tutorial should be helpful.
Firmware development: Redefining root cause analysis with AI

As semiconductor devices become smaller and more complex, the product development lifecycle grows increasingly intricate. From early builds to pre-qualification testing, firmware development and validation teams face escalating challenges in ensuring quality and performance. As a result, traditional root cause analysis (RCA) methods—manual checks, static rules, or post-mortem analysis—struggle to keep up with the complexity and velocity of modern firmware releases.
However, artificial intelligence (AI) and machine learning (ML) are changing the game. These technologies empower firmware teams to detect, diagnose, and prevent failures at scale—across performance testing, qualification cycles, and system integration—ushering in a new era of intelligent RCA.
But first let’s take a closer look at RCA challenges in firmware development.
RCA challenges in firmware development
RCA in firmware development, particularly for SSDs, is like finding a needle in a moving haystack. Engineers face several key challenges:
- Vast amounts of telemetry and debug logs: Firmware systems generate massive telemetry and debug logs. Manually sifting through this data to identify the root cause can be time-consuming, delaying development cycles.
- Elusive, intermittent failures: Firmware failures can be sporadic and difficult to reproduce, especially under high-stress conditions like heavy I/O workloads, making diagnosis even harder.
- Invisible code behavior changes: Minor firmware updates can introduce subtle issues that conventional diagnostics miss, complicating the identification of new bugs.
- Noisy, inconsistent defect signals: Defects often produce erratic and inconsistent signals, making it difficult to pinpoint the true source of failure without extensive testing.
These issues impact product timelines and customer qualifications. AI, rather than replacing engineers, enhances their ability to detect anomalies, reduce troubleshooting time, and improve the overall RCA process, speeding up diagnosis and uncovering hidden issues.
AI-driven approaches in RCA
Below are the AI techniques that streamline the RCA process, speeding up identification of root causes and improving firmware reliability.
- Anomaly detection: Unsupervised models like autoencoders and isolation forests detect abnormal patterns in real-time without requiring labeled failure data. These models learn normal behavior and flag deviations, helping to identify potential issues—like performance degradation—early in the process before they escalate.
- Predictive modeling: Machine learning algorithms such as XGBoost and neural networks analyze trends in historical test and telemetry data to predict future issues, like bugs or regressions. These models allow engineers to act proactively, preventing failures by predicting them before they occur.
- Correlation and pattern discovery: AI connects data across sources like test logs, code commits, and environmental factors to identify hidden relationships. It can pinpoint the root cause of issues faster by correlating failures with specific code changes, configurations, or conditions that traditional methods might overlook.
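To make the predictive-modeling bullet concrete, here is a minimal, hypothetical sketch in Python. The file name, feature columns, and risk threshold are illustrative assumptions rather than a real workflow; it simply shows the shape of training a gradient-boosted classifier on per-build test history and scoring new builds for regression risk:

import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical history: one row per firmware build, with test metrics and a
# label marking whether the build later failed qualification.
df = pd.read_csv("build_history.csv")
features = ["loc_changed", "mean_latency_us", "p99_latency_us", "throughput_mbps"]
X, y = df[features], df["regressed"]

X_train, X_test, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# Score incoming builds before they reach qualification; flag the risky ones.
risk = model.predict_proba(X_test)[:, 1]
print("Builds flagged as high regression risk:", (risk > 0.8).sum())

In practice, the feature set would come from the team's own telemetry pipeline, and the flag threshold would be tuned against historical escape rates.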
AI’s role in firmware validation
In firmware development—especially in NVMe devices and embedded systems—code changes can directly impact product stability and customer satisfaction. AI is now playing a critical role in this space:
- Monitoring I/O behavior: ML tracks latency, power, and throughput to flag regressions across firmware builds.
- Failure attribution: Historical test and return data are mined to correlate firmware changes with observed anomalies.
- Simulation: Generative models stress-test edge cases—such as power loss scenarios—to uncover potential flaws earlier in the cycle.
In an SSD development project, a firmware update intended to optimize memory management can cause subtle write workload failures during system integration. Traditional quality assurance (QA) can miss these failures, as they are intermittent and appear only under specific conditions.
However, an Isolation Forest, an unsupervised machine learning model, can be used to monitor real-time system behavior. By analyzing telemetry data, including latency and throughput, the model detects timing anomalies tied to the firmware’s background garbage collection process. The Isolation Forest identifies deviations from normal patterns, pinpointing issues such as delays introduced by changes in the garbage collection algorithm.
With these insights, engineers can root-cause and fix the issue within days, avoiding qualification delays. Without AI-based detection, the issue could easily go unnoticed, causing significant delays and customer qualification risks.
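As a hedged illustration of that scenario, using scikit-learn's IsolationForest and synthetic telemetry numbers invented for this example (not real drive data), the detection step might look like this:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline telemetry from a known-good build: [latency_us, throughput_mbps].
baseline = np.column_stack([rng.normal(120, 10, 5000), rng.normal(900, 25, 5000)])
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Telemetry from the new build; garbage-collection stalls appear as
# occasional latency spikes (injected here purely for illustration).
new_run = np.column_stack([rng.normal(120, 10, 1000), rng.normal(900, 25, 1000)])
new_run[::100, 0] += 400            # periodic ~400 us latency stalls
flags = model.predict(new_run)      # -1 = anomaly, +1 = normal
print(f"{(flags == -1).sum()} anomalous samples out of {len(flags)}")

The model never needs labeled failures; it learns the known-good distribution and flags the stalls as outliers, which is what makes this approach practical for intermittent, hard-to-reproduce defects.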
Benefits of AI-powered RCA
First and foremost, it speeds up the process by cutting debug time from weeks to hours. AI-powered RCA also improves accuracy on multi-variable issues. It scales, monitoring thousands of signals and logs continuously. Finally, it enables predictive action before issues reach customers.
Below is an outline of future directions for AI in RCA methods:
- Explainable AI for building trust in ML decisions.
- Multi-modal models for unifying logs, telemetry, images, and notes.
- Digital twins to simulate firmware behavior under varied scenarios.
AI is no longer optional; it’s becoming central to firmware development. Accordingly, root cause analysis is evolving into a fast, intelligent, and predictive practice. As firmware complexity grows, those who harness AI will lead in reliability and time-to-market.
For engineers, adopting AI isn’t about surrendering control—it’s about unlocking superhuman diagnostic capability.
Karan Puniani is a staff test engineer at Micron Technology.
Related Content
- 5 Tips for speeding firmware development
- Development tool evolution – hardware/firmware
- Use virtual machines to ease firmware development
- Will Generative AI Help or Harm Embedded Software Developers?
- No code: Passing Fad or Gaining Adoption for Embedded Development?
The post Firmware development: Redefining root cause analysis with AI appeared first on EDN.
LabVIEW gets an AI makeover with Nigel’s launch

Artificial intelligence (AI) makes a foray into the test and measurement world. An AI assistant trained across the NI software suite and built on Emerson’s secure cloud network can analyze code, offer suggestions for changes, and field users’ questions, helping them apply the correct tools across nearly 700 functions more quickly.
The Nigel AI Advisor will be integrated into LabVIEW and TestStand by July 2025 and will be available in most existing licenses at no extra cost. LabVIEW, a graphical programming environment, is primarily used by engineers for data acquisition, instrument control, and industrial automation. On the other hand, TestStand is management software that automates, accelerates, and standardizes the test process.
Figure 1 TestStand users can ask questions and get answers inside the window on the right. Source: Emerson
Austin Hill, section manager of test software at Emerson, acknowledges that NI engineers have been working on integrating AI and machine learning into the company’s software for many years. “We are the software company, so we have been making critical investments in the capabilities of our software,” he said during an interview with EDN. “Our big focus this year is integration with AI, specifically generative AI.”
Nigel AI Advisor—unveiled during the NI Connect 2025 conference—promises users a step change in productivity. “This is just the beginning,” Hill added. “Nigel will keep getting smarter and better in years to come.” He told EDN that NI engineers are trying to thread the needle on how and where AI will change our industry.
“Besides large language models (LLMs), there are other pieces that we are trying to work around,” Hill said. “We have built hooks in LabVIEW and TestStand that allow us to update frequently, which enables us to work with the next generation of GPUs and compute.”
Figure 2 Here is what temperature monitoring looks like in LabVIEW with Nigel’s aid. Source: Emerson
Regarding Nigel’s capabilities, Hill calls it an AI experience that spans the software platform. “It’s going to help users onboard much faster, especially the new ones,” he said. “It’ll help users find out the functions they need, the suitable examples, and where they are getting errors.”
As users edit sequences or look at test reports, they can have Nigel in the window to ask questions. “That can turn a novice LabVIEW user into an expert LabVIEW user much faster,” Hill added.
Emerson will demonstrate Nigel AI capabilities at the NI Connect conference, which will be held from 28 to 30 April 2025 in Fort Worth, Texas.
Related Content
- The evolution of LabView
- LabVIEW Creator Talks its Past and Future
- National Instruments Releases Free Editions of LabVIEW
- NI’s Kevin Schultz: Embracing AI for Both Test and Design
- LabVIEW 7.1: Graphical Real-Time Development Leaps Ahead
The post LabVIEW gets an AI makeover with Nigel’s launch appeared first on EDN.
Precision programmable current sink

The TL431 has been around for nearly 50 years. During those decades, while primarily marketed as a precision adjustable shunt regulator, this legacy device also found its way into alternative applications. These include voltage comparators, audio amplifiers, current sources, overvoltage protectors, etc. Sadly, in almost every example from this mighty menagerie of circuits, the 431’s “anode” pin sinks to the same lowly fate. It gets grounded.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Current sink regulation
The design idea presented here offers that poor persecuted pin a more buoyant role to play, Figure 1.
Figure 1 The floated anode serves as a sense pin for active current sink regulation.
The Figure 1 block diagram shows how the 431 works at a conceptual level where:
Sink current = Is = (Vc − 2.5 V)/R1 = 0 to 1/R1 as Vc = 2.5 V to 3.5 V
Vs < 37 V, Is < 100 mA, Is(Vs − R1Is) < 500 mW @ 50°C ambient
A series connection adds the internal 2.5-V precision reference to the external voltage input on the ANODE pin. The op-amp subtracts this sum from the voltage input on the REF pin, then amplifies and applies the difference to the pass transistor. If the difference is positive (sum < REF), the transistor turns on and shunts current from CATHODE to ANODE. Otherwise (sum > REF), it turns off.
If the 431 is connected in the traditional fashion (REF connected to CATHODE and ANODE grounded), the scheme works like a shunt voltage regulator should, forcing CATHODE to a resistor-string-programmed multiple of the internal 2.5-V reference voltage. But what happens if the REF pin is connected to a constant control voltage (Vc > 2.5 V) and the ANODE, instead of being grounded, floats freely on current-sensing resistor R1?
What happens is the current gets regulated instead of the voltage. Because Vc is fixed and can’t be pulled down to make REF = ANODE + 2.5, ANODE must be pulled up until equality is achieved. For this to happen:
Is = (Vc − 2.5 V)/R1
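For a quick sanity check of that relationship, here is a trivial Python sketch; the function name and component values are illustrative, not part of the design:

def sink_current(vc, r1):
    """Is = (Vc - 2.5 V)/R1, valid for Vc between 2.5 V and 3.5 V."""
    return max(0.0, vc - 2.5) / r1

# Example: R1 = 10 ohms spans 0 to 100 mA as Vc sweeps 2.5 V to 3.5 V.
for vc in (2.5, 3.0, 3.5):
    print(f"Vc = {vc} V -> Is = {sink_current(vc, 10) * 1000:.0f} mA")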
Constant current sink regulation of 1/R1
Figure 2 illustrates how a fixed voltage divider might be used (assuming a 5-V rail that’s accurate enough) to use a floated-anode Z1 to regulate a constant sink current of:
Is = (3.5 V − 2.5 V)/R1 = 1/R1
It also illustrates adding a booster transistor Q1 to accommodate applications needing current or power beyond Z1’s modest TO92ish limits. Notice that Z1’s accuracy will be unimpaired because whatever fraction of Is that Q1 causes to bypass Z1 is summed back in before passing through R1.
Figure 2 Booster transistor Q1 can handle current and voltage beyond 431 max Ic and dissipation limits, while the 3.5-V voltage divider programs a constant Is.
Programming sink current with DAC
Figure 3 shows how Is might be digitally programmed with a 2.5-V DAC signal. Note the DAC signal is inverted (Is = max when Vx = 0) while Z2 provides the necessary level shift:
Is = (2.5 V − Vx)/(2.5R1) = 0 to 1/R1 as Vx = 2.5 V to 0
Figure 3 DAC control of Is, the DAC signal is inverted, while Z2 provides the necessary level shift.
Programming sink current to Df/R1 with DAC
Figure 4 shows an alternate programming method using PWM with Is = Df /R1 where Df equals the 0 to 1 (0% to 100%) PWM duty factor:
Is = (2.5Df × R2/R3)/R1 as Df = 0 to 1
Df = IsR1R3/(2.5R2)
which, when R3 = 2.5R2, reduces to Df = IsR1 (i.e., Is = Df/R1)
Figure 4 PWM control of Is, where Is is the ratio of the PWM duty factor and R1.
An 8-bit PWM resolution and a 10-kHz PWM frequency are assumed. The R2C1 single-pole ripple filter has a time constant of approximately 64x the PWM period (100 µs at 10 kHz) for 1-lsb peak-to-peak max ripple and 38-ms max settling time.
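Those two numbers are easy to verify. Here is a short Python check, a back-of-envelope model assuming an ideal 0-to-2.5-V, 50%-duty square wave into a single-pole RC filter; the results land close to the published figures:

import math

f_pwm = 10e3                      # assumed PWM frequency
tau = 64 / f_pwm                  # R2*C1 = 64 PWM periods = 6.4 ms
fund_pp = 4 * 2.5 / math.pi       # p-p fundamental of a 0-2.5 V, 50%-duty square wave
ripple_pp = fund_pp / (2 * math.pi * f_pwm * tau)   # single-pole attenuation for f >> fc
lsb = 2.5 / 256                   # 1 lsb of 8 bits over the 2.5-V span
settle = tau * math.log(2 ** 8)   # time to settle within 1 lsb of the final value

print(f"ripple ~ {ripple_pp * 1e3:.1f} mV p-p vs 1 lsb = {lsb * 1e3:.1f} mV")
print(f"settling ~ {settle * 1e3:.0f} ms")   # ~35 ms, in line with the quoted ~38 ms max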
Speeding up settling time
One shortcoming of Figure 4 is the long settling time (~40 ms to 8 bits) imposed by the single-pole R2C1 ripple filter. If an extra resistor and capacitor won’t break the bank, that can be sped up by a factor of about 5 (~8 ms) with Figure 5’s R5C2 providing 2nd-order analog-subtraction filtration.
Figure 5 The addition of R5 and C2 provides faster settling times with a 2nd-order ripple filter.
Programmable current sink application circuit
Finally, Figure 6 shows the Figure 4 circuit combined with an inexpensive 24-W AC adapter and a 5-V regulator to power a small digital testing system. Be sure to adequately heatsink Q1.
Figure 6 The combined current sink and small system power supply where the max Is is 1 A, Max Vs is 20 V, and Is = Df.
Thanks for the implicit suggestion, Ashutosh!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- TL431 Model
- PWM-programmed LM317 constant current source
- A negative current source with PWM input and LM337 output
- A high-performance current source
- VCO using the TL431 reference
The post Precision programmable current sink appeared first on EDN.
Tell us your Tale!

Dear EDN Readers,
We’re thrilled to announce the successful expansion of our Design Ideas section. Thanks to your support, we now publish two new DIs every week!
We’re also excited to revitalize our Tales from the Cube column. This platform allows engineers to share their unique experiences in solving challenging design issues—whether they encountered a product failure, dealt with troublesome equipment, or tackled a persistent problem on a personal project.
We aim to regularly update this column with fresh content. With your contributions, we hope to gradually breathe new life into Tales from the Cube with new articles to feature in our Fun Friday newsletter.
Here are some FAQs to get you started:
What are Tales from the Cube articles?
Tales from the Cube are generally brief, focused narratives where engineers outline how they arrive at a solution to a specific design challenge or an innovative approach to a design task. This can relate to a personal project, a contract, or a corporate design dilemma. Here are some basic guidelines that might help you as you write out your article:
- 600-1000 words
- 1-2 images
- One-sentence summary of your story that goes along with the title
- A short author bio
What technology areas are allowed?
We’re open to a wide range of technology areas, including (but not limited to) analog and digital circuits, RF and microwave, programmable logic, hardware-design languages, systems, programming tips, utilities, test equipment and techniques, power, and more. If you’re not sure about your topic, just email us at editors@aspencore.com.
Do I get paid for a Tales from the Cube article?
Yes! Monetary compensation for each Tales from the Cube article is $200 USD, not enough to keep the lights on, but it does offer you an avenue to tell your unique engineering story to tens of thousands of engineers globally and engage in some interesting conversations about your engineering remedy.
How can I submit a Tales from the Cube article?
Feel free to email us at editors@aspencore.com with your questions, thoughts, or a completed article. So, Tell us your Tale!
The post Tell us your Tale! appeared first on EDN.
Revealing the infrasonic underworld cheaply, Part 1

Editor’s Note:
Part 1 of this DI uses an electret mic to detect infrasound. It starts with a basic equalization circuit validated with a DIY test fixture and simulations, and ends with a deeper analysis of the circuit’s real response.
Part 2 includes refinements to make the circuit more usable while extending its detectable spectrum with an additional technique that allows us to hear the infrasonic signals.
Although electret microphones are ubiquitous, they are more versatile than might be expected. With some extra equalization, their frequency responses can be made to range from earthquakes to bats. While this Design Idea (DI) ignores those furry mammals, it does show how to get a reasonably flat response down to way below 1 Hz.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Electrets aren’t only used as audio pickups. For decades, they have been employed in security systems to detect unexpected changes of air pressure within rooms, while more recently they can be found in vapes as suck-sensors (or, more technically, “draw sensors”, according to Brian Dipert’s recent teardown).
An excellent description of their construction and use, complete with teardown pictures, can be found here. The capsules I had to hand were very similar to those shown, being 10 mm in diameter by 6 mm high. Some experiments to check their frequency response—practical details later—showed a steady 6 dB/octave roll-off below about 15 Hz, implying that a filter with an inverse characteristic could flatten the response down to a fraction of a Hertz. And so it proved!
Building an equalization circuit
A basic but usable circuit capable of doing this is given in Figure 1.
Figure 1 Simple equalization can extend the low-frequency response of an electret microphone down to well under 1 Hz.
While this exposes some problems, which we’ll address later, it works and serves to show what’s going on. R1 is chosen to give about half the rail voltage across the mic, and A1 boosts the signal by ~21 dB. At very low frequencies, A2’s stage has a maximum gain of ~30 dB. This falls by 6 dB/octave from ~160 mHz upwards, reaching unity gain at ~4.8 Hz. C3/4 and R7/8 top and tail the response, and A3 boosts the level appropriately. (Not shown is a rail-splitter, defining the central, common rail.) The op-amps used were MCP6022s because of their low input offset voltage.
The low 3-dB point is largely determined by C1/R2. (Adjusting the values of R5, R6, and C2 and adding an extra resistor in series with C2 would, in principle, let us equalize a specific mic to give a flat response from a few hundred millihertz up to its upper limit.)
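If you’d like to see the principle before building anything, the following Python/SciPy sketch models the mic as an ideal first-order 15-Hz high-pass and the equalizer as an idealized inverse shelf with 30 dB of low-frequency boost. These first-order models are a simplifying assumption (the real circuit adds shaping via C3/4, R7/8, and the A3 stage), but the cascade shows the low corner dropping from about 15 Hz to roughly 0.5 Hz, consistent with Figure 2:

import numpy as np
from scipy import signal

f_mic = 15.0                      # mic's natural 6 dB/octave corner, per the text
G0 = 10 ** (30 / 20)              # ~30 dB maximum equalizer boost
w_mic = 2 * np.pi * f_mic
w_pole = w_mic / G0               # shelf pole ~0.47 Hz, where the boost levels off

mic_num, mic_den = [1, 0], [1, w_mic]                # high-pass: s/(s + w_mic)
eq_num, eq_den = [G0 / w_mic, G0], [1 / w_pole, 1]   # shelf: G0(1 + s/w_mic)/(1 + s/w_pole)

num = np.polymul(mic_num, eq_num)                    # cascade the two stages
den = np.polymul(mic_den, eq_den)
w = 2 * np.pi * np.logspace(-2, 2, 500)              # 10 mHz to 100 Hz
_, mag, _ = signal.bode(signal.TransferFunction(num, den), w)
f3 = w[np.argmin(np.abs(mag + 3))] / (2 * np.pi)
print(f"low 3-dB corner moves from 15 Hz down to ~{f3:.2f} Hz")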
Figure 2 shows the overall response to changes in air pressure, with 3 dB points at about 500 mHz and 12 Hz. While this is an LTspice-derived trace, it closely matches real-world measurements.
Figure 2 The response of Figure 1’s circuit to air-pressure changes at different frequencies.
Validating the frequency response
That confidence about the actual response may raise some eyebrows, given the difficulty in getting decent bass performance in even the best of hi-fi systems. A custom test rig was called for, using a small speaker to produce pressure changes in a sealed chamber containing a mic-under-test. It’s shown in Figure 3.
Figure 3 Two views of a test rig allowing sub-Hz measurements of a microphone’s frequency response.
The rig comprises an IP68 die-cast box fitted with a 50 mm plastic-coned speaker (42 ohms) and a jam-jar lid, the jar itself being the test chamber for the mic, which, when fitted with pins, could be swapped. Everything was sealed with lots of epoxy, plus some varnish in case of pinholes. A generous smear of silicone grease guaranteed that the jar seated almost hermetically. The speaker was driven by a custom sine-wave oscillator based on a simple squashed-triwave design and covering from 90 mHz to 11 Hz in two ranges.
This is actually the Mark 3 version. Mark 1 was based on a cut-down, wide-mouthed tablet bottle with a speaker fixed to it, which was adequate for initial tests but let in too much ambient noise for serious work. Mark 2 added a jam jar as a baffle behind the speaker, but the bottle’s walls were still too flexible. The more rigidly-constructed Mark 3 worked well, with an unequalized frequency response that was flat within a decibel from about 20 to 200 Hz. (It had a major cavity resonance at about 550 Hz, too high to affect our results.)
Simulations, mostly in hardware
To verify the performance of the rig itself at the lowest frequencies, some simulation was needed—but in hardware, not just with SPICE. Stripping a spare mic down to its bare JFET (a Sanyo 2SK156) and adding some components to that meant that it could be driven electrically rather than acoustically while still looking like the real thing to the circuit—or almost. The main divergence did not affect the frequency response, but did throw light on some unexpected behavior. The simple schematic is in Figure 4; the concept also worked well in LTspice, using their default “NJF” JFET, and formed part of Figure 2’s simulation.
Figure 4 A circuit that simulates an electret microphone in real life.
Once the circuit had settled down, the measured frequency responses using the test rig and the simulated mic matched closely, as did the LTspice sim. With the simulated mic, settling took a few seconds, as expected given the circuit’s long time constants, but with a real mic, it took many times longer. Perhaps the diaphragm was relaxing, or something? Another mic, torn down until only the JFET remained, behaved similarly (and, with its floating gate lead, made a near-electrometer-quality mains-hum probe!).
Curious behavior in a JFET, and how to fix it
It seemed that the FET’s gate was misbehaving—why? Perhaps charge was being injected when power was applied, and then leaking slowly away? Ramping the voltage up gently made some difference, but not enough to explain things fully. It appears that leakage is dominant, and that charge on the gate slowly equalizes, producing a long, slow “tail” which is still just fast enough to produce an offset at the circuit’s output, even with two C-R networks attempting to block it. With the low impedance on the simulated mic’s gate, such effects are negligible. It’s stuff that would never show up in audio work.
From this, we can deduce that the mic’s low 3-dB point is determined not by the FET’s time-constant but by the “acoustics” within the mic. But that extra, inherent time constant still needs addressing if the circuit is to settle in a reasonable time. If the gate must slowly drift towards equilibrium owing to leakage, could we inject a packet of charge at start-up to compensate? Experiments using the circuit of Figure 5 were successful, albeit empirically; the values given are cut-and-try ones. Shorting R1 for about 3 ms gave a pulse of double the final voltage across the mic, and that proved to be optimum for the available capsules in the circuit as built. The settling time is still around 10–15 seconds, but that’s a lot better than over a minute.
Figure 5 A few milliseconds of over-voltage applied across the mic at start-up injects enough charge to counterbalance much of the FET’s longer-term start-up drift.
This is also useful in the case of an overload, which sends the output off-scale. If that happens, you can now use the time-honored method of switching off, waiting a few seconds, and switching back on again!
Real-life response
Figure 6 shows the actual response as measured using the test rig. It’s a composite of two scans, one for each range. (Because tuning was done manually, the frequency scale is only roughly logarithmic.) R9 was set to about 50k, so the output stage had a gain of around 6.
Figure 6 The response of the circuit in Figure 1, measured using Figure 3’s test rig.
The upper trace is the driving waveform for the speaker, showing that a positive-going output from the circuit corresponds to increased pressure within the rig’s chamber. (From this, we can infer that the negatively-poled side of the electret film itself faces the JFET’s gate. That makes sense, because a serious acoustic insult like a handclap right in front of the mic will then charge the gate negatively, and excess negative charge drains away more easily through the JFET’s gate-source diode than positive charge can, speeding recovery from any such overload.)
Note how the baseline wanders. That is mostly due to 1/f or flicker noise in the mic capsule’s JFET; both the bare JFET and the simulated mic show a similar effect, while a resistor is much quieter. We can extend the LF response further, but only at the expense of a worse S/N ratio. And below a Hertz or two, the effects of wind and weather seem to be dominant, anyway.
Viewing the results
There are several further desirable refinements and additions, but they must wait for Part 2. We’ll close this part with some ways of seeing what’s lurking below our ears’ cutoff point. (And Part 2 will also show how to listen to it.)
An oscilloscope (usually bulky, static, and power-hungry) is too obvious to mention, so we won’t. A cheap 50–0–50 µA meter connected between the output and common via a suitable resistor worked, but its response was 50% down at ~2 Hz.
A pair of LEDs, perhaps red and green for positive- and negative-going swings, looked good, though the limited swings available with the 5 V rail meant that the drive circuit needed to be somewhat elaborate, as shown in Figure 7. Caution! Its power must come directly from the power input to avoid the LEDs’ currents disturbing the mic’s supply, which would (and did) cause distortion and even (very) low-frequency oscillation. A good, stable power source is needed anyway.
Figure 7 One LED lights up on positive swings and the other on negative ones, the intensities being proportional to the signal levels.
Part 2 will extend the detectable spectrum a little while mostly concentrating on making the basic circuit more usable. An audible output will mean that we will no longer have to worry about the Zen-like problem of, “if we can’t hear it, should we call it a sound?”
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Squashed triangles: sines, but with teeth?
- A pitch-linear VCO, part 1: Getting it going
- Earplugs ready? Let’s make some noise!
- Supersized log-scale audio meter
The post Revealing the infrasonic underworld cheaply, Part 1 appeared first on EDN.
The advent of recyclable materials for PCBs

Conventional PCB manufacturing is wasteful, energy-intensive, and harmful to the environment, which increasingly calls for electronics recycling to cut material waste and the energy consumed in producing virgin material.
Figure 1 The conventional PCB world is ripe with recycling opportunities. Source: IDTechEx
IDTechEx’s new report, “Sustainable Electronics and Semiconductor Manufacturing 2025-2035: Players, Markets, Forecasts,” outlines new recyclable materials for PCBs and provides updates on their full-scale commercial readiness. Below is a sneak peek at these recyclable and biodegradable materials and how they facilitate sustainability in electronics manufacturing.
- New PCB substrates
While FR4, a glass-reinforced epoxy resin laminate, is a substrate of choice for PCBs due to being lightweight, strong, and cheap, it’s non-recyclable and can contain toxic halogenated flame retardants. That calls for alternative substrates that are biodegradable or recyclable.
Jiva’s Soluboard, a biodegradable substrate made from the natural fibers flax and jute, is emerging as a promising new material because it dissolves in 90°C water. That facilitates component recycling and precious metal recovery at the product’s end of life. Companies like Infineon, Jaguar, and Microsoft are currently testing whether this new material can combat rising electronics waste levels.
Figure 2 Soluboard is a fully recyclable and biodegradable PCB substrate. Source: Jiva Materials
- Polylactic acid in flexible PCBs
Conventional flexible PCBs, built around plastic polyimide, are also ripe for alternative materials. Polylactic acid, currently in the prototype-scale validation phase, emerges as a sustainable material that can be sourced from organic industrial waste and is also biodegradable.
Polylactic acid can withstand temperatures of up to 140°C, lower than what polyimide and FR4 tolerate. However, it’s compatible with manufacturing processes such as silver ink sintering. Companies and research institutes like VTT are now demonstrating the potential of polylactic acid in flexible PCBs.
- Recycled tin
Around 180,000 tonnes of primary tin are used in electronics globally. It’s primarily sourced from mines in China, Indonesia, and Myanmar, where extraction causes significant environmental damage. Enter recycled tin, which is produced by smelting waste metal and metal oxide. It boasts the same quality as primary tin, as confirmed by X-ray diffraction.
However, merely 30% of tin is currently recycled worldwide, so there is a greater need for regulatory drivers to encourage increased metal recycling. One example is Germany’s National Circular Economy Strategy (NKWS), unveiled in 2024, which aims to halve per-capita raw material consumption by 2045.
Figure 3 A boost in recycled tin relies on a strong regulatory push. Source: Mayerhofer Electronik
Mayerhofer Electronik was the first to demonstrate the use of recycled tin for soldering in its electronics manufacturing processes. Now, Apple has committed to using secondary tin in all products by 2035.
- Regeneration systems to minimize copper waste
It’s a widely known fact that copper is used wastefully in PCBs. This is how it happens: a flat sheet of copper is applied to the substrate before holes are drilled. The circuit pattern is then produced by etching away the excess copper, which requires large volumes of chemical etchants like ferric chloride and cupric chloride. As a result, around 70% of the copper initially applied to the board is often removed.
Here, additive manufacturing, in which copper is applied only where required, offers one solution. For manufacturers that don’t want to switch processes, an etchant regeneration system recovers both the copper etched from the laminate and the etchant chemicals. The recycled copper can serve as an additional revenue stream for the electronics manufacturer.
Related Content
- The problem with recycling
- PCB materials: Recycle, reuse, dispose?
- Trends and Challenges in PCB Manufacturing
- Process for recycling turns up components ready for reuse
The post The advent of recyclable materials for PCBs appeared first on EDN.
Another simple flip ON flop OFF circuit

Editor’s Note: This Design Idea (DI) offers another alternative to the “To press ON or hold OFF? This does both for AC voltages” that was originally inspired by Nick Cornford’s DI: “To press ON or hold OFF? This does both.”
Figure 1 gives a simple circuit for the PUSH ON, PUSH OFF function with only a few inexpensive components. In this design, the output is connected to the input of the gadget when you press the push button (PB) once. For the next push, the output is disconnected.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This is an attractive alternative to bulkier ON/OFF switches for DC circuits, and the circuit has a fairly simple explanation: U1 is a counter.
Figure 1 A Flip ON Flop OFF circuit for DC voltages. The gadget is connected to the output terminals of the PB. With an adequate heat sink for MOSFET Q1, the output current can go up to 50 A.
During power-on, R2/C2 resets the counter to zero. When you push PB momentarily once, a pulse is generated and shaped by a Schmitt-trigger inverter U2 (A & C), which counter U1 counts. Hence, the LSB (Q1) output of U1 becomes HIGH, making MOSFET Q1 conduct. At this point, the output gets the input DC voltage.
When you push PB momentarily again, another pulse is generated and counted by U1. Hence, its LSB (Q1) output goes LOW, MOSFET Q1 stops conducting, and the output is disconnected from the input. This action continues, toggling the output ON and OFF with each push of PB.
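The toggle action is easy to model in software. Here is a minimal behavioral sketch in Python (purely illustrative, not the hardware) showing how the counter’s LSB alternates with each clean, Schmitt-shaped press:

count = 0

def push_button():
    """Simulate one clean (debounced, Schmitt-shaped) press of PB."""
    global count
    count += 1                     # counter U1 advances on each pulse
    mosfet_on = bool(count & 1)    # MOSFET gate follows the counter's LSB (Q1)
    print(f"push {count}: output {'ON' if mosfet_on else 'OFF'}")

for _ in range(4):
    push_button()                  # prints ON, OFF, ON, OFF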
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- To press ON or hold OFF? This does both for AC voltages
- To press on or hold off? This does both.
- Smart TV power-ON aid
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
The post Another simple flip ON flop OFF circuit appeared first on EDN.