EDN Network

Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 38 min ago

Analog IC longevity is an underappreciated reality

Mon, 03/09/2026 - 11:06

I recently saw an announcement from a major IC vendor, posted in September 2025, letting users know that “STMicroelectronics sets 20-year availability for popular automotive microcontrollers.” The news is that ST is now committed to maintaining the cited parts for 20 years instead of its previous 15-year assurance.

“Good for them” was my first thought, as that’s the right thing to do for both their OEM and actual vehicle customers. After all, with the average age of cars on the road in the United States approaching 15 years and showing little sign of leveling off, that makes sense.

There are two presumed reasons for the longer lifetime. First, cars are built better; the “rust-bucket” and “fall apart” tendencies of many of those pre-1980/90 cars have greatly diminished due to better design, materials, paints, tests, and processes. Second, the cost of a new car is so high that even costly repairs make sense for many.

Ironically, those less reliable, mostly mechanical cars did have one major virtue: they were repairable then and can generally be repaired/restored even today. Many of their old parts are available via specialty sources either as “new old stock” (NOS) or slightly used. And those that can’t be sourced can be machined or 3D printed if the owner has time and resources.

The issue is not limited solely to cars, and unavailable mechanical assemblies are a very different case from electronic ones. In 2022, a team at Verisurf was contracted by the U.S. Air Force to reverse engineer and recreate a 300-piece “throttle quadrant” from the E-3 Airborne Early Warning and Control System (AWACS) by disassembling an existing unit piece-by-piece (Figure 1). See “Reverse Engineering the Boeing E-3 Sentry’s Secondary Flight Controls”.

Figure 1 This throttle quadrant from an E-3 AWACS radar aircraft was recreated via precise piece-by-piece measurement and fabrication of each of its 300 pieces. Source: Verisurf

They used a combination of tools, including basic calipers, advanced metrology systems, CAD/CAM software, close-up photographs, and more to capture and then recreate this control unit to tolerances of better than 0.005 inches.

For the computers-on-wheels electronics of today’s cars, it’s a very different reality. Will you be able to get an engine control module, or one of the other hundred or so MCU-based modules, even 15 years from now? I’m betting the answer is “no” or “very unlikely,” but we’ll have to wait and see how that story unfolds.

The issue of unavailable parts is not limited solely to automobiles, although that is the largest and most visible application. Unlike most consumer products, there are many areas where useful lives of 20, 30, and more years are expected. Among these are industrial applications, railways, mil/aero, critical infrastructure, and even some home systems such as HVACs.

The challenge of replacement parts and their relatively low volume is not being ignored, as the ST announcement shows. The U.S. Defense Microelectronics Activity (DMEA) has instituted an Advanced Technology Support Program V (ATSP V) with 13 companies that, among other objectives, includes approaches to developing and creating components in ultra-low volumes for repair and replacement.

What about “analog”?

With all these legitimate concerns about long-term component availability, one interesting fact stands out: unlike digital ICs and processors, the analog world has a different mindset. Analog-circuit designers tend to stick with a component that they have used successfully, even if it’s a few years old and could easily be replaced by a nominally better part.

There are several reasons for this tactic. Once an analog part is in the signal chain and meeting specs, there’s a reluctance to take a chance on a new part and design, which may have unknown issues and idiosyncrasies. Factors such as parasitics, layout, and power-supply sensitivity (to cite a few) will likely affect design validation, in contrast to the field experience accumulated with the existing design.

There are classic analog parts that have been available for decades, and while not recommended for new designs, they are still available if needed for repair, replacement, or even a newer design. Even better, if they are not available, there is often a drop-in replacement with superior performance; this is especially the case for basic 8-pin op amps.

I can think of three “ancient” analog components as examples:

  • The AD574 “complete” 12-bit A/D converter from Analog Devices, introduced circa 1978–1980, became the industry-standard ADC for microprocessor interfacing (Figure 2). It was notable for integrating a buried-Zener reference, a clock, and 3-state output buffers for direct 8/16-bit bus interfacing. While its die and process have been upgraded and it’s now available in other packages, you can still get it in the original 28-pin housing.

Figure 2 The AD574 12-bit ADC was the first complete unit with “tight” specifications and is still offered 45 years after its initial release. Source: Analog Devices

  • The INA133 instrumentation amplifier from Burr-Brown was introduced around 1998 (Burr-Brown was acquired by Texas Instruments in 2000), and it’s still offered in a variety of packages and grades by TI (Figure 3). Like the AD574, it’s not recommended for new designs; you can see its top-tier specifications on page 40 of the 2000 Burr-Brown Product Selection Guide.

Figure 3 Burr-Brown’s INA133 instrumentation amplifier provided excellent performance with modest power requirements and has been continuously available since its introduction in 1998. Source: Texas Instruments

  • Finally, we can’t overlook the 555 timer-oscillator-multivibrator, a clear contender as one of the most classic components of all time and, along with the 741 op amp, one of the longest-lived (Figure 4). Devised by Hans Camenzind and marketed as an 8-pin DIP by Signetics in 1971, it’s still available in many versions, including duals and quads as well as CMOS variations. Despite its age, it’s often used to solve annoying timing and oscillator problems at low cost, and there are many “cookbooks” showing innovative ways in which it can be used.

Figure 4 It’s very likely that no IC has spawned more creative and clever design ideas and handbooks and solved as many circuit problems as the 555 timer-oscillator-multivibrator. Source: Wikipedia

There are others, of course, such as the 60-year-old 2N3905 or 2N2222 transistors—it doesn’t get more basic than that.

While many analog components have a long and viable life with their original or descendent vendors, there is even a solution for the many cases where that source does not want to manufacture or support that IC forever. Companies such as Rochester Electronics work out a formal arrangement and license to take over the rights, tooling, support, and test procedures for the parts. Users who need the part don’t need to consider grey-market or even counterfeit products; instead, they get ICs which are 100% legitimate but via a different supplier.

ST’s announcement is welcome, of course. I wish that more vendors would make that sort of commitment, difficult as it may be, or at least commit to licensing unwanted products to non-competing vendors. For now, if you want long-term continuity, stick with analog parts as much as possible.

Have you ever had to deal with repairing a product having electronic components that were no longer available, or even doing regular production on a long-lived product where you needed more than just a few? Did you find parts, or did you have to do a full redesign? How painful was that process?


Risk assessment in the workplace

Fri, 03/06/2026 - 15:00

Risks come in more than one form. There are risks that arise from science and technology, and there are risks that arise from human motivations, which are not always of an obvious sort. This is about the latter.

I had a client company that was owned by a husband and wife, for whom I had once solved a power supply thermal runaway problem. I had measured temperature rise versus time and temperature fall versus time, and of course, the two were not exactly the same. Their difference was quite pronounced when I first looked at the issue, but they were almost identical to each other after I had solved their problem. If you’re curious about that, please see the How2Power article here.

A couple of years went by, and I got a call from that same company about a different power supply that also seemed to have a thermal runaway problem. By then, sadly, the husband had passed away, and only the wife remained to run the business.

During that first engagement, the wife had displayed a hair-trigger temper. Any moment of uncertainty as events unfolded would provoke a raging torrent from her, which her husband would go to great lengths to calm. I would hear lines like “It’s okay. It’s oh-kay! Please relax. Things are going well.”, at which she would go silent. Now, though, she no longer had anyone to give her that assurance when it was needed.

An employee who had been promoted to Chief Engineer was my new point of contact. I explained to him that I would examine the thermal rise and thermal fall traits of this new power supply to see whether the same situation pertained as it had in the first case.

“There’s no need for that. I’ve already made those measurements.” He handed me a sheet of paper with columns of numbers, purportedly the data I had planned to acquire. That night, I examined those numbers and discovered that if you plotted the thermal rise and inverted a plot of the thermal fall, the two curves precisely lined up and were EXACTLY the same!! There was absolutely zero difference. They were totally spot on, no ifs, ands, buts, hows, whys, or wherefores, exactly the same, which meant that the rising and falling curves given to me were not the results of actual testing. They were false.
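
For readers who want to run the same sanity check on their own data, here’s a minimal sketch of the comparison in Python. The file names, units, and NumPy framing are hypothetical; the data in question arrived as columns of numbers on paper:

```python
import numpy as np

# Hypothetical inputs: temperature above ambient vs. time, one sample per
# row, captured over identical intervals for the heating and cooling runs.
rise = np.loadtxt("thermal_rise.csv")   # climbs from 0 toward the peak
fall = np.loadtxt("thermal_fall.csv")   # decays from the peak back toward 0

# Invert the fall curve about its starting (peak) value so that it, too,
# becomes a curve climbing from 0 toward the peak.
inverted_fall = fall[0] - fall

# Physically, the heating and cooling paths differ, so the residual should
# be small but nonzero. An identically zero residual is the tell: the
# numbers were computed, not measured.
residual = rise - inverted_fall
print("max |rise - inverted fall| =", np.abs(residual).max())
```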

Confronted with a Chief Engineer whom I then knew to be dishonest and confronted with the woman whom I knew to be extremely volatile and prone to bursts of rage, I assessed the risk of dealing with it all to be unacceptable.

I made up some excuse (I don’t remember what it was) and declined to offer my services.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Last-level cache has become a critical SoC design element

Fri, 03/06/2026 - 11:50

As AI workloads extend across nearly every technology sector, systems must move more data, use memory more efficiently, and respond more predictably than traditional design methodologies allow. These pressures are exposing limitations in conventional system-on-chip (SoC) architectures as compute becomes increasingly heterogeneous and traffic patterns become more complex.

Modern SoCs integrate CPUs, GPUs, NPUs, and specialized accelerators that must operate concurrently, placing unprecedented strain on memory hierarchies and interconnects. Keeping processing units fully utilized requires high-bandwidth, low-latency access to data, making the memory hierarchy as critical to overall system effectiveness as raw performance.

On-chip interconnects move data quickly and predictably, but once requests reach external memory, latency increases, and timing becomes less consistent. As more data accesses go off chip, the gap between compute throughput and data availability widens. In these conditions, processing engines stall while waiting for memory transactions to complete, creating data starvation.

 

The role of last-level cache

To mitigate this imbalance, SoC designers are increasingly turning to last-level cache (LLC). Positioned between external memory and internal subsystems, LLC stores frequently accessed data close to compute resources, allowing requests to be served with significantly lower latency.

Unlike static buffers, an LLC dynamically fetches and evicts cache lines based on runtime behavior without direct CPU intervention. When deployed effectively, this architectural layer delivers measurable benefits, including substantial reductions in external memory traffic and power consumption.

Simply including an LLC does not guarantee improved performance. Configuring the cache correctly is a complex task that must account for workload characteristics, compute-unit behavior, and real-time constraints. Poorly chosen parameters can waste area without meaningful gains, while under-provisioned configurations may fail to alleviate memory bottlenecks.

Architects must carefully determine cache capacity, the number of cache instances, and internal banking structures to support sufficient parallelism. Partitioning strategies must also be defined to ensure that individual IP blocks receive the bandwidth and predictability they require. While some settings can be adjusted later through software, foundational decisions on cache size, banking, and associativity must be finalized early in the development cycle.

The role of the last-level cache in successful SoC designs. Source: Arteris

Factors influencing cache behavior

Banking configuration illustrates this trade-off clearly. Increasing the number of cache banks improves internal parallelism and throughput, but it also increases silicon area. Workloads with largely sequential access patterns may see limited benefit from aggressive banking.

In contrast, highly parallel workloads, especially those driven by AI accelerators or GPUs, require substantial internal concurrency to maintain utilization. Because these characteristics vary by application, banking decisions must be informed by realistic workload analysis during the architectural phase.
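
To make that last point concrete, here is a toy sketch of a common bank-selection scheme, in which the bank index comes from the address bits just above the cache-line offset. The line size and bank count are illustrative, not taken from any particular product:

```python
LINE_BYTES = 64   # illustrative cache-line size
NUM_BANKS = 8     # illustrative bank count (power of two)

def bank_of(addr: int) -> int:
    # Bank index from the bits just above the line offset, so consecutive
    # cache lines land in consecutive banks.
    return (addr // LINE_BYTES) % NUM_BANKS

# A sequential stream spreads across all banks, but with only one request
# in flight at a time it still gains little from aggressive banking.
seq = {bank_of(a) for a in range(0, 64 * 1024, LINE_BYTES)}

# A stream whose stride equals banks x line size hits a single bank:
# a pathological conflict, no matter how many banks exist.
strided = {bank_of(a) for a in range(0, 64 * 1024, LINE_BYTES * NUM_BANKS)}

print(f"sequential stream touches {len(seq)} banks")   # -> 8
print(f"strided stream touches {len(strided)} bank")   # -> 1
```

Even the evenly spread stream only benefits if multiple requests are in flight concurrently, which is why requester behavior, not just address patterns, drives the banking decision.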

Cache capacity is just as important. A cache that is too small struggles to achieve acceptable hit rates, pushing excessive traffic to external memory. Conversely, oversizing the cache often yields diminishing returns relative to the additional area consumed. The optimal balance depends on actual runtime behavior rather than theoretical assumptions.
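
That diminishing-returns behavior shows up even in a toy model. The sketch below uses a fully associative LRU cache and a synthetic, locality-biased address stream, both stand-ins for the realistic workload traces the article advocates, and sweeps capacity while printing hit rates:

```python
import random
from collections import OrderedDict

def hit_rate(trace, capacity_lines):
    """Fully associative LRU cache model; returns the fraction of hits."""
    cache, hits = OrderedDict(), 0
    for line in trace:
        if line in cache:
            hits += 1
            cache.move_to_end(line)          # mark most-recently used
        else:
            cache[line] = True
            if len(cache) > capacity_lines:
                cache.popitem(last=False)    # evict least-recently used
    return hits / len(trace)

# Synthetic trace with temporal locality: 90% of accesses hit a small hot set.
random.seed(0)
hot, cold = range(1_000), range(1_000, 100_000)
trace = [random.choice(hot) if random.random() < 0.9 else random.choice(cold)
         for _ in range(200_000)]

for lines in (256, 1_024, 4_096, 16_384):
    print(f"{lines:>6} lines -> hit rate {hit_rate(trace, lines):.1%}")
```

Once capacity covers the hot working set, each further quadrupling buys only a few points of hit rate, which is precisely the capacity-versus-area trade described above.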

In practice, acceptable hit rates vary widely. Some systems can tolerate moderate miss rates if latency and power reductions outweigh the cost, while real-time applications demand consistently high hit rates to maintain deterministic behavior.

This variability underscores why no single LLC configuration is universally optimal. Mobile devices may require only a few megabytes of cache to balance power efficiency and responsiveness. At the same time, servers and HPC platforms often deploy tens or hundreds of megabytes to reduce DRAM pressure. Despite these differences, successful designs rely on a common principle in which cache parameters are derived from the workloads the system will actually execute.

Managing shared caches

Diversity in system demands further complicates how an LLC must be structured. Automotive chips built around concurrent vision processing and strict timing requirements operate under very different constraints than data-center platforms optimized for accelerator-heavy inference at scale. Even within a single chip, CPUs, accelerators, and I/O subsystems generate distinct access patterns with different latency sensitivities.

The LLC must accommodate all of them without allowing one workload to interfere with another’s real-time guarantees. This makes early understanding of system-level access behavior essential, since cache configuration otherwise becomes speculative at best.

Partitioning provides a powerful mechanism for preserving determinism in such environments. By allocating portions of cache capacity to specific clients, architects can prevent high-bandwidth workloads from starving latency-sensitive subsystems. This capability is particularly critical in environments that must meet strict timing guarantees. Partition sizes must be tuned carefully, as oversizing wastes area while undersizing risks violating latency requirements.
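
Way-based partitioning is one common implementation of this idea: each client may allocate, and therefore evict, only within its assigned subset of the cache’s ways, so a bandwidth-hungry accelerator can never push a latency-critical client’s lines out. A minimal sketch of the bookkeeping follows; the client names and way counts are illustrative:

```python
WAYS = 16  # illustrative associativity

# Per-client way allocations; sizes come from workload analysis, not guesses.
partition = {
    "npu":    range(0, 10),   # high-bandwidth inference traffic
    "vision": range(10, 14),  # latency-critical real-time pipeline
    "dma":    range(14, 16),
}

def victim_ways(client: str) -> range:
    # On a miss, a client may fill (and evict) only within its own ways.
    # In many implementations, hits may still occur in any way, so shared
    # read-only lines aren't duplicated per client.
    return partition[client]

for client, ways in partition.items():
    print(f"{client:>6}: ways {ways.start}-{ways.stop - 1} "
          f"({len(ways)}/{WAYS} of capacity)")
```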

Configuring a last-level cache is ultimately a multidimensional challenge shaped by workload demands, compute topology, latency requirements, and silicon constraints. Achieving the right balance between performance, determinism, power, and area depends on understanding how an SoC behaves under real operating conditions.

To address this, SoC teams increasingly rely on system-level simulation using realistic data flow profiles generated by multiple on-chip request sources. This approach allows teams to evaluate cache behavior before key architectural decisions are finalized. It helps identify bottlenecks, validate cache sizing, and determine when isolation mechanisms such as partitioning are required to preserve real-time guarantees.

Arteris developed its CodaCache IP, which operates as a configurable last-level cache between on-chip initiators and different types of external memories such as DDR DRAM, HBM, and even NVM for execute-in-place (XIP) use cases. With CodaCache, architects can equip their SoC fabric with the optimal configuration to address intelligent, scalable, and automated data management in a wide range of applications.

Andre Bonnardot is product marketing manager at Arteris.


Apple’s spring 2026 soirée: The rest of the story

Fri, 03/06/2026 - 01:44

With smartphone and tablet news already discussed, what else did Apple unveil this week? Read on for all the goodies and their details.

As I teased at the end of my prior piece, computers and displays were also on the plate for Apple’s “big week of news” announcements suite. With today’s (as I write this on Wednesday in the late afternoon) New York, London, and Shanghai “Experience” in-person events now concluded:

(No, alas, I wasn’t invited)

I’m guessing that Apple’s wrapped up its rollouts for now, therefore compelling me to revisit my keyboard for concluding part 2. That said, I realized in retrospect that there was one additional earlier hardware announcement that, had I remembered at the time (and in time), I would have also included in part 1, since it also covered mobile devices. So, let’s start there.

AirTag 2

In late April 2021, Apple introduced its first-generation AirTag trackers, leveraging Bluetooth LE connectivity to mate them with owner-paired smartphones and tablets and, when a tagged device is lost, with the broader Find My crowdsourced network ecosystem to assist in identifying their whereabouts and monitoring their movements. Integrated ultrawideband (UWB) support, when also comprehended by the paired mobile device, affords even more precise location discernment (i.e., not just somewhere in the living room, but having fallen between the sofa cushions). And built-in NFC support helps anyone who might find a tag (and whatever it’s attached to) notify the person it belongs to. Here’s my first-gen teardown.

Nearly five years later, and quoting Wikipedia:

An updated model with the U2 chip, upgraded Bluetooth, and a louder speaker was released in January 2026 [editor note: Monday the 26th, to be precise]. It has enhanced range for precision detection with iPhones equipped with a U2 chip such as the iPhone 15/Pro or later (excluding iPhone 16e), and also allows an Apple Watch with a U2 chip such as the Apple Watch Series 9 or later, or Apple Watch Ultra 2 or later (excluding Apple Watch SE), to precisely locate items.

Now fast-forwarding a month-plus to this week’s announcements…

The M5 Pro and Max SoCs

2.5 years back, within my coverage of Intel’s then leading-edge and first-time chiplet-implemented Meteor Lake CPU architecture:

I noted that the company was, to at least some degree, following in the footsteps of AMD and Apple, both having already productized chiplet-based designs. In AMD’s case, I was on solid footing with my stance, as the company had already been embedding and interconnecting discrete processors, graphics, and other logic circuits for several years. In Apple’s case, conversely, my definition of a chiplet implementation was a bit more loosey-goosey, at least at the time:

Above is a de-lidded photo of Apple’s M1 SoC. At left is the single-die implementation of the entirety of the logic circuitry, plus cache. And on the right are two DRAM memory chips. Admittedly, the “Ultra” variant of the eventual M1 product family, at far right:

upped the ante a bit more, “stitching together two distinct M1 Max die via a silicon interposer”. But I’ve long wondered when Apple would go “full monty” on disaggregation, mixing-and-matching various slivers of logic silicon attached to and interconnected via a shared packaging substrate, to keep each die’s dimensions to a reasonable manufacturing-yield size and to afford fuller implementation flexibility. To wit, the points I made back in September 2023 remain valid:

  • Leading-edge processes have become incredibly difficult and costly to develop and ramp into high-volume production,
  • That struggle and expense, coupled with the exponentially growing transistor counts on modern ICs, have negatively (and significantly so) impacted large-die manufacturing yields not only during initial semiconductor process ramps but also long-term, and
  • Desirable variability in process technology (DRAM versus logic, for example), process optimization (low power consumption versus high performance), and IC sourcing (internal fab versus foundry), not to mention the attractiveness of being able to rapidly mix-and-match various feature set combinations to address different (and evolving) market needs, also enhances the appeal of a multi- vs monolithic-die IC implementation.

That time is now, branded as the “Fusion Architecture” and ironically foreshadowed by a then-subtle Apple online store tweak a month ago. Quoting from the press release subhead:

M5 Pro and M5 Max are built using the new Apple-designed Fusion Architecture that connects two dies with advanced IP blocks into a single SoC, delivering significant performance increases that push the limits of what’s possible…

In an interesting twist from the past, this time the two product proliferations seemingly share a common processor die, although the variety and number of guaranteed-functional cores vary both between the two devices and within a given device’s binning variants. Conversely, the graphics core counts diverge more substantially between the two devices. To some degree this is reflective of the high-end “Max” device’s professional content creator target demographic, although I’d wager that it more broadly affords more robust on-device deep learning inference capabilities in conjunction with the chips’ presumed-still-existent neural processing cores. And what of an “Ultra” variant of the M5…is it on the way? Maybe.

Tomato, tomahto

Speaking of cores, by the way…sigh. Look back at my M5 SoC (and initial devices based on it) coverage from last October, and you’ll see that, just as with prior generations of both A- and M-based Apple-developed silicon, it contains a mix of both performance (speed-optimized) and efficiency (power consumption-tuned) cores. Here’s the specific press release quote again:

M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.

All well and good; the Arm-developed architecture analogy is big.LITTLE. Revisiting that page on Arm’s website just now, however, I curiously noticed that whereas it historically called out two different types of cores, now there are apparently three. Check out the subhead:

Arm big.LITTLE technology is a heterogeneous processing architecture that uses up to three types of processors. LITTLE processors are designed for maximum power efficiency, while big processors are designed to provide efficient, sustained compute performance.

Keep in mind that Apple is an Arm architecture licensee, so it develops its own (still instruction set-compatible, of course) cores. That said, beginning with the M5 Pro/Max processing chiplet, Apple has also developed a third core, this one an intermediate half-step between the performance and efficiency endpoints. You might think that Apple would call this new one the “balanced” core, say. But alas, you’d be wrong. Here’s long-time Apple observer Jason Snell, quoted in a post from another Apple prognosticator, “graybeard,” John Gruber:

With every new generation of Apple’s Mac-series processors, I’ve gotten the impression from Apple execs that they’ve been a little frustrated with the perception that their “lesser” efficiency cores were weak sauce. I’ve lost count of the number of briefings and conversations I’ve had where they’ve had to go out of their way to point out that, actually, the lesser cores on an M-series chip are quite fast on their own, in addition to being very good at saving power! Clearly they’ve had enough of that, so they’re changing how those cores are marketed to emphasize their performance, rather than their efficiency.

What did Apple decide to do instead, including a retrofit of published M5 documentation?

  • The prior-named “Performance” core is now instead called, believe it or not, “Super.”
  • The “Efficiency” core retains its original name, for a brief moment of sanity.
  • And the new in-between “balanced” core? It’s been given the recycled “Performance” moniker.

The following summary table originated with another recent John Gruber post; I’ve simplified the SoC options, reordered the CPU core columns, and added a column for GPU core counts:

 

          CPU (Super)   CPU (Performance)   CPU (Efficiency)   GPU
M5        3-4           N/A                 6                  8-10
M5 Pro    5-6           10-12               N/A                16-20
M5 Max    6             12                  N/A                32-40

That’s just…super. Sigh.

(More) M5 MacBook Pros

(nifty video animation, eh?)

“Super” SoCs inside aside, the new 14” and 16” MacBook Pros are effectively identical to their M4-based forebears (note that the sole M5 version initially announced last fall was the 14” model). The only other items of particular note both involve memory. Baseline and upgraded DRAM capacity option prices remain the same as last time, despite current industry memory supply constraints; an upper-end 64 GByte option for the M5 Pro has even been added. And regarding flash memory, Apple has obsoleted last November’s entry-level 512 GByte SSD option for the baseline 14” M5 MacBook Pro, making the new capacity starting point for that product (1 TByte) more expensive than before. That said, it’s now $100 lower than the 1 TByte variant price at intro just a few months ago, and capacity-upgrade prices have also decreased.

The M5 MacBook Air(s)

Here’s another example of not being able to tell, based solely on external appearances, which generation of devices you’re looking at. Coming, as with its M3- and M4-based forebears, in both 13” and 15” versions, the M5 MacBook Air also upgrades to Apple’s N1 network connectivity chip. But, speaking once again of (flash, specifically) memory, and akin to the product line option slimming for the 14” M5 MacBook Pro mentioned in the prior section, the lowest available capacity for the new devices is 512 GBytes, versus 256 GBytes in the previous generation. I’m guessing that the reasoning is twofold this time: as with the 14” M5 MacBook Pro’s option-culling, the company is “hiding” its higher flash memory costs by only offering more profitable capacity choices to customers. Plus, by doing so, Apple can more clearly differentiate the MacBook Air from its other products. Speaking of which…

The MacBook Neo

I’ll kick off this section with a few history lessons. Back in 2015, Apple introduced the “new MacBook” (also commonly referred to as the 12” MacBook), with a Retina-resolution display and based on Intel m-series (and later, i-series) CPUs. It slotted between the then-non-Retina MacBook Air and the high-end MacBook Pro in Apple’s product portfolio from a pricing standpoint, even though its processing performance undershot that of the notably less expensive MacBook Air. Plus, it was hampered by the unreliable “butterfly” keyboard. It was discontinued after only three hardware iterations and four years of production.

In addition to its unfavorable price comparison to the MacBook Air, the “new MacBook” was also still competing to a degree against then-popular Windows-based “netbooks”, which were even lower priced. Back in late 2008, then-CEO Steve Jobs had (in)famously quipped re netbooks, “We don’t know how to make a $500 computer that’s not a piece of junk.” Hold that thought.

My last history lesson is, conversely, a Steve Jobs success story. Back in mid-1999, two years (and change) after Jobs’ return to Apple and less than a year after launching the consumer-tailored iMac desktop, Apple unveiled the iBook laptop:

which came in multiple eye-catching, intentionally non-“business” color options:

Quoting Wikipedia:

The line targeted entry-level, consumer and education markets, with lower specifications and prices than the PowerBook, Apple’s higher-end line of laptop computers. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.

Look again at the image of the iBook’s color options. Now look at the photo at the beginning of this section. See where I’m going?

The newly unveiled MacBook Neo comes in two price tiers: $599 (with a further $100 discount for education customers; take that, Chromebooks) and $699. The higher-end variant gets you twice the SSD capacity—512 GBytes versus 256 GBytes—along with a Touch ID fingerprint reader built into the keyboard. That’s it. 8 GBytes of DRAM, with no upgrade option. No Thunderbolt, only two USB-C ports, one of them supporting only USB 2 speeds. The first-time use of an A-series processor, the (Apple Intelligence-capable) A18 Pro (albeit with one fewer graphics core enabled than the initial version in the iPhone 16 Pro series); that said, it seems to benchmark (at least) roughly on par with the M1 that until recently was still being sold by Walmart in the MacBook Air. And a networking subsystem rumored to come from MediaTek, versus developed internally.

In closing, at least for this section: what’s with the name? Some folks had forecasted that it’d just be called the “MacBook”, but as I’ve already noted, that particular name is now “damaged goods”. Others thought that an “iBook” resurrection was in the cards, but Apple stopped referring to devices via “i” monikers a while ago. That said, “Neo” was definitely not on my bingo card. Maybe someone in Cupertino is a fan of The Matrix, but thought that “MacBook Mr. Anderson” would be too ponderous?

Displays

Having already passed through 2,000 words, I’m going to keep this section short. Apple announced two new Studio Display models, its first updates to this particular product category in many years. They’re both 27” in size, with 5K Retina resolutions, although their refresh rates, dynamic ranges, and other image quality measures vary. The “inexpensive” one starts at $1,599, with its pricier sibling beginning at $3,299; both are available in standard or (upgrade) nano-texture glass options, and mounting and other accessories are also available. And interestingly, at least to me, they don’t work with legacy Intel-based Macs, even the scant few models (one of which I’m currently typing on) that are still supported by MacOS 26. For more details, check out the press release.

And what about…

The M5 Mac mini, whose possibility I alluded to yesterday? Didn’t happen, even though the current M4-based models are popular with the agentic AI enthusiast community (and others). That said, in revisiting my prognostication yesterday afternoon, I remembered that Apple had also skipped the M3 Mac mini generation; the time-consuming form factor redesign from the M2 to the M4 might at least partly explain that delay.

And what of the upgrade to the “vanilla” iPad that lots of folks were forecasting would happen this week? Another nope. The primary rationale here was that it was the only remaining member of Apple’s current product line whose CPU (the A16) doesn’t support Apple Intelligence. But there was no evidence of the telltale indicator of a new product’s arrival: depleted retail inventories of the current model. My guess: Apple will be happily talking about AI again at this year’s WWDC, now that Google’s on board as the company’s development partner, and that’d be a perfect time to announce the “iPad 12”…or maybe “iPad Neo”? I jest (I hope).

Time to put down my cyber-pen and turn it over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


MCU enables ASIL D safety and control

Thu, 03/05/2026 - 19:06

Built on a 28-nm process, the Renesas RH850/U2C automotive microcontroller delivers robust connectivity and security for modern E/E architectures. This 32-bit MCU expands the RH850 lineup with a cost-optimized option for chassis and safety systems, battery management, body control, and other ASIL D–rated applications.

The device integrates four RH850 CPU cores running at up to 320 MHz, including two lockstep cores, and up to 8 MB of on-chip flash memory. It combines 10BASE-T1S and TSN Ethernet (1 Gbps/100 Mbps), CAN XL, and I3C with widely used interfaces such as CAN FD, LIN, UART, CXPI, I2C, I2S, and PSI5.

In addition to functional safety support up to ASIL D under ISO 26262, the RH850/U2C meets current cybersecurity requirements in accordance with ISO/SAE 21434. The MCU integrates hardware acceleration for cryptographic algorithms, ranging from post-quantum cryptography (PQC) to those mandated by current Chinese and other international regulations.

The RH850/U2C is available in BGA292 and HLQFP144 packages.

RH850/U2C product page 

Renesas Electronics 


VNAs perform production test up to 9 GHz

Thu, 03/05/2026 - 19:06

With typical measurement speeds of 25 µs/point, Copper Mountain’s three SC series VNAs enable efficient testing in both R&D and manufacturing environments. The SC0402, SC0602, and SC0902 two-port analyzers cover a common frequency start of 9 kHz, with upper ranges of 4.5 GHz, 6.5 GHz, and 9 GHz, respectively.

These instruments offer a typical dynamic range of 130 dB (10 Hz IF BW) for precise characterization of RF components and complex systems. Output power can be adjusted from -50 dBm to +5 dBm, with up to 500,001 measurement points/sweep. Measured parameters include S11, S21, S12, and S22.

Standard software capabilities, available without a paid license, include linear and logarithmic sweeps, power sweeps, and time-domain conversion with gating. Additional functions include S-parameter embedding and de-embedding, limit testing, frequency offset, and vector mixer calibration.

Automation is supported through LabVIEW, Python, MATLAB, .NET, and other programming environments, allowing up to 16 independent channels with 16 traces/channel. A manufacturing test plug-in is available as an add-on to integrate the VNA software into existing automated manufacturing and QA processes.
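
For a flavor of what that automation can look like, here is a hedged Python/PyVISA sketch. The VISA resource address is a placeholder, and everything beyond the IEEE-488.2 common commands (*IDN?, *OPC?) is generic SCPI that should be verified against the SC series programming manual rather than treated as the instrument’s actual command set:

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Placeholder address: Copper Mountain VNAs are driven through their PC
# application, which exposes a socket/VISA interface; substitute your own.
vna = rm.open_resource("TCPIP0::localhost::5025::SOCKET")
vna.read_termination = vna.write_termination = "\n"

print(vna.query("*IDN?"))          # standard IEEE-488.2 identification

# Illustrative SCPI (verify against the SC series manual): set up a
# full-span S21 sweep, wait for completion, then read formatted data.
vna.write("SENS1:FREQ:STAR 9E3")   # 9 kHz start
vna.write("SENS1:FREQ:STOP 9E9")   # 9 GHz stop (SC0902)
vna.write("CALC1:PAR1:DEF S21")
vna.query("*OPC?")                 # block until pending operations finish
s21 = vna.query("CALC1:DATA:FDAT?")
print(s21[:80], "...")
```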

The SC series VNAs carry MSRPs of $13,995 (SC0402), $15,995 (SC0602), and $17,995 (SC0902).

Copper Mountain Technologies 


MCU brings USB-C power to embedded devices

Thu, 03/05/2026 - 19:05

Infineon’s EZ-PD PMG1-B2 MCU integrates a single-port USB Type-C PD controller with a 55-V buck-boost controller for charging 2- to 12-cell Li-ion battery packs. Compliant with the latest USB Type-C and PD specifications, the device accepts an input voltage range of 4.5 V to 55 V with switching frequencies programmable from 200 kHz to 700 kHz.

The MCU targets USB-C-powered embedded devices in consumer, industrial, and communications markets, where devices make use of its integrated functions. Typical applications include cordless power and gardening tools, vacuum cleaners, kitchen appliances, e-bikes, drones, and robots.

The EZ-PD PMG1-B2 features a 32-bit Arm Cortex-M0 processor with 128 KB of flash and 8 KB of SRAM for customizable embedded applications. It integrates analog and digital peripherals—including ADCs, PWMs, UART/I2C/SPI interfaces, and timers—reducing PCB space and BOM. A comprehensive SDK and software suite simplify development and system design.

Production of the EZ-PD PMG1-B2 is expected to begin in the second quarter of 2026. Samples, technical documentation, and evaluation boards are available upon request.

EZ-PD PMG1-B2 product page 

Infineon Technologies 


Passive limiter shields electronics from RF threats

Thu, 03/05/2026 - 19:05

Teledyne Microwave UK’s B3LT98026 is a passive wideband limiter designed to protect sensitive receiver front ends in defense and military communication systems. It operates from 0.1 GHz to 20 GHz and withstands up to 10 W peak input power under defined pulse width and duty cycle conditions.

The device enhances the survivability of Radar Electronic Support Measures (R-ESM) and Electronic Warfare (EW) systems operating in complex threat environments. It provides continuous, always-on protection against high-power RF and emerging Directed Energy Weapons (DEWs).

Across the operating band, the limiter maintains a maximum insertion loss/noise figure of 2.0 dB and a maximum input/output VSWR of 1.5:1. A fast 40-ns recovery time enables rapid return to nominal sensitivity following high-power events. The device operates over a temperature range of −20°C to +85°C, supporting deployment in demanding environments.

The compact SMA-based housing supports straightforward integration into existing architectures without requiring system redesign. The B3LT98026 is also compatible with Teledyne’s Phobos mast top unit and can accommodate additional RF elements, such as filters, when required.

The B3LT98026 is now available for evaluation in defense and EW systems.

B3LT98026 product page 

Teledyne Microwave UK 


Nordic debuts multiple cellular IoT products

Thu, 03/05/2026 - 19:05

Nordic Semiconductor expands its ultra-low-power cellular IoT portfolio with Cat 1 bis, satellite NTN, and advanced LTE-M/NB-IoT with edge AI. Leveraging the proven nRF91 series, the nRF92 and nRF93 deliver a scalable, secure platform for global connectivity.

The nRF92 LTE-M/NB-IoT and satellite NTN series introduces the company’s smallest, most highly integrated, and power-efficient cellular solution. It combines a high-performance application MCU with Axon neural processing units, a multi-constellation GNSS receiver, Wi-Fi positioning, and sensor coprocessing. Lead customer sampling is underway, with general availability expected in early 2027.

The nRF93M1 is an LTE Cat 1 bis cellular IoT module with integrated MCU, LTE modem, GNSS receiver, and Wi-Fi positioning. It supports up to 10 Mbps downlink and 5 Mbps uplink, offers global LTE coverage, and is designed for low-power, compact applications. The module is compatible with nRF Cloud for device management, firmware updates, and location services. Lead customers are currently developing products with the nRF93M1, with general availability starting mid-2026.

Additionally, Nordic has enhanced the nRF91 LTE-M/NB-IoT series with 3GPP-compliant GEO and LEO satellite NTN connectivity and sub-GHz fallback to maintain connectivity when public networks are unavailable. The company also introduced the nRF91M1 module, a compact Smart Modem that simplifies adding cellular connectivity to host–modem designs.

Nordic Semiconductor 


EV system design from components to modules to software

Thu, 03/05/2026 - 15:01

Electric vehicle (EV) design at the system level is a rapidly evolving landscape encompassing components, hardware modules, and software platforms. So, on the first day of Automotive Tech Forum 2026, which was dedicated to EV designs, a panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” took a deep dive into the system-level intricacies of EV designs.

Carsten Himmele, marketing manager for automotive at Allegro MicroSystems, highlighted the growing presence of silicon carbide (SiC) in traction inverters due to its ability to deliver higher bandwidth and efficiency. However, while talking about motor control for EV traction, he also mentioned challenges in operating in harsher electrical environments.

“SiC brings in higher bandwidth for motor control, but it also makes the electrical environment somewhat harsher,” he said. Himmele added that advanced phase-current sensing and inductive rotor-position sensing are essential for overcoming these challenges. “Moreover, system-grade building blocks reduce the number of external components and improve design efficiency,” he concluded.

That’s where gallium nitride (GaN) offers key advantages, said Alex Lidow, CEO and co-founder of Efficient Power Conversion (EPC). “GaN is smaller, more efficient, and more rugged compared to silicon and SiC,” he said. “It’s particularly effective in 48-V systems, which complement the emerging 800-V architectures.”

Lidow added that EVs with 48-V systems are now leading the way, and that GaN devices are 5 to 7 times more efficient than their MOSFET ancestors. “GaN is powering onboard chargers, DC/DC converters, battery cooling pumps, steering systems, and infotainment.”

Rohan Samsi, VP of GaN Business Division at Renesas, also talked about the paradigm shift GaN brings to power converters, enabling simplified single-stage designs. “The bidirectional switch allows you to take out something that was a multi-stage converter and replace it with a single stage.” To achieve integration synergy, Samsi emphasized that GaN’s strengths in current sensing, temperature sensing, and gate drive enable holistic EV solutions.

Finally, Kerry Grand, marketing manager for Simulink Automotive at MathWorks, turned the discussion toward the software aspects of design. He was asked to brief the panel on the latest developments in EV traction from a system-integration standpoint, and on what hardware testing uncovers about the present and future of the EV drivetrain.

Grand began with an insight into EV system-level design through simulation and model-based design. Then he identified enduring challenges in EV system design, including high-voltage isolation, battery life optimization, and thermal management. “Simulating detailed thermal systems offers automotive OEMs the ability to trade off temperature limits without compromising system performance.”

At a time when EV design building blocks like traction inverters and battery management systems (BMS) are continually adding functionality, system-level challenges are a critical area to watch. The panel discussion at Automotive Tech Forum 2026 provides a glimpse of design challenges and viable solutions in this design realm.

You can watch this session along with all sessions from the Automotive Tech Forum 2026 virtual event on demand at www.automotiveforum.eetimes.com.


Cardiac monitors: Inconspicuous, robust data collectors

Thu, 03/05/2026 - 15:00

As follow-up to last month’s narrative of a cardiac abnormality thankfully detected by wearable devices, this engineer details the monitoring system he subsequently donned for a month.

Two-plus years ago, my contributor-colleague John Dunn described his most recent experience with a wearable cardiac monitor. And, as any of you who read one of my blog posts last month already know, I more recently followed in his footsteps. I don’t yet know the outcome of my heart health study; my follow-up appointment with the cardiologist is a week away as I type these words. Regardless, I thought you might still find it interesting to learn about the gear I toted around, stuck to my chest (and in my pocket), for 30 days, and my experiences using it.

The system I used was Philips’ MCOT (Mobile Cardiac Telemetry), specifically its “patch” variant:

Here’s an overview video; others, plus documentation, are at the product support page:

I took several “selfies” of the sensor in place on my chest but ultimately decided to save you all the abject horror of seeing any of them. Instead, I’ll stick with these stock images:

My initial scheduled meeting with the cardiologist took place on December 12, 3+ weeks after our “introduction” at the emergency room. I’d been on both beta blockers (to regulate my heartbeat) and blood thinners (in case my prior irregular rhythm had resulted in the formation of a clot) since my initial visit to the hospital in mid-November. The cardiologist ordered the monitor, which arrived a bit more than a week later; I began wearing it the day after Christmas.

Here’s the box that the system comes in:

Open sesame:

The first thing I saw was the initial sensor patch, along with the return shipping packaging bag. Below it was the template I used for proper placement each time I stuck a patch on my chest:

The bulk of the contents were contained in two inner boxes, the first labeled “Getting Started” and the second referred to as “Monitoring”. Inside the first:

were several primary items:

along with installation and operation overview instructions:

The monitoring device, both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

whose dimensions and Android operating system foundation, along with the legacy presence of an analog headphone jack alongside the USB-C port:

and a multi-camera rear array in a specific arrangement:

suggest it to be a custom-software derivative of Samsung’s Galaxy A52 smartphone, introduced in March 2021:

It came with the translucent green case pre-installed, by the way. Here are some other overview images of the smartphone…err…monitoring device (its left side was unmemorable so I didn’t bother):

Next up was a small scrub pad used to further prepare my chest for patch application, after initial hair shaving. And, of course, there was the sensor itself:

Its edge arrived already abraded; I’m guessing that it had already been popped open, with its rechargeable battery subsequently replaced, at least once prior to its arrival at my residence:

Now for box #2:

More instructions, of course:

along with more patches, a more detailed instruction booklet, and the dual-charging unit:

The AC/DC adapter has two USB-A outputs:

which can be used in parallel:

One, connected to a red USB-A to USB-C cable, is used for daily recharge of the “monitoring device” (smartphone). The other (black, this time) cable terminates in a charging dock for the sensor, which I used every five days in conjunction with (and in-between) the patch removal and replacement steps:

Here’s how the initial “monitoring device” bootup went (since this was a custom Android-plus-app build, I wasn’t able to grab screenshots directly from the smartphone, perhaps obviously):

After initial charging of both the monitoring device and sensor, I continued the setup process:

Here’s what a patch looks like when you first take it out of the package; top:

and bottom:

Pressing down on the sensor while aligned with the patch base snaps it into place:

A briefly illuminated LED subsequently indicates that the sensor is correctly installed, at which point the monitoring device is able to “see” it (broadcasting over Bluetooth, presumably Low Energy):

At this point, you can peel away the protective clear plastic cover over the back side adhesive:

All that’s left is to press it into place on your chest…and then peel off the existing patch, pop out and recharge the sensor and redo the installation process five days later:

Lather, rinse, and repeat until the total 30-day cycle is over, which the system thoughtfully tracks on your behalf. Then ship it all back to the manufacturer.

The monitoring device, which regularly receives data transmissions from the sensor, periodically then uploads the data to the “cloud” server over an LTE or EV-DO cellular data connection.

If you forget to keep the monitoring device close by, data won’t be lost, at least for a while. There’s an unknown amount of memory onboard the sensor (yes, I searched for a teardown, alas unsuccessfully), albeit presumably not the full 2 GBytes allocated to this alternative device designed solely for local data logging. But the monitoring device will still alert you (both visually and audibly) to the lost wireless (again, presumably Bluetooth’s LE variant) connection:

You’ll also be alerted if the sensor’s integrated battery drops to a low level and recharge is necessary (I proactively did this every five days, as previously noted, since I’d received six total patches):

If you feel like something’s amiss with your “ticker” (heart pounding, fatigue, etc.), you can tap on the icon at the center of the display, and the monitoring device will send an alert “flag” for subsequent correlation with the potential cardiac arrhythmia data collected at that same time:

And in closing, here are some shots of other monitoring device display screens that I captured:

By the time you see this, assuming I don’t need to reschedule for some reason, I will have met with my cardiologist and gotten the (hopefully positive) results. I’ll follow up in the comments. And please also share your thoughts there! Thanks as always for reading.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


MWC 2026: Apple, Google, Samsung and Other Contending Contestants

Wed, 03/04/2026 - 21:46

Ever imagine that memory supply concerns (translating to system capacity and price) would dominate multiple companies’ announcements? “And so it goes”, to quote Kurt Vonnegut.

The Mobile World Congress (MWC) show, held each year in Barcelona, Spain (one of my favorite cities in the world) and in progress as I write these words, doesn’t have quite the same cachet as it did previously. Two primary reasons explain this decline: the cellphone market has subsequently (and notably so) consolidated, and it’s increasingly common for the market participants that remain to announce new products at their own events.

That said, these go-it-alone suppliers still often chronologically cluster their announcements at or near the MWC timeframe. Plus, the conference organizers have broadened the scope of the show beyond just cellphones (nowadays: smartphones) to also encompass other mobile devices such as tablets and laptop computers…although classifying a static desktop-based, AC-powered robot as “mobile” is a stretch, no matter how dynamic its joints and display may be:

Apple, Google, and Samsung were among the companies who made notable(-ish) news over the past week. I’ll cover them chronologically in the following sections.

Mountain View gets the jump on Cupertino (once again)

Last spring, Google unveiled its then-latest cost-focused phone, the Pixel 9a, a few weeks after Apple had rolled out its initial (albeit iPhone 16-numbered) “e” rebrand of prior “SE” multi-gen economical-tuned offerings. I subsequently bought a Pixel 9a for myself, replacing (and leveraging a then-lucrative trade-in value promotion for) my prior backup handset, a Pixel 6a.

That said, Google had already flip-flopped prior longstanding fast-follower precedence with the late summer 2024 launch of the mainstream Pixel 9 and high-end Pixel 9 Pro, which predated their iPhone 16 competitors by a month (versus the historical cadence of being a month belated). The same thing happened last year. And now, Google has extended its “eager beaver” behavior to the entry-level end of its smartphone product suite with the Pixel 10a, which the company sneak-peeked in early February, with a full unveil two weeks later complete with a pre-order opportunity, and shipments starting later this week.

Good news: skyrocketing DRAM and NAND flash memory prices haven’t led to handset price increases (or, alternatively, either integrated memory capacity decreases or the culling of lower-capacity product variants); the Pixel 10a price ($499) is unchanged from its Pixel 9a predecessor. Bad news (albeit good news for me, no longer FOMO-fraught): unless you’re insistent on a completely flat backside absent any camera “bumps”, the design is largely unchanged as well. Same chipset. Same memory generations and speed bins. The display is modestly enhanced—peak brightness, bezel thickness, and cover glass shock resistance—as are the wired and wireless charging power levels, and therefore speeds, but that’s basically it. Oh…and still no Qi magnet inclusion. Hold that thought.

A higher-end attack

A week later, and a week ago, Samsung rolled out its Galaxy S26 product line, which competes against Apple’s iPhone 17 series launched last September, along with new-generation earbuds (but no new smart ring; was Oura’s legal-pressure campaign effective?):

Here again, not much has changed from the year-prior Galaxy S25 predecessors. The “adder” that seemingly got all the media attention, Privacy Display, derives from an OLED display tweak and is only available on the high-end Ultra variant. Unlike Google, Samsung is generationally raising prices, predominantly blaming memory cost increases as the root cause, and is also not offering comparable low-end storage capacity options as with S25-series predecessors. The memory blame assignment is particularly ironic in this case because the Samsung parent company also has a semiconductor (memory, specifically) division under its corporate umbrella.

That said, as my colleague Majeed recently wrote about at length and I’d also noted in my earlier 2026-forecast coverage, HBM memory is capturing the lion’s share of AI-driven customer demand (therefore also supplier attention) right now, versus the DDR4- and DDR5-generation DRAM technologies found in computers, smartphones, tablets, and the like. Speaking of AI, Samsung Mobile (like Google, and in partnership with Google, along with Perplexity) is betting on it as a trend-setting differentiator from Apple’s underperforming alternative, no matter that it ended up not being a broadly effective sales pitch motivator last year. That Apple has now partnered with Google, too, must have been a hard pill for Cupertino to swallow. Oh, and by the way, once again, no Qi magnets, although the argument is pretty persuasive, at least to me. Paraphrasing: “Why bother doing so, bumping up the bill-of-materials cost in the process, since most everybody also uses phone cases anyway, and they already come with magnets?”

Not a one-trick pony

All of which leads us to Apple itself, which yesterday (as I’m writing these words on Tuesday afternoon, March 3) released its latest entry-level smartphone, the iPhone 17e:

Minutia first: a year ago, I gave the company grief for busting through the $500 price barrier while, as the original MagSafe innovator, bafflingly leaving magnets off its wireless charging implementation. First World problem solved: unlike with Google and Samsung, as earlier mentioned, they’re there in the iPhone 17e. We can all now once again sleep soundly.

Now, for memory, specifically (in this case) flash memory. Like Samsung but unlike Google, Apple lopped the prior-generation 128 GByte storage capacity option off the low end of the product suite. But unlike both Samsung and Google, the capacity increase comes with no associated price increase; Apple has stuck with $599 for the now-256 GByte variant this time. The SoC is also upgraded, from the A18 to A19 (the same generation as in the iPhone 17), albeit with only 4 GPU cores (versus 5 with the iPhone 17), as is the cellular modem (the newer C1X). And a few other tweaks: a third color option (pink) and updated Ceramic Shield 2 front glass protection.

Since, as I mentioned at the beginning, MWC has expanded beyond phones into tablets (among other things), I’ll also lump into today’s coverage the latest M4 SoC-based generation of the iPad Air, which Apple also announced yesterday.

As before, it comes in both 11” and 13” variants; the N1 networking and C1X cellular chips are also on board for the ride this time. Echoing back to my earlier highlight of the iPhone 17-vs-17e A19 SoC core-count discrepancy, the version of the M4 SoC in the new iPad Air is also downbinned from the ones in the various versions of the M4 iPad Pro, albeit this time from both CPU (both performance and efficiency, in fact) and GPU core-count standpoints, with requisite benchmarking-results impacts. And once again, memory is the most notable news (IMHO, at least) with these devices. But this time, DRAM is in the spotlight. Likely with locally stored AI model sizes in mind, the low-end M4 iPad Air variants deliver a 50% capacity increase (from 8 GBytes to 12 GBytes), still with no corresponding price increase…

…which circles us back to my memory-related comments that kicked off this piece. If volatile (DRAM) and nonvolatile (flash memory) supplies are constrained, and prices are therefore skyrocketing, why is Google able to hold steady on its device pricing, and Apple to go even further, holding prices while simultaneously boosting on-device capacities? Right now, I suspect, both companies’ sizes have enabled them to negotiate favorable pricing and volume contracts with memory suppliers. And further to the “sizes” point, even after those contracts time out, I suspect that both companies will be willing (albeit not necessarily delighted) to endure short-term profit margin pain in order to squeeze smaller, less profitable competitors out of the long-term market.

More to come

When I saw yesterday that Apple had released new public beta versions of its next operating system updates for phones and tablets, but not for computers, I suspected that this delay was only temporary and related to new computers planned for announcement today. And right on schedule, they (and therefore it) came this morning: updated versions of the 14” and 16” MacBook Pro, based on the new Pro and Max variants of last fall’s M5 SoC (now also inside the MacBook Air), along with a duo of new displays.

I doubt we’re done; a new low-end MacBook (likely named the Neo) based on the iPhone 16 Pro’s A18 Pro SoC is rumored to still be in the queue for Apple’s “big week ahead”, for example, and I can’t help but wonder if we’ll also get an M5-based Mac mini (last updated in November 2024). Stay tuned for more coverage to come from yours truly, hopefully later this week. And until then, let me know your so-far thoughts in the comments!

p.s…Two more MWC-related tidbits. Qualcomm has a promising next-generation SoC for smart watches and other wearables on the way. And speaking of Qualcomm, ready or not, 6G is coming.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post MWC 2026: Apple, Google, Samsung and Other Contending Contestants appeared first on EDN.

Stretching a bit

Wed, 03/04/2026 - 15:00

I love Design Ideas (DIs) with a backstory.  Recently, frequent DI contributor Jayapal Ramalingam published an engaging tale of engineering ingenuity coping with a design feature requirement added unexpectedly and very (very!) late in product development: “Using a single MCU port pin to drive a multi-digit display.”

Jayapal writes, “Imagine a situation where you have only one port line left out, and you are suddenly required to add a four-digit display.”

Yikes!  Add a looming delivery deadline to build suspense, and this becomes a classic nightmare scenario. It could easily develop, from an engineering standpoint, into a horror story straight out of the pages of Stephen King. Well, okay. Almost.

Wow the engineering world with your unique design: Design Ideas Submission Guide

But in a clever plot twist, engineer Jayapal shows how a bit (no pun!) of ingenuity turns this tale of terror into an opportunity for some cool circuit design. In his DI, different durations of software-generated pulses on that lonely port line become the control signals necessary for running the newly needed decimal display.

Crisis and calamity averted.

So I wondered how the same basic plot could make a basis for a more generalized storyline. In this version, not just four digits of numerical binary-coded decimal (BCD), but N bits of arbitrary parallel binary outputs would be driven in a similar solitary serial fashion. And all this would be achieved by the same singleton GPIO port bit. Figure 1 shows how the story takes shape.

Figure 1 A lonely GPIO bit loads a lengthy serial string of parallel registers. 

Incoming pulses of variable length on GPIO are buffered by noninverting gate U1a and drive three sets of inputs. 

  1. The U1b timing circuit (the 400-µs R1C3 time constant serving as the SER-input zero/one discriminator),
  2. The U1c/U1d timing circuit (the 2.4-ms R4C2 AC-coupled Schmitt trigger that generates the parallel RCLK clock), and
  3. The shift registers’ SRCLK serial clock.

As illustrated in Figure 2, the interpulse (idle) state of the GPIO is high = 1. 

Figure 2 GPIO pulse timing.

A serial bit transfer pulse starts when the GPIO goes low = 0, releasing the timing RCs. Whether the pulse shifts in a 0 or a 1 bit depends on its duration. If it’s shorter than 100 μs (T0), the R1C3 time constant will still hold SER low when the rising edge of SRCLK clocks the serial registers, causing a 0 bit to be shifted in. If it’s longer than 400 μs (T1), the opposite will occur, and the shift register gets a one.

In this way, a data rate between 2 kbps and 10 kbps (depending on the relative frequencies of ones and zeros) can be maintained as long as the idle period between pulses remains less than 600 μs. Completion of data transfer is signaled by allowing GPIO to remain idle for > TR = 3.5 ms.  This allows R4C2 to time out and a transfer pulse to occur on RCLK, commanding a broadside parallel data transfer from the shift registers to the parallel output bits.
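For readers who want to prototype the MCU side, here’s a minimal bit-banging sketch of the pulse protocol just described. It’s written in generic C against two hypothetical HAL hooks (gpio_write() and delay_us()), and the specific pulse widths are illustrative choices that sit safely inside the T0/T1 windows above rather than values taken from the original DI.

#include <stdint.h>

/* Hypothetical HAL hooks -- replace with your MCU's GPIO and delay calls. */
extern void gpio_write(int level);   /* drive the single port pin high (1) or low (0) */
extern void delay_us(uint32_t us);   /* busy-wait for the given number of microseconds */

/* Pulse widths chosen to sit inside the article's T0 (<100 us) and T1 (>400 us)
   windows; tune them against your actual R1C3 time constant. */
#define PULSE_ZERO_US    50u    /* short low pulse -> shifts in a 0 */
#define PULSE_ONE_US     500u   /* long low pulse  -> shifts in a 1 */
#define INTERBIT_IDLE_US 200u   /* idle-high gap, kept well under 600 us */
#define LATCH_IDLE_US    4000u  /* > TR = 3.5 ms: lets R4C2 time out and fire RCLK */

static void send_bit(int bit)
{
    gpio_write(0);                                   /* start of pulse: releases the timing RCs */
    delay_us(bit ? PULSE_ONE_US : PULSE_ZERO_US);    /* width encodes the bit value */
    gpio_write(1);                                   /* rising edge clocks SRCLK */
    delay_us(INTERBIT_IDLE_US);                      /* brief idle before the next bit */
}

/* Shift out n bits, MSB first (adjust the order to match your register wiring),
   then pause long enough for the RCLK one-shot to latch them to the outputs. */
void send_word(uint32_t value, int n_bits)
{
    for (int i = n_bits - 1; i >= 0; i--)
        send_bit((value >> i) & 1u);
    delay_us(LATCH_IDLE_US);
}

Calling send_word(0x1234, 16), for instance, would clock 16 pulse-width-encoded bits into two daisy-chained 8-bit registers and then idle long enough for RCLK to latch them.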

Note that, going back to the original horror story, four BCD digits = 16 bits, two 8-bit shift registers, and 12 ms would be enough logic and time. I think that makes for a pretty good ending for a yarn about a far stretch of a single bit.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

Related Content

The post Stretching a bit appeared first on EDN.

EV design: The truth about 400-V to 800-V battery transition

Wed, 03/04/2026 - 14:45

In electric vehicle (EV) designs, the shift from 400-V to 800-V battery systems is now a pressing issue. So, the panel discussion on the first day of Automotive Tech Forum 2026 was a good venue for a reality check on the future of 800-V EV architectures.

The panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” explored the latest in battery management system (BMS) designs, what battery modeling tells us about the design challenges as we move toward 800-V systems, and how design building blocks like motor control in EV traction are coping with this transition.

The panelists discussed how 800-V EV architectures could reshape vehicle power distribution. Jerry Shi, sector general manager for EV, HEV, and Powertrain at Texas Instruments, spoke about the emerging 800-V EV design landscape, specifically from a drivetrain standpoint. He also outlined critical design challenges and viable solutions in this design arena.

Carsten Himmele, marketing manager for Automotive at Allegro MicroSystems, cautioned about the industry-wide adoption of 800-V battery systems. “The 400-V battery systems will still dominate mainstream markets due to cost and complexity trade-offs.”

Rohan Samsi, VP of GaN Business Division at Renesas, echoed similar sentiments while envisioning a deeper adoption of 800-V architectures to address range anxiety and efficiency concerns. He acknowledged the challenges such as cost, complexity, and consumer preferences. “The trade-offs between 400-V and 800-V architectures relate to component complexity and service warranty costs.”

So, in the 400-V to 800-V transition, there was a consensus that 800-V systems offer advantages in fast charging and reduced weight. However, for now, panelists expect that 400-V systems will remain dominant in mainstream markets due to their affordability.

Related Content

The post EV design: The truth about 400-V to 800-V battery transition appeared first on EDN.

Custom design PWM filters easily

Tue, 03/03/2026 - 15:00

It’s well known that the main job of a pulse width modulator’s filter is to limit the maximum peak-to-peak amplitude of the ripple induced at the PWM frequency, fPWM. It attenuates this ripple to a specified fraction (Frac) of the full-scale PWM output while passing PWMavg, the average value of the PWM signal.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Although the duty cycle can change instantaneously, the filter’s response to that change takes time to settle. It’s convenient to define the settling time Tfrac to be that after which the transient response remains within ± Frac of PWMavg. (After fully settling, the response variations will be from the ripple only and will remain within ± Frac/2 of PWMavg.) And it’s generally true that the more the ripple is attenuated, the larger Tfrac is. But for a given filter with two or more poles, there is an infinite number of combinations of component values that will limit the maximum ripple to Frac. (Think of the number of poles as being the number of capacitors in an R – C filter.) And yet the value of Tfrac is typically different for each combination. So we have a filter optimization problem: find the component value combination that minimizes Tfrac while satisfying the ripple requirement Frac.

I’ve addressed this issue before in a Design Idea (DI), but the procedure’s complexity was perhaps off-putting and inadequately flexible. I’ve since revisited the problem, finding a somewhat improved and analytically optimal solution. But that improvement alone does not justify a new DI.

So, why this new DI?

What I think does justify this is a spreadsheet that offers greater flexibility in terms of filter requirements and automates all the work for you. Download the files from https://github.com/Christopherrpaul/Customizable-PWM-Filter .

If you use OneDrive or something like it, you must install the files outside the OneDrive folder. (Safely ensconced there, OneDrive doesn’t “see” them and can’t interfere with the spreadsheet’s query of the paths to where certain files are stored locally.)

Open the spreadsheet. In the following, the yellow-highlighted parameters here and on the spreadsheet are inputs to be supplied by the user; the green-highlighted ones are spreadsheet outputs. Tell it your PWM frequency, in Hz, specify the required value of Frac, and press the “Calculate” button.

The Visual Basic for Applications (VBA)-driven spreadsheet takes that information and determines the values of the filter’s real pole and complex-pole pair (the Q and ω0 of the latter), yielding the optimal, smallest Tfrac, which it also displays.

To produce an implementable filter, it then combines this information with the (default) values of the filter’s capacitors c1, c2, and c3 . (These you can change and again press Calculate.) From all of this, it determines both the exact and the closest standard E96 values for the resistors r1, r2, and r3 needed to complete the filter. The filter itself is the third-order Sallen-Key low-pass depicted in the schematic portion of the spreadsheet screenshot seen in Figure 1.

Figure 1 A screenshot of the spreadsheet that runs the show. See the text.

And since we all like graphs, two have been provided. The one on top shows how ω0 and the real portion of all poles vary with Q. More importantly, it also shows that TFrac generally gets worse (larger) as Q is increased (not surprising, with the concomitant increase in oscillatory amplitudes).

The other graph shows the decay with time of PWMavg minus the absolute value of the transient response, with the voltage displayed on a logarithmic scale. The bumps are evidence of a damped oscillatory behavior.

But as they say in the late-night TV commercials (or at least they used to), “But wait! There’s more!”

How do I know this thing works?

You might ask how you can confirm that this filter will perform as advertised. The answer is easy if you’ve installed LTspice on your computer and you tell the spreadsheet the path starting from the root directory to the LTspice.exe file. Mine’s in C:\Users\chris\AppData\Local\Programs\ADI\LTspice\LTspice.exe.

Don’t worry if you can’t see the entire entry in the Excel cell provided. (NOTE – With the discussions surrounding the ongoing changes in LTspice versions 26.x.y, these files have been developed for use with the stable and still widely used LTspice 17.1.15. This version can still be downloaded and installed: https://ltspice.analog.com/software/LTspice64.exe. I haven’t checked if the files work with the 26.x.y versions.)

Press the “LTspice: Exact…” button. It will automatically launch a simulation using the exact resistor values derived and plot the filter’s response to the two biggest transients: a “full” one from 0 to 100% duty cycle (no PWM ripple) and a “half” one from 0 to 50% duty cycle (maximum possible ripple). See Figure 2 for a sample LTspice run.

Figure 2 An LTspice run using Exact component values for a sample filter.

The responses have been offset to reach their final values at 0 V. Tfrac appears on the plot as a vertical line along with two horizontal lines, which are at ± Frac. You can zoom in to see that the value of Tfrac is indeed correct; it crosses a ± Frac line exactly at the point that the full transition response does. (The full transition always takes a little longer to settle than the half-step transition.)

But alas, alack; this assumes perfect components with 0% tolerances. So the “LTspice Standard…” button launches a simulation of 100 Monte Carlo runs using the E96 resistor values, with capacitor and resistor tolerances of 1% each. (You can change all three of these default values and re-run the simulation. In fact, it’s worth considering the overall reduced settling time that can be had with suitably chosen 0.1% resistors added in series with small 1% resistors to more closely approach the exactly calculated values. Better-tolerance capacitors would also help, but they tend to be prohibitively expensive.)

As you’ll see, non-zero tolerance variations lead to settling times longer than Tfrac. But by performing an extended number of Monte Carlo runs, you’ll be able to determine the time beyond which even filters made out of real-world components will have settled to Frac.

Filter design constraints

The real portions of all poles in the filter have been constrained to be identical. The reason for this is that these values control the decay rates of the half- and full-step transients, either of which could dictate the overall settling time. Given that the total ripple attenuation is the product of the real parts of both poles, if one were smaller than the other, it would extend the overall settling time beyond that achieved with identical poles. This constraint also simplifies the optimization problem in that there is only one real and one imaginary value of poles to consider, rather than one imaginary and two real values.

Calculated resistor values 

Depending on certain inputs to the spreadsheet, the derived values of the filter resistances might be smaller or larger than you’d like. In that case, the input values of the capacitors could be multiplied by a constant K of your choosing to obtain new resistor values divided by that K.

The spreadsheet’s default capacitor values are in the “Goldilocks” range—large enough that op amp input and PCB capacitances will affect them minimally, but small enough that NPO/COG type capacitors (whose stability with temperature and DC voltages are demanded in filter designs) are not prohibitively expensive. The ratios of one to the other of the default capacitor values have been shown to consistently result in realizable filters. Feel free to experiment with other values and ratios, but be aware that it might not be possible to realize filters with those changes.

Filter drivers

Do not drive the filter from a microprocessor directly. Its non-PWM functions draw currents that lead to small voltage drops across the IC-to-package-pin bonding wires. These induce errors by preventing signals from getting close to the ground and the supply rail. Instead, buffer the microprocessor with dedicated SN74AC04 logic inverters, which will swing to the rails, since they have no other currents to deal with and their outputs are minimally loaded. For a reasonably accurate reference voltage supplying the SN74AC04, consider the REF35.

SN74AC04-induced errors

It’s been pointed out that all digital drivers have different logic high and low resistances. These differences are sources of error that are worst at a 50% duty cycle. The part’s data sheet says that at a 3-V supply, the logic high voltage drop under a 12-mA load over the industrial temperature range could be as high as 560 mV, with a resistance of 45 ohms.

The logic low resistance maximum is a bit better, but there is no spec for the difference. The safe but admittedly ridiculous possibility is that the logic low resistance is 0 ohms, leaving us with a 45-ohm difference. This can be mitigated by paralleling G gates to reduce the drive resistance by that factor to produce a difference of Rdiff = 45/G.

Since no DC current can flow through the filter’s passive components, the fractional full-scale error at 50% is:

0.5 · r1 / (r1 + Rdiff) – 0.5 = –0.5 · Rdiff / (r1 + Rdiff)

For a b-bit PWM, you’d probably want the error to be less than half of one LSbit, or 2^–(b+1). So you’d require that r1 > Rdiff · 2^b.
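A few lines of standalone C can sanity-check that requirement; the G = 5 gate count below matches the example discussed next, and the 45-Ω figure is the worst-case high-side drive resistance cited above.

#include <stdio.h>

/* Worked check of the r1 > Rdiff * 2^b requirement, using the article's
   45-ohm worst-case drive-resistance difference and G paralleled gates. */
int main(void)
{
    const double r_diff_single = 45.0;     /* ohms, one SN74AC04 gate */
    const int    gates_paralleled = 5;     /* G */
    const double r_diff = r_diff_single / gates_paralleled;

    for (int b = 8; b <= 16; b += 4) {
        double r1_min = r_diff * (double)(1u << b);  /* r1 > Rdiff * 2^b */
        printf("b = %2d bits: r1 > %.0f ohms\n", b, r1_min);
    }
    return 0;
}

Run it, and it reproduces (unrounded) the minimum resistances quoted next.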

Consider G = 5. For b = 8, r1 > 2300 ohms. For b = 12, it’s 37 kohms, and for b = 16, 590 kohms. But this brings up a second point: a large b means a relatively small fPWM and therefore a large TFrac. Fortunately, there’s a way around this.

Double up

Summing the contributions of two 8-bit PWMs, one of whose signals’ amplitude is 256 times that of the other, allows both to have an fPWM 256 times larger than that of a single 16-bit PWM. This yields a TFrac reduced by the same factor. Figure 3 shows one way to employ this approach.

Figure 3 Configuration with independent most significant (MSbit) and least significant (LSbit) 8-bit PWMs, the latter contributing 1/256 of the former, to replace a single 16-bit PWM. This arrangement reduces the settling time by a factor of 256.
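On the firmware side, deriving the two duty cycles from a single 16-bit setpoint is just a byte split. In this sketch the duty-register names are placeholders for whatever compare registers your MCU’s PWM peripheral actually exposes.

#include <stdint.h>

/* Hypothetical 8-bit PWM duty registers -- substitute your MCU's actual
   compare registers for the two channels of Figure 3. */
extern volatile uint8_t PWM_MS_DUTY;   /* channel feeding the full-amplitude path */
extern volatile uint8_t PWM_LS_DUTY;   /* channel attenuated by 1/256 before summing */

/* Split a 16-bit setpoint across the two 8-bit PWMs. The filtered, summed
   output is then (ms + ls/256)/256 of full scale, i.e., code/65536. */
void set_output_16bit(uint16_t code)
{
    PWM_MS_DUTY = (uint8_t)(code >> 8);    /* most significant byte */
    PWM_LS_DUTY = (uint8_t)(code & 0xFF);  /* least significant byte */
}

With the LSbit channel scaled down by 1/256 before summing, the combined output settles to the original 16-bit code divided by 65536, full-scale.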

Op-amp considerations

Figures 1 and 3 lead to the question of which op amp to use. A rail-to-rail input and output unit is warranted. The OPA376 family of singles, duals, and quads is a good answer.

Its maximum input offset voltage is 25 µV at 25°C, with ±1 µV/°C drift from -40°C to +85°C, and it barely disturbs the accuracy of even a 16-bit PWM. Its input bias current of 10 pA maximum at 25°C, and its typical (no maximum spec) value of less than 50 pA at 85°C, introduces errors on par with its offset voltage. Consider the op amp’s output rail-to-rail limitations, however. Either avoid PWM duty cycles at the extremes, or extend the op amp’s supply rails a few tens of millivolts (see its data sheet) beyond those of the PWM.

In approaching your design, you might find the following nomograph in Figure 4 useful.

Figure 4 The above nomograph can aid in selecting the operating point of your design.

Problems, gripes, suggestions, requests, and accolades

The spreadsheet employs VBA numerical iteration routines to find the Q, ω0 pairs and the filter resistors. Although I’ve tested these routines extensively, it’s always possible that one or the other will fail to converge with some combination of input values.

In that case, please let me know by adding a note to the “Comments” section of this DI. This will generate an automatic email alert and will allow the inclusion in our conversation of others who might be interested. Please do not email me unless you have a comment that is truly meant to be private (a marriage proposal?). I encourage feedback of all kinds.

A grudging acknowledgement

I’d be remiss if I did not mention the help I got from a certain widely available AI program in developing this project. This ranged from deriving Inverse Laplace transforms and Newton-Raphson iteration algorithms to VBA coding.

But working with this AI wasn’t all lollipops and rainbows. In the course of the effort, I was reminded of Ronald Reagan’s admonition to “Trust, but verify.” But as things progressed, I dropped the “trust” part.

I found I had to break tasks down into sections, understand each that was provided, test assiduously, and make corrections before proceeding to the next step. Setting a multi-step task was a recipe for disaster. Still, AI is a valuable tool, and I find it even more valuable now that I better understand how to work with it.

I’d be interested in hearing about others’ experiences.

Related Content

 

The post Custom design PWM filters easily appeared first on EDN.

Plant pulse sensors: From soil probes to tree tattoos

Tue, 03/03/2026 - 11:40

Plants do not just grow—they signal. From the subtle moisture shifts in soil to the faint electrical rhythms coursing through leaves and stems, botanical sensors are turning greenery into living data networks.

What began with rugged soil probes has evolved into delicate tree tattoos that map physiological responses in real time. This convergence of biology and electronics is redefining how engineers, agronomists, and hobbyists alike monitor plant health, optimize yields, and even explore new frontiers in bio-inspired design.

Botanical sensors: Giving plants a voice

The term botanical sensor is best understood as an umbrella category rather than a single device. In agricultural technology (AgTech) and plant biology, it encompasses a wide range of instruments designed to monitor plant health and surrounding environmental conditions in real time.

In essence, these sensors give plants a “voice,” allowing them to signal their needs before visible stress, such as wilting, occurs. Unlike conventional weather stations that measure only ambient air, botanical sensors often interface directly with plant physiology or the immediate root zone, capturing data at the source of growth.

Beyond the broad category, it’s useful to distinguish between two key subtypes. In-plant sensors (often called plant wearables) are tiny, flexible devices attached directly to leaves or stems, enabling close monitoring of physiological signals.

In contrast, soil and root micro-environment sensors operate within the rhizosphere—the soil zone surrounding the roots—capturing data on moisture, nutrients, and microbial activity. These complementary approaches provide a layered view of plant health, and we will explore them in greater depth in the next section.

Figure 1 Visualizing plant–sensor interaction: leaf-mounted and root-zone probes capture real-time physiological data. Source: Author

Plant monitoring sensors: Soil, trunk, and surface frontiers

In principle, there is a wide variety of sensors designed to monitor everything from a small succulent on your table to a massive sequoia in a forest. Among these, soil-based sensors are the ones most often found in homes and farms. Rather than measuring the plant directly, they focus on the environment around the roots, where growth truly begins.

Moisture and conductivity sensors reveal water levels and soil salinity, offering insight into nutrient and fertilizer availability. Here, pH sensors track soil acidity, ensuring that nutrients are in a form the plant can actually absorb. Taken together, these instruments provide a root-level perspective that helps growers fine-tune conditions for healthier, more resilient plants.

Figure 2 The multi-parameter root zone soil sensor measures moisture, temperature, and electrical conductivity. Source: Delta-T Devices

For trees and large-scale agriculture, researchers often turn to sensors that measure the pulse of the plant directly. Sap-flow sensors, for instance, are needle probes inserted into the trunk to track how quickly water moves upward—essentially a heart rate monitor for a tree. Dendrometers capture the subtle micro-expansions and contractions of the trunk, revealing how trees shrink slightly during the day as they consume water and swell again at night.

Infrared leaf-temperature sensors add another layer of insight, detecting whether a leaf is sweating through transpiration. When leaves overheat, it usually signals stress: the plant has closed its pores to conserve water. Together, these devices provide a dynamic picture of plant physiology, extending monitoring beyond the soil to the living tissue itself.

Figure 3 The SFM‑5 sap flow sensor enables minimally invasive, high‑precision measurements of sap flow and sapwood water content in most tree species. Source: UGT

Notably, a newer frontier in plant monitoring involves sensors that adhere directly to the plant’s surface, much like a simple patch or sticker. Graphene tattoo sensors are ultra-thin films that can be taped to a leaf, tracking water loss (transpiration) in real time without causing harm.

Biosignal monitors go further, measuring the electrical signals coursing through plant tissue—essentially listening to how a plant reacts to pests, drought, or other stressors before any visible symptoms appear. While these technologies remain largely experimental, they represent an exciting shift from soil and trunk measurements to direct, non-invasive monitoring of plant physiology, offering a glimpse of how future growers may detect stress before it becomes visible.

In essence, there are so many sensors designed to capture a plant’s vital signs. Stomatal aperture reveals how widely the pores are open, regulating gas exchange and water loss. Sap flow tracks the speed at which water and nutrients move through the stem, a direct measure of circulation. Volatile organic compounds serve as chemical distress signals, emitted when plants face pests or disease.

Volumetric water content pinpoints the precise amount of water available in the soil, while electrical conductivity provides a proxy for salinity and nutrient levels. Together, these parameters form a concise diagnostic suite, offering a snapshot of plant health from root hydration to stress signaling.

On a related note, a chlorophyll sensor provides a direct measure of a plant’s photosynthetic capacity by gauging how much light is absorbed or reflected by leaves. Handheld meters and clip-on probes often use fluorescence or reflectance techniques to estimate chlorophyll content, which correlates strongly with nitrogen status and overall plant health.

Because chlorophyll levels drop under nutrient deficiency or stress, these sensors are widely used in precision agriculture to guide fertilization decisions and monitor crop vigor. Unlike soil or trunk sensors, chlorophyll sensors give an immediate snapshot of the leaf’s metabolic activity, making them a practical complement to water and nutrient monitoring systems.

Beyond electronic devices, there is also the emerging field of phytosensing, where plants themselves are engineered to act as living detectors. In this approach, a plant might be genetically modified to change color when it encounters a specific toxin in the soil, effectively turning its physiology into a visible alarm system. Phytosensing highlights a future where monitoring does not just rely on external instruments but on the plants’ own biology, transforming them into active participants in environmental sensing.

Connecting the sensors: Interfaces in practice

In practice, mainstream plant monitoring sensors rely on straightforward electrical connections and increasingly on wireless interfaces that tie them into larger IoT systems. Simple moisture probes output analog signals—usually variable voltage or resistance—requiring external circuitry for signal processing and interpretation.

More advanced probes, such as those for pH or electrical conductivity, typically use digital buses like I²C, SPI, or UART, which provide cleaner signals and allow multiple sensors to share the same wiring. Sap-flow sensors, by contrast, generate heat pulses and require timing circuits to measure how quickly the signal moves through the stem, while infrared leaf-temperature sensors may deliver either analog voltages or digital packets depending on design.

Once signals are captured, a microcontroller acts as a hub to convert raw data into usable readings. From there, connectivity options expand: Wi-Fi and Bluetooth are common in greenhouses or indoor setups, while LoRaWAN and Zigbee provide long-range or mesh networking for large farms.
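As a minimal illustration of that raw-data-to-usable-readings step, the short C routine below maps a 12-bit ADC count from a capacitive moisture probe to an approximate volumetric water content via two-point linear interpolation. The dry and wet calibration counts are hypothetical placeholders; they must be measured for your specific probe and soil.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical two-point calibration for a capacitive soil-moisture probe
   read through a 12-bit ADC: the raw count in bone-dry soil and the raw
   count in fully saturated soil. Measure these for your own probe. */
#define ADC_COUNT_DRY   3200u
#define ADC_COUNT_WET   1400u   /* capacitive probes typically read lower when wet */

/* Map a raw ADC count to an approximate volumetric water content (VWC) in
   percent, by linear interpolation between the two calibration points.
   Real probes benefit from multi-point or soil-specific calibration curves. */
static float adc_to_vwc_percent(uint16_t raw)
{
    if (raw >= ADC_COUNT_DRY) return 0.0f;
    if (raw <= ADC_COUNT_WET) return 100.0f;
    return 100.0f * (float)(ADC_COUNT_DRY - raw) /
                    (float)(ADC_COUNT_DRY - ADC_COUNT_WET);
}

int main(void)
{
    uint16_t sample = 2300;  /* stand-in for a value returned by the MCU's ADC driver */
    printf("VWC ~= %.1f %%\n", adc_to_vwc_percent(sample));
    return 0;
}

The same pattern applies to EC, pH, and temperature channels: convert, calibrate, and only then hand the value off to whatever wireless link carries it upstream.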

Data is then routed to cloud platforms or local dashboards, where growers can visualize soil moisture, salinity, or canopy stress in real time. Interfaces range from simple panel displays in the field to mobile apps and web dashboards that log trends and trigger alerts.

Practical considerations remain central: sensors must be calibrated regularly, especially EC and pH probes; outdoor devices need waterproofing and corrosion resistance; and power supplies often rely on batteries supplemented by solar trickle charging. The choice of interface—analog, digital, or wireless—depends on scale and cost, but the goal is the same: to make plant vital signs accessible, reliable, and actionable for growers.

Precision agriculture: The IC ecosystem for botanical monitoring

Modern agricultural sensing integrates a diverse set of specialized ICs to track the vital signs of plants. For soil health, the AD5941 precision analog front end provides advanced impedance spectroscopy capabilities, enabling high-precision moisture and salinity analysis. It also serves as a modern successor platform for electrochemical pH and nutrient testing when paired with suitable sensor electrodes.

Atmospheric monitoring is led by the SHT4x sensors for humidity and the SCD4x sensors for compact photoacoustic CO₂ detection, while BME688 combines gas sensing with integrated AI to detect volatile organic compounds (VOCs) that can signal plant stress.

Light sensing remains critical: the TCS3448 spectral sensor captures multiple wavelength bands, allowing quantification of photosynthetically active radiation (PAR) and enabling growers to fine-tune light recipes for photosynthesis.

Together, these modern ICs transform plant monitoring from guesswork into data-driven precision, optimizing irrigation, nutrient management, and environmental control.

Figure 4 The BME688 module empowers makers and hobbyists to build minimally invasive, AI-driven plant stress monitors through volatile organic compound detection. Source: M5Stack Technology

Closing note

Admittedly, even a jam-packed post cannot do full justice to the fundamentals and applications of botanical sensors. Much remains to be explored before the puzzle is complete—new sensor models, evolving standards, and emerging use cases continue to reshape the field.

Stay tuned for more. Future installments will dive deeper into canopy-level sensing, chlorophyll fluorescence, microclimate monitoring, and innovative energy harvesters that power sensors autonomously. We will also explore how AI-driven analytics can transform raw sensor data into actionable insights for agriculture, forestry, and ecological research.

This overview offers a snapshot of where plant sensing technology stands today, with the promise of richer insights to follow. If you are fascinated by the evolving world of botanical sensors, follow along and join the conversation—together, we will piece the puzzle into a complete picture of plant sensing technology.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Plant pulse sensors: From soil probes to tree tattoos appeared first on EDN.

A defunct Amazon Echo: Where did its acumen go?

Mon, 03/02/2026 - 16:02

A multi-day weather-induced, utility-instigated electricity cutoff thankfully left this engineer’s residence and its contents largely unscathed…with one geriatric smart speaker exception.

Last December’s high-wind-induced extended power outage thankfully didn’t cause notable damage to our home or its contents. But to say we escaped completely unscathed would still be a (slight) overstatement. When we returned home after Xcel Energy restored power, I noticed that several of our Amazon Echo devices—two second-generation Echo Dots:

and a first-generation Echo:

exhibited perpetually rotating topside-light patterns characteristic of an imperfect bootup:

I’ve occasionally encountered this misbehavior before with various Echo-family devices, and as in the past, power-cycling the two Echo Dots got them going again. But despite multiple attempts, both power- and reset switch-based, I couldn’t convince the Echo to resurrect itself:

Oddly enough, albeit presumably indicative of an underlying distributed-processing system architecture, the top-panel microphone mute button still seemed to operate as expected, at least from an LED-illumination response standpoint:

But the Echo never came online nor, more broadly, responded to voice-command attempts. Not to mention the perpetually rotating topside light whose video you’ve already seen. That said, I’d been using it for a long time, and these devices are apparently prone to such misbehavior sooner or later. Then again, its two same-generation siblings in the residence were still working fine, and the first-generation Echo doesn’t support Amazon’s latest Alexa+ enhancements anyway (the second-generation Echo I replaced it with, conversely, is Alexa+-cognizant).

At this point, I’ll reiterate something you’ve read from me in variously worded ways plenty of times before: when a device dies, it frequently then turns into a teardown candidate. I’d already disassembled the first-gen Echo before, for publication more than a decade ago, to be exact:

But as with my Tile Mate teardown a few months back, trying to figure out why a device has died is often reason enough for me to entertain a dissection revisit. Plus, in re-reading my earlier coverage, I was reminded that my teardown presentations have become more verbose (whether that word choice translates into “comprehensive” or “long-winded” is up to you to decide) in the last decade. And although I’m still primarily snapping photos using a smartphone, the integrated camera has gotten a significant upgrade in the intervening years. So…here goes!

Foot first

You’ve already seen the reset switch accessible via a hole in the device’s rubberized “foot”:

Here’s another bottom-side closeup, this time of the various product markings, including the always-informative FCC ID (ZWJ-0823):

As was also the case last time, the “foot” (both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes) peeled right off:

exposing to view the circuit testing (and firmware programming, I’m also guessing) conductive pad array, along with four screw heads:

I suspect you already know what comes next:

Voilà. Our first, but definitely not our last, PCB is now exposed for visual perusal:

Its functions include internal power generation and management, along with analog-to-digital audio conversion and subsequent amplification for the main (combined midrange and tweeter) and subwoofer speakers. Not to mention the aforementioned hardware reset switch. Let’s next disconnect the associated cabling:

thereby enabling completion of the bottom assembly’s separation from the remainder of the device. To wit, keep in mind in viewing and analyzing both this and subsequent images that the Amazon Echo is currently oriented upside-down, i.e., it’s resting on its top edge:

I’ve got the power

A few more initial images of the PCB from various perspectives and proximities follow:

A bottom assembly flip-over first-time reveals the external power input connector:

And before continuing, I’ll supply a few additional power-related images and comments (power…supply…get it? Ahem). First off, here’s the “wall wart”:

also including closeups of its specs and “barrel” connector:

I’ve had a few second-generation “Dot” devices’ external power supplies fail in the past; the end result is either a flat-out refusal to start at all or a perpetual repetition of partial boots followed by abrupt restarts. In those cases, the consistent “fix” was straightforward and non-wasteful. Since the AC/DC converter with USB-A output was distinct from the USB-A to microUSB cable that fed the device, I could just swap in a replacement for the former and be up and running again in no time. Every time I did this, by the way, I wondered how many Echo Dots prematurely ended up in the landfill due to typical-consumer ignorance of both the exhibited issue’s root cause and simple resolution solution.

The Amazon Echo, as with subsequent-generation Echo Dots, is different: the AC/DC converter and power cable are one integrated unit. And I’d also yet to encounter an Echo-family power supply failure that presented itself as a partial boot followed by a “hang”. Nevertheless, my obvious first step was still to tether a multimeter to the power connector and confirm that I was measuring the expected voltage. And then, since this preliminary outcome wasn’t in and of itself definitive (the power supply could still be peak-current-compromised, akin to the earlier partial-boot-and-restart scenario, something that the multimeter’s light loading wouldn’t expose), I borrowed a power supply from one of the other first-generation Echoes in the house and sadly confirmed that I still saw the same “hang” behavior as before.

IC details

Back to the bottom of the device. Next, let’s remove the PCB from the assembly, an easily accomplished task:

Here, for comparison’s sake, is the comparable PCB (and vantage point view) from my initial February 2016 teardown of the Amazon Echo:

Quoting from that earlier writeup:

At top is the DC power input jack. At bottom is the ribbon cable connector. On the right are the two speaker connectors, one white and the other black. Near the center is a Texas Instruments TLV320DAC3203 stereo audio codec (PDF). On the far left is Texas Instruments’ TPS53312 step-down converter. And the PCB is otherwise covered with assorted large inductors, “can” capacitors, and other passives.

The only thing I might add on revisit of my earlier prose is mention of the multi-LED cluster at center of the PCB, which works in conjunction with the clear plastic lightpipe you probably noticed in the earlier device interior shot, taken post-bottom assembly removal:

to route the device’s power and connectivity status to a backside indicator located above the indent for the power cord. In retrospect, I wish I had noted not only the perpetually rotating LED pattern up top but also the information communicated (or not) by this secondary LED set, as it might have assisted in diagnosing whatever had gone awry with the Echo.

Next are some side views:

And now for the PCB’s other side:

once again accompanied by that of its decade-plus ago predecessor:

along with a requote of my prior prose:

Flip the PCB over and the comparatively sparse underside contains several notable elements, beyond even more passives. At top is, again, the dual screw-reinforced backside of the DC power input jack. At bottom is the system reset button. In between them are the previously mentioned test points. And in the middle of the left half of the PCB is the audio amplifier, again from Texas Instruments (the TPA3110D2 Class D Stereo device, to be precise).

You more recently saw mention of the TPA3110’s higher power, PFFB-supportive successor, Texas Instruments’ TPA3255, within a class D-based audio amplifier teardown I did last fall.

Symmetrical sound redirection

Onward. With the lightpipe out of the way, the conical black plastic piece underneath it (as currently oriented; above it in normal operation) slides right out:

Below it is the main full-range speaker, handling all but the lowest audio frequencies, which instead route to the subwoofer we’ll see shortly:

The plastic piece’s contours, with the cone end pointed toward the speaker (which normally points downward), uniformly redirect generated sound out the mesh sides of the device:

Locating the brains

At this point, speaking of “mesh”, let’s press “pause” on the speaker and redirect our attention to sliding that metal mostly-mesh outer chassis off instead:

Take the inner assembly:

Rotate it horizontally by 180°, and another PCB appears:

The connector at bottom in this upside-down (versus norm) orientation mates to a flex cable that also routes to the top assembly:

And the one at the other end mates to the earlier-seen flex cable that also ends up at the bottom assembly:

Let’s cut away the foam surrounding much of the insides, so we can see what’s underneath it:

That’s more like it:

There’s that RFID tag again, which I’d first showcased a decade-plus ago:

Here’s our first glimpse of the subwoofer which, like the full-range speaker you saw earlier, points downward in the device’s normal operating orientation. The full-range speaker’s rear housing, which you’ll see shortly, is rounded, and akin to the cone-shaped piece ahead of the full-range speaker, similarly redirects the subwoofer’s primary output out the sides:

And here once again is what I’m calling the “digital PCB”, now free of any foam obscurant:

See those four screw heads? Buh-bye:

In removing one of them, which promptly and firmly re-attached itself to the side of the internal assembly, I was reminded that there’s a sizeable subwoofer-inclusive magnet inside:

As before, I’ll start with the PCB’s outside:

Next is its decade-plus ago, still attached, 90°-rotated-in-comparison counterpart’s image:

And a reprint of the prior associated prose (perhaps obviously referencing locations in the initial teardown’s photo orientation):

In the middle, toward the top is Texas Instruments’ DM3725CUS100 “digital media processor” SoC. It’s fairly diminutive in processing chops, compared to the application processors in modern smartphones and tablets, containing only a single-core 1 GHz ARM Cortex-A8 CPU. My best guess, therefore, is that it primarily handles the Echo’s speech recognition features, with “heavier lifting” redirected to Amazon’s servers via the device’s Wi-Fi connection. Speaking of which, the shiny-packaged IC below the DM3725CUS100 is a Qualcomm Atheros QCA6234X-AM2D Wi-Fi and Bluetooth module, also found in Amazon’s Fire TV and Fire HD tablet. The corresponding antennas are etched into the PCB, on either side.

Volatile and nonvolatile memory are a necessity, of course, and in this case they respectively take the form of a Samsung K4X2G323PD-8GD8 1 Gbit 200 MHz x32 mobile DDR SRAM (in the upper left corner) and Sandisk SDIN7DP2-4G 4 GByte iNAND embedded flash memory drive (below it). A standalone power management IC is also pretty much a guarantee in a product like this, and the Echo doesn’t disappoint; on the right edge of the PCB is a Texas Instruments TPS65910A1.

Next is another set of PCB side shots; note that as with its bottom-located predecessor, this particular board is impressively “meaty” from a thickness standpoint:

Finally, what’s underneath? A decade back, I wrote, “I didn’t bother showing you the underside of this PCB, by the way, because there’s nothing really to see … unless you’re into a bunch of additional passives, that is.” As you’ll see, “a bunch of” was arguably even overstating reality:

To the summit

One more PCB to go; the seemingly still-functional one up top, starting with the removal of a side screw:

The top assembly’s now gone:

More accurately, I’d just momentarily set it aside:

Four more screws to remove (how many times have I already said that in this piece?):

And now a pictorial sequence of the steps necessary to expose the PCB to view as fully as possible:

Last time, I wrote: “I didn’t bother with a shot of the underside of the PCB; the only contents of note are switches corresponding to the top-side microphone-mute and device-setup buttons.” This time, I’m instead going the extra mile for you, dear readers:

See, like I said before; just switches:

The other side of the PCB, of which you’ve already caught several glimpses, is more interesting:

Here’s the image of that same PCB (and side of it) from last time, once again notably rotated 90 degrees as compared to the new version:

Regarding the gear structure in one corner, I previously said that “Echo contains a rotating upper “cuff” which, among other things, acts as a manually operated alternative to voice command-driven volume up/down operations.” And the gear? It “provides cuff position and speed-of-rotation information.”

And what about the various visible ICs? Again, I requote (again, with location references to the original version of the photo, not the newer one):

Toward the top is a humble Texas Instruments SN74LVC74A dual positive-edge-triggered D-Type flip-flop (ironically the largest IC on this particular PCB). Toward the center are four Texas Instruments TLV320ADC3101 stereo ADCs. They surround one of the seven microphones, at center in gold. And they are surrounded by four Texas Instruments LP55231 9-output LED drivers. The other six microphones are symmetrically located along the rim of the PCB; one of them isn’t visible in the photo, obscured by the ribbon cable. And on either side of each of those edge-located microphones is an LED, twelve total in the design.

Transducers redux

Now let’s return to those two speakers—main and subwoofer—that you initially saw earlier. A decade-plus ago and regarding the foam seen surrounding the internal assembly after removing the mostly-mesh metal outer chassis, I wrote:

Underneath the thin black fabric layer surrounding the chassis is the woofer, along with its corresponding bass reflex port. To see them in detail, check out iFixit’s website; the electronics aspects of the design were my primary focus. The fabric’s purpose may be at least in part to diffuse the speakers’ outputs, thereby delivering the 360º sound that Amazon promotes. It may also dampen vibration at high volume.

This time, curiosity got the better of me, and I decided to peruse them for myself, sharing the images with you in the process. The first step, however, was to finish removing the main speaker. Hey, look. Four more screws!

The rounded rear of the main speaker’s acoustic suspension enclosure, as mentioned earlier, uniformly redirects the subwoofer’s primary sound output around the mesh sides of the device.

Speaking of the subwoofer…

And, last but not least, the curiously shaped (as you’ll soon see) bass reflex structure, intended to boost the subwoofer’s low frequency efficiency:

You’ve actually seen its associated port already, in one of the post-foam-removal internal assembly side shots. Here’s a closeup:

Remove the two screws whose heads are visible in the prior photo:

And the bass reflex structure then slips right out the Echo’s now-speakers-less bottom end (in normal operation; top as currently upside-down oriented):

Wutdunit?

(why yes, I did just create my own word)…

Unfortunately, unless you saw an old-vs-new teardown disparity I’d overlooked in any of the PCB photos I’ve shared here (a bulging capacitor, perhaps?), we’re left with the same question I posed at the start of this writeup: what caused this Amazon Echo to fail? The power subsystem in the bottom assembly seems to still be intact, given that the top assembly’s various LEDs and microphone mute switch continue to function (likely removing the top assembly from the root-cause list as well). I doubt that a failure in either/both the bottom assembly’s audio subsystem digital-to-analog and amplification stages would suffice to bring the device completely to its knees, either.

That leaves what I previously referred to as the “digital PCB”. A corrupted firmware image, perhaps the result of power loss mid-update, is one possibility. While the Echo was still present in my list of activated (albeit in this case, not found) devices, before I removed it in an ultimately fruitless attempt to fully factory-reset and then revive it, its settings in the Alexa app indicated that a firmware update was pending. I initially discounted this firmware-corruption possibility because:

  • Amazon certainly wouldn’t design a device that “bricked” so easily, absent any sort of user-friendly recovery scheme…would it?(??!!!!)
  • And at the time, my other two, still-functional first-generation Echoes’ settings displayed the exact same “firmware update available” messages. I therefore assumed they were bogus remnants of the first-generation Echo’s dearth of Alexa+ support.

But in re-looking at their settings just now, those messages are now gone. There’s no user-controlled way to manually initiate a firmware update; Amazon automatically “pushes” them (presumably at a time when it senses that a given device isn’t in use, for example). So…maybe?

The other, more benign possibility is simply that some circuit on that particular (or another) board has “gone south”, taking the entire device down with it. Nearing 3,000 words in, I’m going to wrap up at this point and turn the keyboard over to you for your theories and broader thoughts on this teardown in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A defunct Amazon Echo: Where did its acumen go? appeared first on EDN.

USB-C and Power Delivery: Too much of a good thing?

Fri, 02/27/2026 - 15:00

I’ve recently been doing some detailed research and studying related to the USB Type-C connector and the associated USB Power Delivery (PD) specification. At first, both seemed like such a good idea, but now I am not so sure – especially about the USB PD part.

First, a little background. Like many people, I have a drawer full of AC/DC charger units I no longer use but can’t bear to toss, Figure 1. These units are often derisively called wall warts; many also function as power sources in addition to chargers, to be used with or without batteries in their target unit.

Figure 1 If you have used electronic devices, toys, or smartphones over the past decades, you likely have a drawer or box stuffed with chargers that are no longer needed, but you can’t bear to toss out. Source: Google

These chargers come in a wide range of voltage and current ratings, each specific to the product with which they came. They also have a wide range of frustratingly incompatible coaxial (barrel) connectors (“coaxial” in their physical structure, and unrelated to RF coaxial-cable connectors), and both polarity orientations, Figure 2.

Figure 2 Barrel connectors come in a wide range of inner and outer diameter pairings, presumably to key the connectors to their voltage and current, but actually a source of confusion and waste. Sources: Bid or Buy/South Africa; Same Sky

As a consequence, it is almost impossible to use one AC/DC unit as a replacement for a misplaced or defunct one. While I have resorted to repurposing one with the needed rating but wrong connector by swapping and soldering the correct connector from another unit, the average person can’t do this.

Now, USB-C and USB-PD

Then came smartphone charging and a drive towards more uniformity in USB-based charging, using either the Apple Lightning connector, a USB Type A connector, or others. “Hey,” I thought, “we’re making progress.”

Now, we have the USB Type-C connector, which is mandated by the European Union for all suitable products, including smartphones and, by extension, driving its adoption outside the EU, Figure 3. So it looks like barrel connectors are history, and other USB connectors are falling behind, as USB-C is the way to go. So far, so good.

Figure 3 The USB Type-C connector is poised to dominate due to its capabilities and the EU mandate to be used wherever technically feasible. Source: CNET

Then I started looking into the USB Power Delivery (PD) standard in more detail. It dramatically increases the available voltage, current, and power levels, Figure 4.

Figure 4 The progression of power-delivery capabilities offered by the various USB connectors is impressive. Source: Texas Instruments

USB-PD offers three power-delivery modes:

  • Sink: a port, most often a device, that consumes power from VBUS when attached.
  • Source: a port that provides power over VBUS when attached.
  • Dual-role power (DRP): a port that can operate as either a sink or a source, and may even alternate between these two states.

It gets messy

This makes it all sound so simple and effective, but USB PD is not like peeling an onion, where every layer you peel back reveals only one other one. Instead, it’s more like nuclear fission, where each action or state change can lead to multiple new ones.

I won’t try to describe all the ins and outs of USB PD. There are many good overviews as well as detailed dives into the standard (see References). To sum it all up: it’s very complicated, starting with a back-and-forth initialization-negotiation dialogue between the two sides of the connection to decide who can do what to whom, Figure 5. An added complication is that USB PD allows for multiple loads to be charged at the same time, each with different requirements.

Figure 5 Once the USB-C connector is connected, the two cable ends begin a sophisticated negotiation about what needs to be done and what can be done. Source: Acroname Inc.
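To give a flavor of one small slice of that negotiation, the sketch below shows how a sink-side policy might choose among the fixed-supply profiles a source advertises. It is a deliberately simplified illustration, not a spec-conformant implementation: the structure fields, the example capability list, and the pick-the-highest-wattage policy are all assumptions made for clarity, and real Power Data Objects pack considerably more information into 32-bit words.

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a fixed-supply source capability ("PDO"). A real
   USB PD implementation carries many more flags; only voltage and current
   matter for this illustration. */
typedef struct {
    uint32_t voltage_mV;
    uint32_t max_current_mA;
} SourceCap;

/* Illustrative sink policy: pick the advertised profile that delivers the
   most power without exceeding what the sink can accept. */
static int pick_profile(const SourceCap *caps, int n_caps,
                        uint32_t max_voltage_mV, uint32_t min_current_mA)
{
    int best = -1;
    uint64_t best_mW = 0;
    for (int i = 0; i < n_caps; i++) {
        if (caps[i].voltage_mV > max_voltage_mV) continue;      /* too high for this sink */
        if (caps[i].max_current_mA < min_current_mA) continue;  /* not enough current */
        uint64_t mW = (uint64_t)caps[i].voltage_mV * caps[i].max_current_mA / 1000u;
        if (mW > best_mW) { best_mW = mW; best = i; }
    }
    return best;  /* index of the chosen profile, or -1 if nothing fits */
}

int main(void)
{
    /* A plausible (hypothetical) source advertisement: 5 V/3 A, 9 V/3 A, 20 V/5 A */
    const SourceCap caps[] = { { 5000, 3000 }, { 9000, 3000 }, { 20000, 5000 } };
    int choice = pick_profile(caps, 3, 20000, 1500);  /* sink accepts up to 20 V, needs >= 1.5 A */
    if (choice >= 0)
        printf("Requesting %u mV at up to %u mA\n",
               caps[choice].voltage_mV, caps[choice].max_current_mA);
    return 0;
}

Even this toy policy hints at the state explosion the full protocol must manage once cable identification, role swaps, and programmable power supplies enter the picture.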

USB PD has many cases, exceptions, state diagrams, timing diagrams, conditional rules…it’s a long list. With all this comes the need for a very smart embedded controller to implement it.

At first, I thought the entire USB-C/PD scenario was the best thing to happen. After all, what could be better than a “universal” charging setup? It promises to handle anything up to the specified maximum, with no action on the part of the user, and no incompatibilities. What’s not to like?

However, the more I looked into USB PD, the more concerned I became. In the attempt to be a solution to just about any charging situation (and let’s ignore the data-connection interface aspect), it tries to do an awful lot. Yet history shows that such overarching objectives, however laudable and well-intentioned, can become a swamp.

That’s where I started to worry. Who can actually grasp the totality and subtleties of USB PD, especially if there’s a problem? Can the controller really be tested to 100% certainty that it properly implements all the rules and cases correctly? Are there corner cases in the real world that will only show up months or years later, with frustrated users as the test subjects?

This isn’t the only example

Whatever happened to the engineering mandate to “keep it simple”? I’ll cite an automotive parallel. Volkswagen recently introduced the 2026 Tiguan SEL R-Line Turbo, which uses a list of engineering approaches to squeeze 268 horsepower and 258 lb-ft of torque out of a modest two-liter, four-cylinder engine.

To do this, they use forced induction turbocharging, where a turbine spins in the engine exhaust, with temperatures around 1,000 degrees, and its momentum is transferred to a paired compressor wheel spinning at speeds above 150,000 rpm to pressurize the air-intake charge. It also employs variable inlet geometry that instantly and precisely meters boost, air charge, and bypass, reducing throttle latency and increasing efficiency. The super-high compression ratio of 10.5:1 relies on higher pressure in the direct fuel-injection system (from 350 to 500 bar) as well as a forged steel fuel rail to carry it.

But why stop there? In a classic example of inevitable follow-on consequences, the higher forces require thicker piston crowns, shortened connecting rods and thicker wrist pins. The need for cooling meant redesigning the combustion chamber itself, and incorporating a new air-to-water heat exchanger. The big turbo-edition comes with oil-cooled pistons and a nitrided crankshaft. Finally, the hydraulic intake cam adjuster replaces two pairs of cam pieces with double actuators and instead substitutes four separate cam pieces with eight adjusters.

So I have to wonder: how will all this engineered complexity and sophistication hold up, in terms of reliability and maintenance, in a mass-produced car?

In some ways, USB PD is the latest iteration of the belief that a universal solution is possible and that “this time, we’ll get it all right.” Sometimes, though, a single, more tightly focused objective is the better long-term choice, as it brings fewer unexpected and unpleasant surprises.

Will I miss the cheap AC/DC charger that does one thing, with its proliferation of power ratings and barrel connectors? No, I won’t. Do I welcome the USB-C and PD standard and implementation? Let’s just say I am cautiously optimistic, as I recognize that it’s a complicated system and not merely an A-to-B power source. My personal jury is out on this question!

What are your thoughts on the complexity and ambitious reach of this power-delivery standard?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

References

EU and USB Type-C regulation

Scope boosts high-speed interface validation

Thu, 02/26/2026 - 19:54

Keysight’s XR8 real-time oscilloscope accelerates high-speed interface debug and compliance validation with powerful parallel, multicore analysis. A newly designed frontend ASIC combined with an integrated 12-bit ADC and DSP engine preserves signal integrity, enhances timing accuracy, and delivers consistent, repeatable measurements across high-speed serial, memory, and mixed-signal designs.

Powered by Infiniium 2026 software, the XR8 streamlines workflows with flexible waveform windows and productivity tools including drag-and-drop functionality and an integrated SCPI recorder. Intrinsic jitter as low as 13 fs rms and noise below 130 µV at 8-GHz bandwidth maintain compliance margin for high-speed interfaces including USB4v2, DisplayPort 2.1, and DDR5. The integrated ADC/DSP engine increases acquisition, analysis, and reporting throughput by up to 3×, helping engineers complete high-speed interface validation faster and more efficiently.

The XR8’s redesigned mechanical architecture reduces power consumption, improves thermal efficiency, and minimizes acoustic noise in a compact footprint. This smaller, quieter platform can be deployed in space-constrained labs or positioned closer to the device under test for stable, low-noise operation.

For more information about the XR8 4-channel, 8-GHz to 33-GHz bandwidth oscilloscope, click the product page link below.

XR8 product page

Keysight Technologies 

GaN half-bridge simplifies 650-V power stages

Thu, 02/26/2026 - 19:54

MasterGaN6 from ST integrates two 650-V enhancement-mode GaN transistors with typical RDS(on) of 140 mΩ in a half-bridge configuration, delivering a compact, efficient power stage. This power system-in-package also integrates a high-voltage gate driver and linear regulators for both high-side and low-side supplies to further reduce external components.

As the second generation of the MasterGaN half-bridge family, MasterGaN6 adds dedicated fault and standby pins to enable enhanced system monitoring and power management. Integrated LDOs and a bootstrap diode ensure reliable, optimized gate driving for improved efficiency and performance in high-density power applications.

MasterGaN6 handles output currents up to 10 A, with an overall driver propagation delay of 45 ns and a minimum pulse width of 35 ns. Its 3.3-V to 15-V logic-compatible inputs feature hysteresis and an integrated pull-down for robust noise immunity. A comprehensive protection set includes cross-conduction prevention, thermal shutdown, and undervoltage lockout to ensure safe and reliable operation.
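
To put the 35-ns minimum pulse width in perspective, here is a quick back-of-the-envelope C sketch of the duty-cycle range it leaves available at a given switching frequency. The 500-kHz example frequency is an assumption for illustration, not a MasterGaN6 rating.

/* Back-of-the-envelope duty-cycle limits imposed by a driver's minimum
 * pulse width. Illustrative only; the 500-kHz switching frequency is an
 * assumed example, not a device rating. */
#include <stdio.h>

int main(void)
{
    const double t_min = 35e-9;   /* minimum pulse width, s (from the brief) */
    const double f_sw  = 500e3;   /* assumed switching frequency, Hz */
    const double t_per = 1.0 / f_sw;

    double d_min = t_min / t_per; /* shortest allowed on-time as a fraction */
    double d_max = 1.0 - d_min;   /* shortest allowed off-time likewise */
    printf("At %.0f kHz: duty cycle spans %.2f%% to %.2f%%\n",
           f_sw / 1e3, 100.0 * d_min, 100.0 * d_max);
    return 0;
}

At the assumed 500 kHz, the 2-µs period leaves roughly a 1.75% to 98.25% usable duty-cycle range, which is ample margin for most half-bridge converter designs.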

Prices for the MasterGaN6 half-bridge in a 9×9-mm QFN package start at $4.14 in lots of 1000 units.

MasterGaN6 product page 

STMicroelectronics
