EDN Network
Mitsubishi samples high-voltage IGBT modules

Mitsubishi announced that it has begun shipping samples of two new S1-Series high-voltage IGBT modules rated at 1.7 kV. These components target large industrial equipment such as railcars and DC power transmission systems. With proprietary IGBT devices and advanced insulation structures, the S1-Series modules enhance reliability, minimize power loss, and reduce thermal resistance, supporting more reliable and efficient operation of inverters in large industrial equipment.
The S1-Series incorporates Mitsubishi’s Relaxed Field of Cathode (RFC) diode, increasing the Reverse Recovery Safe Operating Area (RRSOA) by 2.2 times over previous models, improving inverter reliability. Additionally, an IGBT element with a Carrier Stored Trench Gate Bipolar Transistor (CSTBT) structure reduces power loss and thermal resistance, enabling more efficient inverter operation. The upgraded insulation structure boosts insulation voltage resistance to 6.0 kVRMS—1.5 times higher than earlier products—allowing more flexible insulation designs for compatibility with a broader range of inverter types.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Mitsubishi samples high-voltage IGBT modules appeared first on EDN.
ADI’s efforts for a wirelessly upgraded software-defined vehicle
In-vehicle systems have grown massively in complexity, with more installed speakers, microphones, cameras, and displays, and a greater compute burden to process the necessary information and provide the proper, often time-sensitive, output. The unfortunate side effect of this complexity is a massive increase in ECUs and the subsequent cabling to and from each allocated subsystem (e.g., engine, powertrain, braking). The impracticality of this approach has become apparent, and more OEMs are shifting away from these domain-based architectures and, with them, traditional automotive buses such as the local interconnect network (LIN) and controller area network (CAN) for ECU communications, FlexRay for x-by-wire systems, and media oriented systems transport (MOST) for audio and video systems. Software-defined vehicles (SDVs) rethink the underlying vehicle architecture so that cars are broken into zones, each directly servicing the vehicle subsystems that surround it locally, cutting down wiring, latency, and weight. Another major benefit of this is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with complexities of its own.
ADI’s approach to zonal architectures

This year at CES, EDN spoke with Yasmine King, VP of automotive cabin experience at Analog Devices (ADI). The company is closely involved with the underlying connectivity solutions that allow vehicle manufacturers to shift from domain architectures to zonal ones: the Ethernet to the edge bus (E2B), the automotive audio bus (A2B), and gigabit multimedia serial link (GMSL) technology. “Our focus this year is to show how we are adding intelligence at the edge and bringing the capabilities from bridging the analog of the real world into the digital world. That’s the vision of where automotive wants to get to; they want to be able to create experiences for their customers, whether it’s the driving experience or the back seat passenger experience. How do you help create these immersive and safe experiences that are personalized to each occupant in the vehicle? In order to do that, there has to be a fundamental change of what the architecture of the car looks like,” said King. “So in order to do this in a way that is sustainable, for mobility to remain green, to retain long battery range and good fuel efficiency, you have to find a way of transporting that data efficiently, and the E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting.”
E2B: Remote control protocol solution for 10BASE-T1S

Based on the OPEN Alliance 10BASE-T1S physical layer (PHY), the E2B bus aims at removing the need for edge MCUs by centralizing the software in the high-performance compute (HPC), or central compute (Figure 1). “The E2B bus is the only remote control protocol solution available on the market today for the 10BASE-T1S, so it’s a very strong position for us. We just released our first product in June of this past year, and we see this as a very fundamental way to help the industry transform to zonal architecture. We’re working with the OPEN Alliance to be part of that remote control definition.” These transceivers will integrate low complexity ethernet (LCE) hardware for remote operation and, naturally, can be used on the same bus as any other 10BASE-T1S-compliant product.
BMW has already adopted the E2B bus for its ambient lighting system, and King mentioned that there has been further adoption by other OEMs that is not yet public. “The E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting. Honestly, there’s about 50 or 60 different applications inside the vehicle.” She noted that E2B is often used for ambient lighting today, but there are many other potential applications, such as driver monitoring systems (DMSs) that might detect a sleeping driver via in-vehicle biometric capabilities and then respond with a series of measures to wake them up; E2B allows OEMs to apply these measures with an OTA update. “Without E2B, you’d have to not only update the DMS, but you’d have to update the multiple nodes that are controlling the ambient light. The owner might have to take it back into the shop to apply the updates; it just takes longer and is more of a hassle. With E2B, it’s a single OTA update that is an easy, quick download to add safety features, so it’s more realistic to get that safer, more immersive driver experience.” The goal for ADI is to move all the software from the edge nodes to the central location for updates.
Figure 1: EDN editor Aalyia Shaukat (left) and VP of automotive cabin experience Yasmine King (right) in front of a suspension-control demo in which four edge nodes sense the location of the weighted ball and send the information back to the HPC, which in turn sends commands back to control the motors.
A2B: Audio system based on 100BASE-T1

Based upon the 100BASE-T1 standard, the A2B audio bus follows a similar concept of connecting edge nodes, with a specialization in sound, limiting the installation of the weighty, shielded analog cables that run to and from the many speakers and microphones in vehicles today for modern functions such as active noise cancellation (ANC) and road noise cancellation (RNC). “We have RNC algorithms that are connected through A2B, and it’s a very low latency, highly deterministic bus. It allows you to get the inputs from, say, the wheel base, where you’re listening for the noise, to the brain of the central compute very quickly.” King mentioned that audio systems require extremely low latencies for an enhanced user experience: “your ears are very susceptible to any small latency or distortion.” The technology has more maturity than the newer E2B bus and has therefore seen more adoption: “A2B is a technology that is utilized across most OEMs; the top 25 OEMs are all using it, and we’ve shipped millions of ICs.” ADI is working on a second iteration of the A2B bus that multiplies the data rate of the previous generation, likely owing to the maturation of the 1000BASE-T1 automotive standard, which is meant to reach 1 Gbps. When asked about the data rate, King responded, “I’m not sure exactly what we are publicly stating yet, but it will be a multiplier.”
GMSL: Single-wire SerDes display solution

GMSL is the in-vehicle serializer/deserializer (SerDes) video solution that shaves off the significant wiring typically required for camera and related sensor infrastructure (Figure 2). “As you’re moving towards autonomous driving and you want to replace a human with intelligence inside the vehicle, you need additional sensing capabilities along with radar, LiDAR, and cameras to be that perception sensing network. It’s all very high bandwidth, and it needs a solution that can be transmitted over a low-cost, lightweight cable.” Following a similar theme as the E2B and A2B buses, using a single cable to manage a cluster display or an in-vehicle infotainment (IVI) human-machine interface (HMI) minimizes the potential weight issues that could hurt range/fuel efficiency. King finished by mentioning one overlooked benefit of lowering the weight of vehicle harnessing: “The other piece that often gets missed is it’s very heavy during manufacturing. When you move over 100 pounds within the manufacturing facilities, you need different safety protocols. This adds expense and safety concerns for the individuals who have to pick up the harness, where now you have to get a machine over to pick up the harness because it’s too heavy.”
Figure 2: GMSL demo aggregating feeds from six cameras into a deserializer board feeding a single MIPI port on the Jetson HPC platform.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
The post ADI’s efforts for a wirelessly upgraded software-defined vehicle appeared first on EDN.
PWMpot approximates a Dpot

Digital potentiometers (“Dpots”) are a diverse and useful category of digital/analog components with up to 10-bit resolution, element resistance from 1 kΩ to 1 MΩ, and voltage capability up to and beyond ±15 V. However, most are limited to 8 bits, unipolar (typically 0 V to +5 V) signal levels, and 5 kΩ to 100 kΩ resistances with loose tolerances of ±20 to 30%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This design idea describes a simple and inexpensive Dpot-like alternative. It has limitations of its own (mainly being restricted to relatively low signal frequencies) but offers useful and occasionally superior performance in areas where actual Dpots tend to fall short. These include parameters like bipolar signal range, terrific differential nonlinearity, tight resistance accuracy, and programmable resolution. See Figure 1.
Figure 1 PWM drives opposing-phase CMOS switches and RC network to simulate a Dpot
RC ripple filtering limits frequency response to typically tens to hundreds of Hz.
Switch U1b connects wiper node W to node B when PWM = 1, and to A when PWM = 0. Letting the PWM duty factor, P = 0 to 1, and assuming no excessive loading of W:
Vw = P(Vb – Va) + Va
Meanwhile, switch U1a connects W to node A when PWM = 1, and to B when PWM = 0, thus 180° out of phase with U1b. Due to AC coupling, this has no effect on the pot’s DC output, but the phase inversion relative to U1b delivers active ripple attenuation as described in “Cancel PWM DAC ripple with analog subtraction.”
The minimum RC time-constant required to attenuate ripple to no more than 1 least significant bit (lsb) for any given N = number of PWM bits of resolution and Tpwm = PWM period is given by:
RC = Tpwm × 2^(N/2 – 2)
For example:
for N = 8 and Fpwm = 10 kHz:
RC = (10 kHz)^-1 × 2^(8/2 – 2) = 100 µs × 2^2 = 400 µs
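The two relationships above (the wiper voltage and the minimum ripple-filtering RC) are easy to sanity-check numerically. A minimal sketch, with function names of my own choosing rather than anything from the article:

```python
# Sketch of the PWMpot design math.
# wiper_voltage: Vw = P*(Vb - Va) + Va, assuming negligible loading of node W.
# min_rc: RC = Tpwm * 2^(N/2 - 2), the smallest time constant that keeps
# PWM ripple at or below 1 LSB for N bits of resolution.

def wiper_voltage(p, va, vb):
    """Simulated wiper voltage for duty factor p (0 to 1)."""
    return p * (vb - va) + va

def min_rc(n_bits, f_pwm):
    """Minimum RC time constant (seconds) for n_bits at PWM frequency f_pwm."""
    t_pwm = 1.0 / f_pwm
    return t_pwm * 2 ** (n_bits / 2 - 2)

# Reproduce the article's example: N = 8 bits at Fpwm = 10 kHz gives 400 µs
rc = min_rc(8, 10e3)
print(f"RC = {rc * 1e6:.0f} µs")  # RC = 400 µs

# A mid-scale setting (P = 0.5) between bipolar ±2.5 V rails lands at 0 V
print(wiper_voltage(0.5, -2.5, 2.5))  # 0.0
```

Note how doubling the resolution to N = 10 bits doubles the required RC (and halves the usable signal bandwidth), which is the resolution-versus-speed trade-off the design idea mentions.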
The maximum acceptable value for R is dictated by the required Vw voltage accuracy under load. Minimum R is determined by:
- Required resistance accuracy after factoring in the variability of U1b’s switch on-resistance r, which is 40 ±40 Ω for the HC4053 powered as in Figure 1.
- Required integral nonlinearity (INL) as affected by switch-to-switch Ron variation, which is just 5 Ω for the HC4053 as powered here.
R = 1 kΩ to 10 kΩ would be a workable range of choices for N = 8-bit resolution. N is programmable.
The net result is the equivalent circuit shown in Figure 2. Note that, unlike a mechanical pot or Dpot, where output resistance varies dramatically with wiper setting, the PWMpot’s output resistance (R +r) is nominally constant and independent of setting.
Figure 2 The PWMpot’s equivalent circuit where r = switch Ron, P = PWM duty factor, and where the ripple filter capacitors are not shown.
Funny footnote: While pondering a name for this idea, I initially thought “PWMpot” was too long and considered making it shorter and catchy-er by dropping the “WM.” But then, after reading the resulting acronym out loud, I decided it was maybe a little too catchy.
And put the “WM” back!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM power DAC incorporates an LM317
- Cancel PWM DAC ripple with analog subtraction but no inverter
- Cancel PWM DAC ripple with analog subtraction—revisited
The post PWMpot approximates a Dpot appeared first on EDN.
AI at the edge: It’s just getting started

Artificial intelligence (AI) is expanding rapidly to the edge. This generalization conceals many more specific advances—many kinds of applications, with different processing and memory requirements, moving to different kinds of platforms. One of the most exciting instances, happening soonest and with the most impact on users, is the appearance of TinyML inference models embedded at the extreme edge—in smart sensors and small consumer devices.
Figure 1 The TinyML inference models are being embedded at the extreme edge in smart sensors and small consumer devices. Source: PIMIC
This innovation is enabling valuable functions such as keyword spotting (detecting spoken keywords) or performing environmental-noise cancellation (ENC) with a single microphone. Users treasure the lower latency, reduced energy consumption, and improved privacy.
Local execution of TinyML models depends on the convergence of two advances. The first is the TinyML model itself. While most of the world’s attention is focused on enormous—and still growing—large language models (LLMs), some researchers are developing really small neural-network models built around hundreds of thousands of parameters instead of millions or billions. These TinyML models are proving very capable on inference tasks with predefined inputs and a modest number of inference outputs.
The second advance is in highly efficient embedded architectures for executing these tiny models. Instead of a server board or a PC, think of a die small enough to go inside an earbud and efficient enough to not harm battery life.
Several approaches
There are many important tasks involved in neural-network inference, but the computing workload is dominated by matrix multiplication operations. The key to implementing inference at the extreme edge is to perform these multiplications with as little time, power, and silicon area as possible. The key to launching a whole successful product line at the edge is to choose an approach that scales smoothly, in small increments, across the whole range of applications you wish to cover.
It is the nature of the technology that models get larger over time.
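As a rough sense of scale, the parameter and multiply-accumulate (MAC) counts of a TinyML-sized network can be tallied in a few lines. The layer sizes below are illustrative assumptions of mine, not figures from the article; the point is simply that the matrix multiplications dominate the per-inference workload:

```python
# Toy accounting for a chain of fully connected layers (illustrative sizes).

def dense_macs(layer_sizes):
    """Multiply-accumulate operations for one inference pass."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def parameters(layer_sizes):
    """Weights plus biases for the same chain of layers."""
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A small keyword-spotting-style model: flattened audio features in,
# a dozen keyword classes out.
sizes = [490, 256, 128, 12]
print(parameters(sizes))  # ~160k parameters: "hundreds of thousands"
print(dense_macs(sizes))  # nearly all of those parameters incur a MAC
```

Almost every parameter participates in exactly one MAC per inference, which is why the architecture question reduces to how cheaply those MACs (and the weight fetches behind them) can be performed.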
System designers are taking different approaches to this problem. For the tiniest of TinyML models in applications that are not particularly sensitive to latency, a simple microcontroller core will do the job. But even for small models, MCUs with their constant fetching, loading, and storing are not an energy-efficient approach. And scaling to larger models may be difficult or impossible.
For these reasons many choose DSP cores to do the processing. DSPs typically have powerful vector-processing subsystems that can perform hundreds of low-precision multiply-accumulate operations per cycle. They employ automated load/store and direct memory access (DMA) operations cleverly to keep the vector processors fed. And often DSP cores come in scalable families, so designers can add throughput by adding vector processor units within the same architecture.
But this scaling is coarse-grained, and at some point, it becomes necessary to add a whole DSP core or more to the design, and to reorganize the system as a multicore approach. And, not unlike the MCU, the DSP consumes a great deal of energy in shuffling data between instruction memory and instruction cache and instruction unit, and between data memory and data cache and vector registers.
For even larger models and more latency-sensitive applications, designers can turn to dedicated AI accelerators. These devices, generally either based on GPU-like SIMD processor arrays or on dataflow engines, provide massive parallelism for the matrix operations. They are gaining traction in data centers, but their large size, their focus on performance over power, and their difficulty in scaling down significantly make them less relevant for the TinyML world at the extreme edge.
Another alternative
There is another architecture that has been used with great success to accelerate matrix operations: processing-in-memory (PiM). In this approach, processing elements, rather than being clustered in a vector processor or pipelined in a dataflow engine, are strategically dispersed at intervals throughout the data memory. This has important benefits.
First, since processing units are located throughout the memory, processing is inherently highly parallel. And the degree of parallel execution scales smoothly: the larger the data memory, the more processing elements it contains. The architecture need not change at all.
In AI processing, 90–95% of the time and energy is consumed by matrix multiplication, as each layer’s inputs are multiplied by that layer’s weights to produce the inputs to the next layer. PiM addresses this inefficiency by eliminating the constant data movement between memory and processors.
By storing AI model weights directly within memory elements and performing matrix multiplication inside the memory itself as input data arrives, PiM significantly reduces data transfer overhead. This approach not only enhances energy efficiency but also improves processing speed, delivering lower latency for AI computations.
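The data-movement argument can be made concrete with a toy traffic model. This is my own back-of-envelope sketch, not PIMIC’s design: it compares a conventional processor, which must stream every weight from memory to the compute unit on each inference, against a PiM array whose weights stay resident so only activations move:

```python
# Toy per-inference memory-traffic model for one fully connected layer
# (byte counts are illustrative; assumes 1-byte quantized weights/activations).

def conventional_traffic(n_in, n_out, bytes_per_weight=1):
    """Weights streamed to the compute unit, plus activations in and out."""
    return n_in * n_out * bytes_per_weight + (n_in + n_out)

def pim_traffic(n_in, n_out):
    """Weights never leave the PiM array; only activations cross the boundary."""
    return n_in + n_out

n_in, n_out = 256, 128
print(conventional_traffic(n_in, n_out))  # 33152 bytes moved
print(pim_traffic(n_in, n_out))           # 384 bytes moved
```

The roughly two-orders-of-magnitude gap in bytes moved is the headroom from which PiM’s energy and latency gains are drawn; real chips recover only part of it, but the direction of the win is clear.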
To fully leverage the benefits of PiM, a carefully designed neural network processor is crucial. This processor must be optimized to seamlessly interface with PiM memory, unlocking its full performance potential and maximizing the advantages of this innovative technology.
Design case study
The theoretical advantages of PiM are well established for TinyML systems at the network edge. Take the case of Listen VL130, a voice-activated wake-word inference chip, which is also PIMIC’s first product. Fabricated on TSMC’s standard 22-nm CMOS process, the chip’s always-on voice-detection circuitry consumes 20 µA.
This circuit triggers a PiM-based wake word-inference engine that consumes only 30 µA when active. In operation, that comes out to a 17-times reduction in power compared to an equivalent DSP implementation. And the chip is tiny, easily fitting inside a microphone package.
Figure 2 Listen VL130, connected to external MCU in the above diagram, is an ultra-low-power keyword-spotting AI chip designed for edge devices. Source: PIMIC
PIMIC’s second chip, Clarity NC100, takes on a more ambitious TinyML model: single-microphone ENC. Consuming less than 200 µA, which is up to 30 times more efficient than a DSP approach, it’s also small enough for in-microphone mounting. It is scheduled for engineering samples in January 2025.
Both chips depend for their efficiency upon a TinyML model fitting entirely within an SRAM-based PiM array. But this is not the only way to exploit PiM architectures for AI, nor is it anywhere near the limits of the technology.
LLMs at the far edge?
One of today’s undeclared grand challenges is to bring generative AI—small language models (SLMs) and even some LLMs—to edge computing. And that’s not just to a powerful PC with AI extensions, but to actual edge devices. The benefit to applications would be substantial: generative AI apps would have greater mobility while being impervious to loss of connectivity. They could have lower, more predictable latency; and they would have complete privacy. But compared to TinyML, this is a different order of challenge.
To produce meaningful intelligence, LLMs require training on billions of parameters. At the same time, the demand for AI inference compute is set to surge, driven by the substantial computational needs of agentic AI and advanced text-to-video generation models like Sora and Veo 2. So, achieving significant advancements in performance, power efficiency, and silicon area (PPA) will necessitate breakthroughs in overcoming the memory wall—the primary obstacle to delivering low-latency, high-throughput solutions.
Figure 3 Here is a view of the layout of Listen VL130 chip, which is capable of processing 32 wake words and keywords while operating in the tens of microwatts, delivering energy efficiency without compromising performance. Source: PIMIC
At this technology crossroads, PiM technology is still important, but to a lesser degree. With these vastly larger matrices, the PiM array acts more like a cache, accelerating matrix multiplication piecewise. But much of the heavy lifting is done outside the PiM array, in a massively parallel dataflow architecture. And there is a further issue that must be resolved.
At the edge, in addition to facilitating model execution, it is of primary importance to resolve the bandwidth and energy issues that come with scaling to massive memory sizes. Meeting all these challenges can improve an edge chip’s power-performance-area efficiency by more than 15 times.
PIMIC’s studies indicate that models with hundreds of millions to tens of billions of parameters can in fact be executed on edge devices. It will require 5-nm or 3-nm process technology, PiM structures, and most of all a deep understanding of how data moves in generative-AI models and how it interacts with memory.
PiM is indeed a silver bullet for TinyML at the extreme edge. But it’s just one tool, along with dataflow expertise and deep understanding of model dynamics, in reaching the point where we can in fact execute SLMs and some LLMs effectively at the far edge.
Subi Krishnamuthy is the founder and CEO of PIMIC, an AI semiconductor company developing processing-in-memory (PiM) technology for ultra-low-power AI solutions.
Related Content
- Getting a Grasp on AI at the Edge
- Tiny machine learning brings AI to IoT devices
- Why MCU suppliers are teaming up with TinyML platforms
- Open-Source Development Comes to Edge AI/ML Applications
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post AI at the edge: It’s just getting started appeared first on EDN.
Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly

Back in mid-2019, I noted that the ability to discern high-quality music and other audio playback (both in an absolute sense and when differentiating between various delivery-format alternatives) depends not only on the characteristics of the audio itself but also on the equipment used to audition it. One key link in the playback chain is the speakers, whether integrated (along with crossover networks and such) into standalone cabinets or embedded in headphones. The latter are particularly attractive because (among other reasons) they eliminate any “coloration” or other alteration caused by the listening room’s own acoustical characteristics, not to mention ambient background noise and the imperfect suppression of its effects.
However, as I wrote at the time, “The quality potential inherent in any audio source won’t be discernable if you listen to it over cheap (i.e., limited and uneven frequency response, high noise and distortion levels, etc.) headphones.” To wit, I showcased three case study examples from my multi-headphone stable: the $29.99 (at the time) Massdrop x Koss Porta Pro X:
$149.99 Massdrop x Sennheiser HD 58X Jubilee:
and $199.99 Massdrop x Sennheiser HD 6XX:
I’ve subsequently augmented the latter two products with optional balanced-connection capabilities via third-party cables. Common to all three is an observation I made about their retail source, Drop (formerly Massdrop): the company “partners with manufacturers both to supply bulk ‘builds’ of products at cost-effective prices in exchange for guaranteed customer numbers, and (in some cases) to develop custom variants of those products.” Hold that thought.
And I’ve subsequently added another conventional-design headphone set to the menagerie: Sony’s MDR-V6, a “colorless” classic that dates from 1985 and is still in widespread recording-studio use to this day. Sony finally obsoleted the MDR-V6 in 2020 in favor of the MDR-7506, the more recent MDR-M1, and other successor models, which motivated my admitted acquisition of several gently used MDR-V6 examples off eBay:
One characteristic that all four of these headphones share is that, exemplifying the most common headphone design approach, they’re all based on electrodynamic speaker drivers:
At this point, allow me a brief divergence; trust me, its relevance will soon be more obvious. In past writeups I’ve done on various kinds of both speakers and microphones, I’ve sometimes intermingled the alternative term “transducer”, a “device that converts energy from one form to another,” for both words. Such interchange is accurate; even more precise would be an “electroacoustic transducer”, which converts between electrical signals and sound waves. Microphones input sound waves and output electrical signals; with speakers, it’s the reverse.
I note all of this because electrodynamic speaker drivers, specifically in their most common dynamic configuration, are the conceptual mirror twins to the dynamic microphones I more recently wrote about in late November 2022. As I explained at the time, in describing dynamic mics’ implementation of the principle of electromagnetic induction:
A dynamic microphone operates on the same basic electrical principles as a speaker, but in reverse. Sound waves strike the diaphragm, causing the attached voice coil to move through a magnetic gap creating current flow as the magnetic lines are broken.
Unsurprisingly, therefore, the condenser and ribbon microphones also discussed in that late 2022 piece also have (close, albeit not exact, in both of these latter cases) analogies in driver design used for both standalone speakers and in headphones. Condenser mics first; here’s a relevant quote from my late 2022 writeup, corrected thanks to reader EMCgenius’s feedback:
Electret condenser microphones (ECMs) operate on the principle that the diaphragm and backplate interact with each other when sound enters the microphone. Either the diaphragm or backplate is permanently electrically charged, and this constant charge in combination with the varying capacitance caused by sound wave-generated varying distance between the diaphragm and backplate across time results in an associated varying output signal voltage.
Although electret drivers exist, and have found use both in standalone speakers and within headphones, their non-permanent-charge electrostatic siblings are more common (albeit still not very common). To wit, an excerpt from a relevant section of Wikipedia’s headphones entry:
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated…A special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1,000 volts.
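That “100 to 1,000 volts” figure follows from simple parallel-plate physics: the electrostatic pressure on the diaphragm scales with the square of the field, and it is tiny unless the voltage is high. A back-of-envelope sketch with illustrative numbers of my own (not from the Wikipedia excerpt):

```python
# Why electrostatic drivers need hundreds of volts: the pressure on a
# charged diaphragm between plates is roughly eps0 * E^2 / 2, a parallel-
# plate approximation with illustrative voltage and gap values.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pressure(volts, gap_m):
    """Approximate drive pressure (Pa) for a given voltage and plate gap."""
    e_field = volts / gap_m        # field strength, V/m
    return 0.5 * EPS0 * e_field ** 2

# 500 V across a 0.5-mm diaphragm-to-plate gap
print(f"{electrostatic_pressure(500, 0.5e-3):.2f} Pa")  # 4.43 Pa

# At headphone-amp levels (5 V), the pressure collapses by 100^2 = 10,000x
print(f"{electrostatic_pressure(5, 0.5e-3):.6f} Pa")
```

Only a few pascals are available even at 500 V, which is why the diaphragm must be featherweight and why the dedicated high-voltage amplifier (or “energizer”) is non-negotiable.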
Now for ribbon microphones; here’s how Wikipedia and I described them back in late 2022:
A type of microphone that uses a thin aluminum, duraluminum or nanofilm of electrically conductive ribbon placed between the poles of a magnet to produce a voltage by electromagnetic induction.
Looking at that explanation and associated image, you can almost imagine how the process would work in reverse, right? Although ribbon speakers do exist, my focus for today is their close cousins, planar magnetic (also known as orthodynamic) speakers. Wikipedia again:
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
I’ve chronologically ordered electrostatic and planar magnetic driver technologies based on their initial availability dates, not based on when examples of them came into my possession. Specifically, I found a good summary of the two approaches (along with their more common dynamic driver forebear) on Ken Rockwell’s always-informative website, which is also full of lots of great photography content (it’s always nice to stumble across a kindred interest spirit online!). Rockwell notes that electrostatics were first introduced in 1957 [editor note: by Stax, who’s still in the business], and “have been popular among enthusiasts since the late 1950s, but have always been on the fringe as they are expensive, require special amplifiers and power sources and are delicate—but they sound flawless.” Conversely, regarding planar magnetics, which date from 1972, he comments, “Planar magnetic drivers were invented in the 1970s and didn’t become popular until modern ultra-powerful magnet technology become common in the 2000s. Planar magnetics need tiny, ultra powerful magnets that didn’t used to exist. Planar magnetics offer much of the sound quality of electrostatics, with the ease-of use and durability of conventional drivers, which explains why they are becoming more and more popular.”
Which takes us, roughly 1,200 words in, to the specifics of my exotic headphone journey, which began with two sets containing planar magnetic drivers. Back in late May 2024, Woot! was selling the Logitech for Creators Blue Ella headset (Logitech having purchased Blue in mid-2018) for $99.99, versus the initial $699.99 MSRP when originally introduced in early January 2017. The Ella looked (and still looks) weird, and is also heavy, albeit surprisingly comfortable; the only time I’ve ever seen anyone actually using one was a brief glimpse on Trey Parker and Matt Stone’s heads while doing voice tracks for South Park within the recently released Paramount+ documentary ¡Casa Bonita Mi Amor!. But reviewers rave about the headphones’ sound quality, a headphone amplifier is integrated for use in otherwise high impedance-unfriendly portable playback scenarios, and my wife was bugging me for a Father’s Day gift suggestion. So…
A couple of weeks later, a $10-off promotional coupon from Drop showed up in my email inbox. Browsing the retailer’s inventory, I came across another set of planar magnetic headphones, the Drop + HIFIMAN HE-X4 (remember my earlier comments about Drop’s longstanding history of partnering with name-brand suppliers to come up with custom product variants?), at the time selling for $99.99. They were well reviewed by the Drop community, and looked much less…err…alien…than the Blue Ella, so…(you’ve already seen one stock photo of ‘em earlier):
Look how happy she is (in spite of how big they are on her head)!
And of course, with two planar magnetic headsets now in the stable, I just had to snag an electrostatic representative too, right? Koss, for example, has been making (and evolving) them ever since 1968’s initial ESP/6 model:
The most recent ESP950 variant came out in 1990 and is still available for purchase at $999.99 (or less: Black Friday promotion-priced at $700 on Amazon as I type these words). Believe it or not, it’s one of the most cost-effective electrostatic headphone options currently on the market. Still, its price tag was too salty for my curiosity taste, lifetime factory warranty temptation aside.
That box to the right is the “energizer”, which tackles both the aforementioned high voltage generation and output signal amplification tasks. Koss includes with the ESP950 kit, believe it or not, a 6 C-cell battery pack to alternatively power the energizer (therefore enabling use of the headphones) when away from an AC outlet. Portability? Hardly, although in fairness, the ESP950 was originally intended for use in live recording settings.
But then I stumbled across the fact that back in April 2019, Drop (doing yet another partnership with a brand-name supplier, this one reflective of a long-term multi-product engagement also exemplified by the earlier-shown Porta Pro X) had worked with Koss to introduce a well-reviewed $499.99 version of the kit called the Massdrop x Koss ESP/95X Electrostatic System:
Drop tweaked the color scheme of both the headphones themselves (to midnight blue) and the energizer, swapped out the fake leather (“pleather”) earpads for foam ones wrapped in velour, and dropped both the battery pack and the leather case (the latter still available for purchase standalone for $150) from Koss’s kit to reduce the price point:
Bad news: at least for the moment, the ESP/95X is no longer being sold by Drop. Good news: I found a gently used kit on eBay for $300 plus shipping and tax (and for likely obvious reasons, I also purchased a two-year extended warranty for it).
And what did all of this “retail therapy” garner me? To set the stage for this section, I’ll again quote from the introduction to Ken Rockwell’s earlier mentioned writeup:
Almost all speakers and headphones today are “dynamic.”
Conventional speakers and headphones stick a coil of wire inside a magnet, and glue this coil to a stiff cone or dome that’s held in place with a springy suspension. Current passes through this coil, and electromagnetism creates force on the coil while in the field of the magnet. The resulting force vibrates the coil, and since it’s glued to a heavy cone, moves the whole mess in and out. This primitive method is still used today because it’s cheap and works reasonably well for most purposes.
Dynamic drivers are the standard today and have been the standard for close to a hundred years. These systems are cheap, durable and work well enough for most uses, however their heavy diaphragms and big cones lead to many more sound degrading distortions and resonances absent in the newer systems below.
By “newer systems below”, of course, he’s referring to alternative electrostatic and planar magnetic approaches. And although he’s not totally off-base with his observations, the choice of words like “primitive method” reveals a bias, IMHO. It’s true that the large, flat, thin and lightweight membrane-based approaches have inherent (theoretical, at least) advantages when it comes to metrics such as distortion and transient response, leading to descriptions such as “unmatched clarity and impressive detail”, which admittedly concur with my own ears-on impressions. That said, theoretical benefits are moot if they don’t translate into meaningful real-life enhancements. To wit, for a more balanced perspective, I’ll close with a (fine-tuned by yours truly) post within an August 2023 discussion thread titled “Is there really any advantage to planar magnetics or electrostats?” at Audio Science Review, a site that I regularly reference:
For electrostatics, the strong points are the low membrane weight and drive across the entire membrane. The disadvantage is output level. The driver surface area is big, which has advantages and disadvantages. One can play with shape to change modal behavior. Electrostatics are difficult to drive in the sense that they require a bias voltage (or electret charge) and high voltage on the plates, which necessitates mains voltage or converters. Mechanical tension is a must and ‘sticking’ to one stator is a potential problem.
For planar magnetics, the strong points are the maximum sound pressure level (SPL), linearity and the driver size. The latter can be both a blessing and (frequency-dependent) downside. Fewer tuning methods are available, and it is difficult to get a bass boost in a passive way. The magnets obstruct the sound waves more than does the stator of electrostatic planars, which has an influence on mid to high frequencies. Planar magnetics are easier to drive than electrostatics but in general are inefficient compared to dynamic drivers, especially when high SPL is needed with good linearity. They are heavier than other drivers due to the magnets. They can handle a lot of power. They need a closed front volume to work properly.
Dynamics can have a much higher efficiency, at the expense of maximum undistorted SPL. They can be used directly from low power sources. There are many more ways to ‘shape’ the sound signature of the driver, and the headphone containing it. They are less expensive to make, and lighter in weight. Membrane size and shape can both find use in controlling modal issues. Linearity (max SPL without distortion) can be much worse than planar alternatives, although for low to normal SPLs, this usually is not an issue.
Balanced armature drivers [editor note: an alternative to dynamic drivers not discussed here, commonly found in earbuds] are smaller and can be easily used close to the ear canal. These drivers too have strong and weak points and are quite different from dynamic drivers. They are easier to make custom molds for due to their size.
In closing, speaking of “balance” (along with the just-mentioned difference between theoretical benefits and meaningful real-life enhancements), I found it interesting that none of the electrostatic or planar magnetic headphones discussed here offer the balanced-connection output (even optional) that I covered at length back in December 2020:
And with that, having just passed through the 2,500-word threshold, I’ll close for today with an as-usual invitation for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Balanced headphones: Can you hear the difference?
- Microphones: An abundance of options for capturing tones
- Is high-quality streamed audio an oxymoron?
- Earbud implementation options: Taking a test drive(r)
- Teardown: Analog rules over digital in noise-canceling headphones
- Audio Perceptibility: Mind The Headphone Sensitivity
The post Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly appeared first on EDN.
A brief history and technical background of heat shrink tubing

Heat shrink tubing, rarely referred to simply as “HST” even in our acronym-intensive world, is made of cross-linked polymers and is primarily used to cover and protect wire splices. EDN and Planet Analog contributor Bill Schweber provides a sneak peek of this important but often underrated technology in his latest blog.
Read the full story at EDN’s sister publication, Planet Analog.
Related Content
- Consumer connectors get ruggedized
- Be aware of connector mating-cycle limits
- Read this and give electric insulation a second thought
- Fry’s: Will hands-on opportunities shrink as component stores close?
The post A brief history and technical background of heat shrink tubing appeared first on EDN.
CES 2025: Moving towards software-defined vehicles

Software-defined vehicles (SDVs) are a big theme at CES this year, shifting vehicles from hardware-centric upgrades to over-the-air (OTA) software upgrades. To do this, vehicle subsystems must rely on a more-or-less generic processing platform that can perform a wide variety of functions to serve the various aspects of a car. As shown in Figure 1, TI’s approach is to shift from a “domain” architecture to a “zonal” one, where ECUs that were once custom-tailored to specific domains (e.g., powertrain, ADAS, infotainment, body electronics and lighting, passive safety) are now organized by location, or zone, to reduce weighty wire harnessing and improve processing speeds.
Figure 1 Traditional domain versus zone architecture. Source: Texas Instruments
TI’s radar sensor, audio processors, Class-D amplifier
TI’s automotive innovations are currently focused on powertrain systems; ADAS; in-vehicle infotainment (IVI); and body electronics and lighting. The recent announcements fall under ADAS, with the AWRL6844 radar sensor, and IVI, with the AM275 and AM62D processors and the class-D audio amplifier.
ADAS: Passenger safety solution
The AWRL6844 radar sensor uses 60-GHz millimeter-wave (mm-wave) sensing with a 4×4 antenna array and edge AI models running on an on-chip TI-specific accelerator and DSP to support several in-vehicle safety measures, including occupancy monitoring for seat belt reminders, child presence detection, and intrusion detection (Figure 2). Presently, OEMs resort to a combination of in-seat weight sensors, two UWB sensors for front-row and back-row child presence detection, and an ultrasonic intrusion module to cover the same safety measures; the AWRL6844 instead senses directly, tracking human activity such as respiration, heartbeat, and movement. The technology is designed to assist OEMs in meeting evolving regulatory safety requirements such as Euro New Car Assessment Programme (NCAP) Advanced, which rewards manufacturers for implementing advanced safety technologies as a complement to its established star rating system. Yariv Raveh, vice president and business unit manager of radar, stated, “In 2025 the Euro NCAP requirement for child presence detection will only award points for a direct sensing system and in the near future, the in-cabin sensing system must accurately distinguish between a child and an adult in order to provide a good user experience.”
Figure 2 A block diagram of TI’s AWRL6844 radar sensor and the three vehicle modes that the sensor can assist with (seat belt reminder, child presence detection, and intrusion detection). Source: Texas Instruments
IVI: Premium audio solution
Some of the features of the new AM275x-Q1 and AM62D-Q1 processors are the integration of two vector-based C7x DSP cores, multiple Arm cores, on-chip memory, an NPU accelerator, and audio networking with Ethernet AVB. The differences between the processors are highlighted in Figure 3. “Tier 1 suppliers must elect the appropriate processing components to meet all of their customer needs across their fleets. So, our answer is to provide two different architectures to give engineers the flexibility to choose across the range of use cases, all using the same audio processing family where engineers can design standalone and integrated premium audio systems across a range of performance levels with minimal additional hardware and software investment,” said Sonia Ghelani, TI’s product line manager for signal processing MCUs. The company is actively working with customers to incorporate AI into the audio signal chain for unique solutions in applications such as active noise cancellation (ANC) and road noise cancellation (RNC).
Figure 3 The AM275x DDR-less MCU and AM62D DDR-based processor for premium audio in IVI applications. Source: Texas Instruments
IVI: Class-D audio amplifier
The TAS6754-Q1 class-D amplifier (Figure 4) is meant to assist engineers with implementing TI’s “1L” modulation scheme, a technology that lowers the inductor count per audio channel to one (hence the phrase “1L”). Modern vehicles can embed well over 20 speakers and, in an effort to reduce size, weight, and cost, class-D amplifiers are being used for their higher power efficiency and lower thermal dissipation. However, these amplifiers generally require two LC filters per audio channel to attenuate high-frequency noise. “1L maintains class-D performance while reducing component count and cost, allowing the premium audio system to grow in terms of speakers and mics,” added Sonia Ghelani.
Figure 4 Sample vehicle speaker and mic distribution as well as a sample block diagram of an audio signal chain including TI’s class-D amplifier. Source: Texas Instruments
Blurring the lines between IVI and ADAS
One major discussion during the press briefing involved the industry trend of integrating ADAS and IVI functions on a single SoC. “So today we see that they’re in two separate boards, however, more and more we’re seeing that they end up being in the same board,” said Mark Ng, TI’s director of automotive systems. Sonia Ghelani added with an example of an overlap between ADAS and IVI functions, “these chimes and seat belt reminders are ADAS requirements that fall into the audio domain. As we move into a world of software-defined cars with more zonal architectures, you’ll continue to see an overlap between the two.” She continued, “For TI it’s important that we understand exactly what the customer is trying to build so that we don’t silo these systems in one bucket or another, but rather understand what problems the customer is trying to solve.”
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- Power Tips #136: Design an active clamp circuit for rectifiers at a high switching frequency
- Collaboration drives innovation in software-defined vehicles
- AI algorithms on MCU demo progress in automated driving
The post CES 2025: Moving towards software-defined vehicles appeared first on EDN.
A two transistor sine wave oscillator

Figure 1 shows a variation on a sine wave oscillator that uses just two transistors and a single variable resistor to set the frequency.
Figure 1 Just a couple of components are needed for a simple tunable sine wave oscillator.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The section around Q1 is a multiple-feedback bandpass filter (MFBF). The usual embodiment of this type of filter is shown in Figure 2.
Figure 2 A standard implementation of a MFBF.
The formulas for this filter can be found in almost any textbook (where C = C1 = C2).
Please note that the center frequency, among other things, is determined by the resistance of R3. The gain of the filter is determined by the ratio of R2/R1 in such a way that Av = -R2/(2*R1). Usually this filter is implemented around an operational amplifier, but it can also be implemented around an inverting transistor amplifier. However, because of the limited open-loop gain of the latter, the gain will be lacking at the higher frequencies.
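For reference, the standard textbook MFBF relations can be sketched as follows. This is a hedged sketch assuming the usual topology of Figure 2, with R2 as the feedback resistor, R3 as the resistor to ground, and C1 = C2 = C (consistent with the gain relation Av = -R2/(2*R1) quoted above); the example values are hypothetical, not those of Figure 1:

```python
import math

def mfbf(R1, R2, R3, C):
    """Textbook multiple-feedback bandpass filter relations for C1 = C2 = C.

    Assumes R2 is the feedback resistor and R3 the resistor to ground,
    matching the gain relation Av = -R2/(2*R1) in the text.
    """
    f0 = (1.0 / (2.0 * math.pi * C)) * math.sqrt((R1 + R3) / (R1 * R2 * R3))
    av = -R2 / (2.0 * R1)            # mid-band gain at f0
    bw = 1.0 / (math.pi * R2 * C)    # -3 dB bandwidth
    q = f0 / bw                      # quality factor
    return f0, av, bw, q

# Hypothetical example values (not taken from Figure 1):
f0, av, bw, q = mfbf(R1=10e3, R2=100e3, R3=1e3, C=10e-9)
print(f"f0 = {f0:.0f} Hz, Av = {av:.1f}, BW = {bw:.0f} Hz, Q = {q:.2f}")
```

Note that f0 depends on R3 while the mid-band gain does not, which is what makes R3 suitable as the single tuning element.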
The section around Q2 is an inverting amplifier, with an unloaded gain set by R8/R7. D1 and D2 together with R8 form a clipper to make sure that the signal offered to the MFBF is of constant level.
At the center frequency of the filter, the phase shift is 180°. Together with the 180° phase shift of Q2, there is a total 360° phase shift at this frequency. The loop gain is >1 due to the ample gain of Q2. Thus, Barkhausen’s criteria are met.
The relatively soft clipping of D1 and D2, together with the filtering of Q1, limits the amount of harmonics in the output signal. The passive components around Q1 determine the center frequency.
With the current values, the frequency can be set between 498 Hz and 1230 Hz by changing R3 between 1k and 6k. At the same time, the output amplitude changes from 1.28 Vpp to 0.68 Vpp. The output shows around 1% distortion (Figure 3).
Figure 3 The scope image shows the oscillator output at circa 1 kHz.
A variation in the supply voltage from 9 V to 12 V causes a frequency variation of only 2 Hz and a variation of output amplitude from 0.80 Vpp to 0.86 Vpp.
Cor van Rij blew his first fuse at 10, under the close supervision of his father, who promptly forbade him to ever work on the house mains again. He built his first regenerative receiver at the age of 12, and his boyhood bedroom was decorated with all sorts of antennas, while a huge collection of disassembled radios took up every horizontal plane. He studied electronics and graduated cum laude. He worked as a data design engineer and engineering manager in the telecom industry, and has been working for almost 20 years as a principal electrical design engineer, specializing in analog and RF electronics and embedded firmware. Every day is a new discovery!
Related Content
- Simple 5-component oscillator works below 0.8V
- Ultra-low distortion oscillator, part 1: how not to do it.
- Clapp oscillator
- Oscillators: How to generate a precise clock source
- Clapp versus Colpitts
The post A two transistor sine wave oscillator appeared first on EDN.
CES 2025’s sensor design highlights

Sensing solutions—a vital ingredient in automotive, consumer and industrial applications—are prominent features in the offerings displayed at CES 2025, held in Las Vegas, Nevada, from 7 to 10 January. That encompasses sensing solutions packed into system-on-chip (SoC) devices as well as hardware components meshed with sensor fusion algorithms.
But the most exciting foray in this year’s sensor parade at CES 2025 relates to how artificial intelligence (AI) content is incorporated into sensing designs.
Read the full story published at EDN’s sister publication, Planet Analog.
Related Content
- MEMS group targets IoT sensor design
- Key design considerations, future of IoT sensors
- A new era in electrochemical sensing technology
- High-Performance Design for Ultrasound Sensors
- Designer’s Guide to Industrial IoT Sensor Systems
The post CES 2025’s sensor design highlights appeared first on EDN.
Is Imagination Technologies for sale again?

Graphics chip designer Imagination Technologies is up for grabs again. A Bloomberg report claims that Canyon Bridge Capital Partners, the private equity firm with ties to Chinese state investors, has hired Lazard Inc. to seek a buyer for the Hertfordshire, U.K.-based chip designer.
Imagination, once a promising graphics technology outfit, could never recover after the Apple fiasco and the perception of Chinese ownership. According to media reports, Apple, which owned an 8.1% stake in Imagination, considered buying the British chip designer in 2016. However, after failing to agree on Imagination’s valuation, Apple left the negotiating table and announced that it would start developing its own graphics IP.
Apple accounted for nearly half of Imagination’s sales, and its departure sent shock waves through the British chip company at the time. The company’s stock fell by 70%, and in 2017, Canyon Bridge, backed by state-owned China Reform, acquired Imagination for $686 million. Soon after, Imagination began shedding its non-core businesses; for instance, it sold its connectivity business Ensigma, comprising Wi-Fi and Bluetooth silicon, to Nordic Semiconductor.
Next came the issue of China gaining access to key semiconductor technology. The effort to appoint new board members and Imagination’s listing in Shanghai proved hot potatoes, leading to intervention from the U.K. regulators to ensure that Imagination remains a U.K.-headquartered business. The company has been in distress since then.
Figure 1 Imagination has more than 3,500 patents related to graphics and related technologies.
Its CEO, Simon Beresford-Wylie, has denied a recent Daily Telegraph report that he’s stepping down. He also rejected some reports about the company engaging in illicit transfers of technology to China. Earlier, in November 2023, Reuters reported that Imagination was laying off 20% of its staff.
With this backdrop, let’s go back to Imagination on the selling block. The Bloomberg report has named Alphabet Inc.’s Google, MediaTek, Renesas, and Texas Instruments as Imagination’s key clients. But no suitors have been reported in trade media yet.
Imagination’s owners are pinning their hopes on two major factors. First, they draw hope from Nvidia’s runaway success in the graphics realm, though Nvidia’s GPUs are targeted at entirely different markets such as data centers and scientific computing. Imagination, on the other hand, mainly offers graphics solutions for lower-power markets such as automotive, PC cards, drones, robotics, and smartphones.
Second, like Nvidia, Imagination aims to bolster its standing by incorporating artificial intelligence (AI) content in its graphics IP offerings. The British chip firm plans to turn its graphics IP into AI accelerators for low-power training and inference applications.
Figure 2 Imagination is aiming to bring graphics-centric AI to battery-powered devices like drones and smartphones.
Imagination, founded in 1985, has come a long way in its 40-year long technology journey. Once seen as a jewel in Britain’s technology crown, it’s now facing the paradox of a struggling company with a highly promising technology. Perhaps its new owner could address that paradox and put the graphics design house in order.
Related Content
- Re-imagining Imagination Technologies
- Imagination’s RISC-V gambit reaches its next level
- Imagination Raises $100 Million Investment To Take On Edge AI
- GPU specialist Imagination to create 250 engineering jobs in 2022
- Imagination Sells Ensigma Wi-Fi Business to Nordic Semiconductor
The post Is Imagination Technologies for sale again? appeared first on EDN.
Clapp versus Colpitts

Edwin Henry Colpitts (January 19, 1872 – March 6, 1949)
James Kilton Clapp (December 03, 1897 – 1965)
The two persons above are the geniuses who gave us two classic oscillator circuits as shown in Figure 1.
Figure 1 The two classic oscillator circuits: Colpitts (left) and Clapp (right).
We’ve looked at these two oscillators individually before in “The Colpitts oscillator” and “Clapp oscillator”.
However, a side-by-side examination of the two oscillators is additional time well spent.
The Clapp oscillator was devised as an improvement over the Colpitts oscillator by virtue of adding one capacitor, C3, in the above image.
The amplifier “A” is nominally at a gain value of unity, but as a matter of practicality, the gain value is slightly lower than that because the amplifier is really a “follower”. If made with a vacuum tube, then “A” is a cathode follower. If made with a bipolar transistor, then “A” is an emitter follower. If made with a field effect transistor, then “A” is a source follower. The concept itself remains the same.
Each oscillator works because the RLC network develops a voltage step-up at the frequency of oscillation. The “R” is not an incorporated component, though. The “R” (R1 or R2) simply represents the output impedance of the follower. The 10 ohms that we see here is purely an arbitrary guess on my part. The other component values are also arbitrary choices, but they are convenient for illustrating just how these little beasties work.
We use SPICE simulations to examine the transfer functions of the two RLC networks as shown in Figure 2.
Figure 2 Colpitts versus Clapp SPICE simulations of the transfer functions of the two RLC networks.
Each RLC network has a peak in its frequency response which will result in oscillation at that peak frequency. However, the peak of the Clapp circuit is much sharper and narrower than that of the Colpitts circuit. This narrowing has the beneficial effect of suppressing spectral noise centered around the oscillation frequency.
Note in the examples above that the oscillation peaks differ by 0.16% and that the reactance of the L1 inductor and the reactance of the L2 C3 pair differ by 1.12%. That’s just a matter of my having chosen some convenient numbers with the intent of having the two curves match in that regard at the same peak frequency. (I almost succeeded.)
The Clapp oscillator has several advantages over the Colpitts oscillator. The transfer function peak of the Clapp circuit is narrower than that of the Colpitts which tends to yield an oscillator output with less spurious off-frequency energy meaning a “cleaner” signal.
Another advantage of the Clapp circuit is that capacitors C4 and C5 can be made very large as the L2 C3 combination is made to look like a very small inductance value at the oscillation frequency. The larger C4 and C5 values mean that any variations of those capacitance values brought about by variations of the input capacitance of the “A” stage have a minimal effect on the oscillation frequency.
That’s because frequency control of the Clapp circuit is primarily set by the series resonance of the L2 C3 pair rather than the parallel resonance of L1 versus the C1 C2 pair in the Colpitts circuit. If the “A” input capacitance tends to vary for this reason or that, the Clapp circuit is far less prone to an unwanted frequency shift as shown in Figure 3.
Figure 3 A Clapp versus Colpitts frequency shift comparison showing how the Clapp circuit (right) is far less prone to this unwanted shift in frequency.
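The frequency-stability argument above can be put into numbers with a short sketch. This is a hedged illustration using hypothetical component values (not those of Figure 1): the Colpitts frequency is set by L1 against the series combination of C1 and C2, while the Clapp frequency is set by L2 against the series combination of C3, C4, and C5, with C3 deliberately much smaller so that it dominates. We then perturb the capacitance the amplifier input “sees” by 10% and compare the resulting frequency shifts:

```python
import math

def series_c(*caps):
    """Capacitance of capacitors connected in series."""
    return 1.0 / sum(1.0 / c for c in caps)

def resonant_f(L, C):
    """Resonant frequency of an LC pair."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L = 10e-6  # hypothetical 10-µH tank inductor for both circuits

# Colpitts: L1 resonates with C1 in series with C2
f_colpitts = resonant_f(L, series_c(1e-9, 1e-9))
# Clapp: L2 resonates with C3 in series with C4 and C5; small C3 dominates
f_clapp = resonant_f(L, series_c(50e-12, 1e-9, 1e-9))

# Perturb the capacitor at the amplifier input by +10%
f_colpitts_drift = resonant_f(L, series_c(1.1e-9, 1e-9))
f_clapp_drift = resonant_f(L, series_c(50e-12, 1.1e-9, 1e-9))

shift_colpitts = abs(f_colpitts_drift - f_colpitts) / f_colpitts
shift_clapp = abs(f_clapp_drift - f_clapp) / f_clapp
print(f"Colpitts shift: {shift_colpitts:.2%}, Clapp shift: {shift_clapp:.2%}")
```

With these values, the Colpitts frequency moves by roughly 2.3% while the Clapp frequency moves by only about 0.2%, an order-of-magnitude improvement consistent with the behavior described above.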
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- The Colpitts oscillator
- Clapp oscillator
- Emitter followers as Colpitts oscillators
- Oscillator has voltage-controlled duty cycle
The post Clapp versus Colpitts appeared first on EDN.
Industrial MCU packs EtherCAT controller

GigaDevice has introduced the GD32H75E 32-bit MCU, featuring an integrated GDSCN832 EtherCAT subdevice controller, which is also available as a standalone device. Both components target industrial automation applications, including servo control, variable frequency drives, industrial PLCs, and communication modules.
Powered by an Arm Cortex-M7 core running at up to 600 MHz, the GD32H75E microcontroller includes a DSP hardware accelerator, double-precision floating-point unit, hardware trigonometric accelerator, and filter algorithm accelerator. It also comes with 1024 KB of SRAM, up to 3840 KB of flash memory with security protection, and a 64-KB cache to enhance CPU efficiency and real-time performance.
The MCU’s integrated EtherCAT subdevice controller, licensed from Beckhoff Automation, manages EtherCAT communication, acting as an interface between the EtherCAT fieldbus and the sub-application. It includes two internal PHY ports and an external MII. With 64-bit distributed clock support, it enables synchronization with other EtherCAT devices, achieving DC synchronization accuracy to within 1 µs.
The GD32H75E MCU is available in two variants: one with two internal Ethernet PHYs and another that supports bypass mode, both housed in BGA240 packages. Samples and development boards are available now, with mass production planned for Q2 2025.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Industrial MCU packs EtherCAT controller appeared first on EDN.
Wireless audio SoC integrates AI processing

Airoha Technology’s AB1595 Bluetooth audio chip features a 6-core architecture and a built-in AI hardware accelerator. It consolidates functions typically spread across multiple chips into a single SoC and achieves Microsoft Teams Open Office certification.
The AB1595 uses AI algorithms and input from up to 10 microphones to improve speech clarity by reducing background noise. This combination allows it to accurately distinguish between the user’s voice and environmental sounds, achieving professional-grade speech quality. In noisy environments like offices and cafes, it enhances voice noise suppression from 10 dB up to 40 dB, optimizing speech quality and elevating consumer headsets to professional teleconference standards.
Real-time adaptive active noise cancellation (ANC) in the AB1595 boosts environmental noise attenuation across a wide frequency range. It detects the user’s wearing condition (e.g., fit or leakage) and adjusts compensation accordingly. Internal filters automatically adapt to both the fit and surrounding noise, balancing effective noise cancellation with comfort for a superior wearing and listening experience.
Airoha reports that the AB1595 has been adopted by customers, with products expected to be available in Q1 2025. A datasheet was not available at the time of this announcement. Contact Airoha Technology here.
The post Wireless audio SoC integrates AI processing appeared first on EDN.
85-V LED driver handles multiple topologies

Designed for automotive LED lighting systems, Diodes’ AL8866Q driver supports buck, boost, buck-boost, and single-ended primary-inductance converter (SEPIC) topologies. This DC-switching LED driver-controller operates over an input voltage range of 4.7 V to 85 V, accommodating 12-V, 24-V, and 48-V battery power rails. It is suitable for applications such as daytime running lights, high/low beams, fog lights, turn signals, and brake lights.
The AL8866Q employs a 400-kHz fixed-frequency peak current-mode control architecture. Spread spectrum frequency modulation enhances EMI performance and aids compliance with the CISPR 25 Class 5 standard.
The device enables analog or PWM dimming via its DIM pin. A 1% reference tolerance ensures better brightness control and matching between lamps. With an analog dimming range of 1% to 100%, the AL8866Q maintains ±12% output current accuracy at 20% dimming. Alternatively, PWM dimming, ranging from 0.1 kHz to 1 kHz, provides a 100:1 dynamic range.
An integrated soft-start function gradually increases the inductor and switch current, minimizing potential overvoltage and overcurrent at the output. The driver also includes an open-drain fault output to signal various fault conditions.
Prices for the AEC-Q100 Grade 1 qualified AL8866Q driver start at $0.48 each in lots of 1000 units.
The post 85-V LED driver handles multiple topologies appeared first on EDN.
PCIe Gen4 SSD delivers 6200 MB/s

The P400 V4 from Patriot Memory is a PCIe Gen 4 x4 M.2 SSD, offering read speeds up to 6200 MB/s and write speeds up to 5200 MB/s. Optimized for PC and PS5 compatibility, it provides gamers and content creators with high-speed performance and enhanced thermal management. Its compact M.2 2280 form factor makes it well-suited for space-constrained systems, including thin laptops and small form-factor PCs.
The P400 V4 achieves a total bytes written (TBW) rating of 1280 TB. Available in storage capacities ranging from 500 GB to 4 TB, the drive features SmartECC technology for improved reliability. To maintain consistent peak performance during intensive operations, the P400 V4 incorporates a graphene heatshield that helps prevent thermal throttling and efficiently manages thermal output.
The P400 V4’s PCIe Gen 4 x4 controller is NVMe 2.0 compliant, offering improved performance and support for the latest features. The SSD comes with a 5-year warranty and supports Windows 7, 8.0, 8.1, 10, and 11 (drivers may be required for older versions).
The post PCIe Gen4 SSD delivers 6200 MB/s appeared first on EDN.
The advent of co-packaged optics (CPO) in 2025

Co-packaged optics (CPO)—the silicon photonics technology promising to transform modern data centers and high-performance networks by addressing critical challenges like bandwidth density, energy efficiency, and scalability—is finally entering the commercial arena in 2025.
According to a report published in Economic Daily News, TSMC has successfully integrated CPO with advanced semiconductor packaging technologies, and sample deliveries are expected in early 2025. Next, TSMC is projected to enter mass production in the second half of 2025 with 1.6T optical transmission offerings.
Figure 1 CPO facilitates a shift from electrical to optical transmission to address the interconnect limitations such as signal interference and overheating. Source: TrendForce
The report reveals that TSMC has successfully trialled a key CPO technology—micro ring modulator (MRM)—at its 3-nm process node in close collaboration with Broadcom. That’s a significant leap from electrical to optical signal transmission for computing tasks.
The report also indicates that Nvidia plans to adopt CPO technology, starting with its GB300 chips, which are set for release in the second half of 2025. Moreover, Nvidia plans to incorporate CPO in its subsequent Rubin architecture to address the limitations of NVLink, the company’s in-house high-speed interconnect technology.
What’s CPO?
CPO is a crucial technology for artificial intelligence (AI) and high-performance computing (HPC) applications. It enhances a chip’s interconnect bandwidth and energy efficiency by integrating optics and electronics within a single package, which significantly shortens electrical link lengths.
Here, optical links offer multiple advantages over traditional electrical transmission; they lower signal degradation over distance, reduce susceptibility to crosstalk, and offer significantly higher bandwidth. That makes CPO an ideal fit for data-intensive AI and HPC applications.
Furthermore, CPO offers significant power savings compared to traditional pluggable optics, which struggle with power efficiency at higher data rates. The early implementations show 30% to 50% reductions in power consumption, claims an IDTechEx study titled “Co-Packaged Optics (CPO): Evaluating Different Packaging Technologies.”
This integration of optics with silicon—enabled by advancements in chiplet-based technology and 3D-IC packaging—also reduces signal degradation and power loss and pushes data rates to 1.6T and beyond.
Figure 2 Optical interconnect technology has been gaining traction due to the growing need for higher data throughput and improved power efficiency. Source: IDTechEx
Heterogeneous integration, a key ingredient in CPO, enables the fusion of an optical engine (OE) with switch ASICs or XPUs on a single package substrate. Here, the optical engine includes both photonic ICs and electronic ICs. The packaging in CPO generally employs two approaches: the first involves packaging the optical engine itself, while the second focuses on system-level integration of the optical engine with ICs like ASICs or XPUs.
A new optical computing era
TSMC’s approach involves integrating CPO modules with advanced packaging technologies such as chip-on-wafer-on-substrate (CoWoS) or system-on-integrated-chips (SoIC). It eliminates the speed limitations of traditional copper interconnects and puts TSMC at the forefront of a new optical computing era.
However, challenges such as low yield rates in CPO module production might lead TSMC to outsource some optical-engine packaging orders to other advanced packaging companies. This suggests that the complex packaging process underlying CPO will require considerable fine-tuning before commercial realization.
Still, it’s a breakthrough that highlights a tipping point for AI and HPC performance, wrote Jeffrey Cooper in his LinkedIn post. Cooper, a former sourcing lead for ASML, also sees a growing need for cross-discipline expertise in photonics and semiconductor packaging.
Related Content
- Optical interconnects draw skepticism, scorn
- TSMC crunch heralds good days for advanced packaging
- Intel and FMD’s Roadmap for 3D Heterogeneous Integration
- Heterogeneous Integration and the Evolution of IC Packaging
- CEA-Leti Develops Active Optical Interposers to Connect Chiplets
- Road to Commercialization for Optical Chip-to-Chip Interconnects
The post The advent of co-packaged optics (CPO) in 2025 appeared first on EDN.
PWM power DAC incorporates an LM317

Instead of the conventional approach of backing up a DAC with an amplifier to boost output, this design idea charts a path less traveled to power. It integrates an LM317 positive regulator with a simple 8-bit PWM DAC topology to obtain a robust 11-V, 1.5-A capability. It thus preserves simplicity while exploiting the built-in fault protection features (thermal and overload) of that time-proven Bob Pease masterpiece. Its output is proportional to the guaranteed 2% precision of the LM317’s internal voltage reference, making it independent of the vagaries of both the 5-V logic supply rail and the incoming raw DC supply.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 diagrams how it works.
Figure 1 LM317 regulator melds with HC4053 CMOS switch to make a 16-W PWM power DAC.
CMOS SPDT switches U1b and U1c accept a 10-kHz PWM signal to generate a 0 V to 9.75 V “ADJ” control signal for the U2 regulator via feedback networks R1, R2, and R3. The incoming PWM signal is AC-coupled so that U1 can “float” on U2’s output. U1c provides an inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.” Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy.
This feedback arrangement does, however, make the output voltage a nonlinear function of PWM duty factor (DF) as given by:
Vout = 1.25 / (1 – DF*(1 – R1/(R1 + R2)))
= 1.25 / (1 – 0.885*DF)
This is graphed in Figure 2.
Figure 2 The Vout (1.25 V to 11 V) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.885*DF).
Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any given Vout.
Figure 3 The inverse of Figure 2 where PWM DF = (1 – 1.25/Vout)/0.885.
The corresponding 8-bit PWM setting works out to: Dbyte = 255 (1 – 1.25 / Vout) / 0.885
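These relationships are easy to sanity-check numerically. Here’s a short Python sketch (the function names are mine, the 0.885 factor follows from the R1/R2 divider above, and the 5-V target is just an example):

```python
# Transfer relationships for the LM317-based PWM power DAC, per the text:
#   Vout  = 1.25 / (1 - 0.885*DF)   where DF is the PWM duty factor (0..1)
#   DF    = (1 - 1.25/Vout) / 0.885
#   Dbyte = 255 * DF                the corresponding 8-bit PWM setting

K = 0.885  # DF coefficient, set by the R1/R2 feedback divider in Figure 1

def vout(df):
    """Output voltage for a given PWM duty factor (0 to 1)."""
    return 1.25 / (1.0 - K * df)

def duty_for(v):
    """PWM duty factor needed for a target output voltage."""
    return (1.0 - 1.25 / v) / K

def dbyte_for(v):
    """8-bit PWM register setting for a target output voltage."""
    return round(255 * duty_for(v))

print(vout(0.0))            # 1.25 (V) at 0% duty
print(round(vout(1.0), 2))  # 10.87 (V), i.e., the ~11-V full scale
print(dbyte_for(5.0))       # register value for a 5-V output
```

Running the round trip vout(duty_for(v)) also confirms the inverse relationship plotted in Figure 3.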
Vfullscale = 1.25 / (R1/(R1 + R2)), so design choices other than 11 V are available. 11 V is the maximum consistent with HC4053’s ratings, but up to 20 V is feasible if the metal gate CD4053B is substituted for U1. Don’t forget, however, the requirement that R3 = R1||R2.
The supply rail V+ can be anything from a minimum of Vfullscale + 3 V, to accommodate U2’s minimum dropout headroom, up to the LM317’s abs-max 40-V limit. DAC accuracy will be unaffected due to this chip’s excellent PSRR, although of course efficiency may suffer.
U2 should be heatsunk as dictated by the dissipation caused by the required output current multiplied by the V+ to Vout differential. Up to double-digit watts is possible at high currents and low Vout.
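To gauge the heatsink requirement, here’s a quick worst-case estimate in Python (the 14-V supply and other values below are assumed examples, not figures from the article):

```python
# LM317 pass-element dissipation: P = Iout * (Vin - Vout), i.e., the
# input-to-output differential across U2 times the load current.
# Assumed worst-case example: a 14-V supply (11-V full scale plus 3 V of
# dropout headroom), Vout at its 1.25-V minimum, and the full 1.5-A output.
v_supply, v_out, i_out = 14.0, 1.25, 1.5
p_diss = i_out * (v_supply - v_out)
print(p_diss)  # 19.125 W; hence the heatsink advice
```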
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- Cancel PWM DAC ripple with analog subtraction but no inverter
The post PWM power DAC incorporates an LM317 appeared first on EDN.
2024: A year’s worth of interconnected themes galore

As any of you who’ve already seen my precursor “2025 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2024: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022 and 2023 (we skipped 2020).
As I’ve done in past years, I thought I’d start by scoring the topics I wrote about a year ago in forecasting the year to come:
- Increasingly unpredictable geopolitical tensions
- The 2024 United States election
- Windows (and Linux) on Arm
- Declining smartphone demand, and
- Internal and external interface evolutions
Maybe I’m just biased but I think I nailed ‘em all, albeit with varying degrees of impactfulness. To clarify, by the way, it’s not that whether the second one would happen was difficult to predict; its outcome, which I discussed a month back, is what was unclear at the time. In the sections that follow, I’m going to elaborate on one of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).
Battery transformations
I’ve admittedly written quite a lot about lithium-based batteries and the devices they fuel over the past year, as I suspect I’ll also be doing in the year(s) to come. Why? My introductory sentence to a recent teardown of a “vape” device answers that question, I think:
The ever-increasing prevalence of lithium-based batteries in various shapes, sizes and capacities is creating a so-called “virtuous circle”, leading to lower unit costs and higher unit volumes which encourage increasing usage (both in brand new applications and existing ones, the latter as a replacement for precursor battery technologies), translating into even lower unit costs and higher unit volumes that…round and round it goes.
Call me simple-minded (as some of you already may have done a time or few over the years!) but I consistently consult the same list of characteristics and tradeoffs among them when evaluating various battery technologies…a list that was admittedly around half its eventual length when I first scribbled it on a piece of scrap paper a few days ago, until I kept thinking of more things to add in the process of keyboard-transcribing it (thereby eventually encouraging me to delete the “concise” adjective I’d originally used to describe it)!
- Volume manufacturing availability, translating to cost (as I allude to in the earlier quote)
- Form factor implementation flexibility (or not)
- The required dimensions and weight for a given amount of charge-storage capacity
- Both peak and sustained power output
- The environmental impacts of raw materials procurement, battery manufacturing, and eventual disposal (or, ideally, recycling)
- Speaking of “environmental”, the usable operating temperature range, along with tolerance to other environment variables such as humidity, shock and vibration
- And recharge speed (both to “100% full” and to application-meaningful percentages of that total), along with the number of recharge cycles the battery can endure until it no longer can hold enough anode electrons to be application-usable in a practical sense.
Although plenty of lithium battery-based laptops, smartphones and the like are sold today, a notable “driver” of incremental usage growth in the first half of this decade (and beyond) has been various mobility systems—battery-powered drones (and, likely in the future, eVTOLs), automobiles and other vehicles, untethered robots, and watercraft (several examples of which I’ll further elaborate on later in this writeup, for a different reason). Here, the design challenges are quite interconnected and otherwise complex, as I discussed back in October 2021:
Li-ion battery technology is pretty mature at this point, as is electric motor technology, so in the absence of a fundamental high-volume technology breakthrough in the future, to get longer flight time, you need to include bigger batteries…which leads to what I find most fundamentally fascinating about drones and their flying kin: the fundamental balancing act of trading off various contending design factors that is unique to the craft of engineering (versus, for example, pure R&D or science). Look at what I’ve just said. Everyone wants to be able to fly their drone as long as possible, before needing to land and swap out battery packs. But in order to do so, that means that the drone manufacturer needs to include larger battery cells, and more of them.
Added bulk admittedly has the side benefit of making the drone more tolerant of wind gusts, for example, but fundamentally, the heavier the drone the beefier the motors need to be in order to lift it off the ground and fly it for meaningful altitudes, distances, and durations. Beefier motors burn more juice, which begs for more batteries, which make the drone even heavier…see the quagmire? And unlike with earth-tethered electricity-powered devices, you can’t just “pull over to the side of the road” if the batteries die on you.
Now toss in the added “twist” that everyone also wants their drone to be as intelligent as possible so it doesn’t end up lost or tangled in branches, and so it can automatically follow whatever’s being videoed. All those image and other sensors, along with the intelligence (and memory, and..) to process the data coming off them, burns juice, too. And don’t forget about the wireless connectivity between the drone and the user—minimally used for remote control and analytics feedback to the user…How do you balance all of those contending factors to come up with an optimum implementation for your target market?
Although the previous excerpt was specifically about drones, many of the points I raised are also relevant at least to a degree in the other mobility applications I mentioned. That said, an electric car’s powerplant size and weight constraints aren’t quite as acute as an airborne system’s might be, for example. This application-defined characteristics variability, both in an absolute sense and relative to others on my earlier list, helps explain why, as Wikipedia points out, “there are at least 12 different chemistries of Li-ion batteries” (with more to come). To wit, developers are testing out a diversity of both anode and cathode materials (and combinations of them), increasingly aided by AI (which I’ll also talk more about later in this piece) in the process, along with striving to migrate away from “wet” electrolytes, which among other things are flammable and prone to leakage, toward safer solid-state approaches.
Another emerging volume growth application, as I highlighted throughout the year, is battery generators, most frequently showcased by me in their compact portable variants. Here, while form factor and weight remain important, since the devices need to be hauled around by their owners, they’re stationary while in use. Extrapolate further and you end up with even larger home battery-backup banks that never get moved once installed. And extrapolate even further, to a significant degree in fact, and you’re now talking about backup power units for hospitals, for example, or even electrical grid storage for entire communities or regions. One compelling use case is to smooth out the inherent availability variability of renewable energy sources such as solar and wind, among other reasons to “feed” the seemingly insatiable appetites of AI workload-processing data centers in a “green”-as-possible manner. And in all these stationary-backup scenarios, installation space is comparatively abundant and weight is also of lesser concern; the primary selection criteria are factors such as cost, invulnerability, and longevity.
As such, non-lithium-based technologies will likely become increasingly prominent in the years to come. Sodium-ion batteries (courtesy of, in part, sodium’s familial proximity to lithium in the Periodic Table of Elements) are particularly near-term promising; you can already buy them on Amazon! The first US-based sodium-ion “gigafactory” was recently announced, as was the US Department of Energy’s planned $3 billion in funding for new sodium-ion (and other) battery R&D projects. Iron-based batteries such as the mysteriously named (but not so mysterious once you learn how they work) iron-air technology tout raw materials abundance (how often do you come across rust, after all?) translating into low cost. Vanadium-based “flow” batteries also hold notable promise. And there’s one other grid-scale energy storage candidate with an interesting twist: old EV batteries. They may no longer be sufficiently robust to reliably power a moving vehicle, but stationary backup systems still provide a resurrecting life-extension opportunity.
For ongoing information on this topic, in addition to my and colleagues’ periodic coverage, market research firm IDTechEx regularly publishes blog posts on various battery technology developments which I also commend to your inspection. I have no connection with the firm aside from being a contented consumer of their ongoing information output!
Drones as armaments
As a kid, I was intrigued by the history of warfare. Not (at all) the maiming, killing and other destruction aspects, mind you, instead the equipment and its underlying technologies, their use in conflicts, and their evolutions over time. Three related trends that I repeatedly noticed were:
- Technologies being introduced in one conflict and subsequently optimized (or in other cases disbanded) based on those initial experiences, with the “success stories” then achieving widespread use in subsequent conflicts
- The oft-profound advantages that adopters of new successful warfare technologies (and equipment and techniques based on them) gained over less-advanced adversaries who were still employing prior-generation approaches, and
- That new technology and equipment breakthroughs often rapidly obsoleted prior-generation warfare methods
Re point #1, off the top of my head, there’s (with upfront apologies for any United States centricity in the examples that follow):
- Chemical warfare, considered (and briefly experimented with) during the US Civil War, with widespread adoption beginning in World War I (WWI)
- Airplanes and tanks, introduced in WWI and extensively leveraged in WWII (and beyond)
- Radar (airplanes), sonar (submarines) and other electronic surveillance, initially used in WWII with broader implementation in subsequent wars and other conflicts
- And RF and other electronics-based communications methods, including cryptography (and cracking), once again initiated in WWII
And to closely related points #2 and #3, two WWII examples come to mind:
- I still vividly recall reading as a kid about how the Polish army strove, armed with nothing but horse cavalry, to defend against invading German armored brigades, although the veracity of at least some aspects of this propaganda-tainted story are now in dispute.
- And then there was France’s Maginot Line, a costly “line of concrete fortifications, obstacles and weapon installations built by France in the 1930s” ostensibly to deter post-WWI aggression by Germany. It was “impervious to most forms of attack” across the two countries’ shared border, but the Germans instead “invaded through the Low Countries in 1940, passing it to the north”. As Wikipedia further explains, “The line, which was supposed to be fully extended further towards the west to avoid such an occurrence, was finally scaled back in response to demands from Belgium. Indeed, Belgium feared it would be sacrificed in the event of another German invasion. The line has since become a metaphor for expensive efforts that offer a false sense of security.”
I repeatedly think of case studies like these as I read about how the Ukrainian armed forces are, both in the air and sea, now using innovative, often consumer electronics-sourced approaches to defend against invading Russia and its (initially, at least) legacy warfare techniques. Airborne drones (more generally: UAVs, or unmanned aerial vehicles) have been used for surveillance purposes since at least the Vietnam War as alternatives to satellites, balloons, manned aircraft and the like. And beginning with aircraft such as the mid-1990s Predator, UAVs were also able to carry and fire missiles and other munitions. But such platforms were not only large and costly, but also remotely controlled, not autonomous to any notable degree. And they weren’t in and of themselves weapons.
That’s all changed in Ukraine (and elsewhere, for that matter) in the modern era. In part hamstrung by its allies’ constraints on what missiles and other weapons it was given access to and how and where they could be used, Ukraine has broadened drones’ usage beyond surveillance into innate weaponry, loading them up with explosives and often flying them hundreds of miles for subsequent detonation, including all the way to Moscow. Initially, Ukraine retrofitted consumer drones sourced from elsewhere, but it now manufactures its own UAVs in high volumes. Compared to their Predator precursors, they’re compact, lightweight, low cost and rugged. They’re increasingly autonomous, in part to counteract Russian jamming of wireless control signals coming from their remote operators. They can even act as flamethrowers. And as the image shown at the beginning of this section suggests, they not only fly but also float, a key factor in Ukraine’s to-date success both in preventing a Russian blockade of the Black Sea and in attacking Russia’s fleet based in Crimea.
AI (again, and again, and…)
AI has rapidly grown beyond its technology-coverage origins and into the daily clickbait headlines and chyrons of even mainstream media outlets. So it’s probably no surprise that this particular TLA (with the “T” standing for “two” this time, versus the usual “three”) is a regular presence in both my end-of-year and next-year-forecast writeups, along with plenty of ongoing additional AI coverage in-between each year’s content endpoints. A month ago, for example, I strove to convince you that multimodal AI would be ascendant in the year(s) to come. Twelve months ago, I noted the increasing importance of multimodal models’ large language model (LLM) precursors over the prior year, and the month(-ish) before that, I’d forecasted that generative AI would be a big deal in 2023 and beyond. Lather, rinse and repeat.
What about the past twelve months; what are the highlights? I could easily “write a book” on just this topic (as I admittedly almost already did earlier re “Battery Transformations”). But with the 3,000-word count threshold looming, and always mindful of Aalyia’s wrath (I kid…maybe…), I’ll strive to practice restraint in what follows. I’m not, for example, going to dwell on OpenAI’s start-of-year management chaos and ongoing key-employee-shedding, nor on copyright-infringement lawsuits brought against it and its competitors by various content-rights owners…or for that matter, on lawsuits brought against it and its competitors (and partners) by other competitors. Instead, here’s some of what else caught my eye over the past year:
- Deep learning models are becoming more bloated with the passage of time, despite floating point-to-integer conversion, quantization, sparsity and other techniques for trimming their size. Among other issues, this makes it increasingly infeasible to run them natively (and solely) on edge devices such as smartphones, security cameras and (yikes!) autonomous vehicles. Imagine (a theoretical case study, mind you) being unable to avoid a collision because your car’s deep learning model is too dinky to cover all possible edge and corner cases and a cloud-housed supplement couldn’t respond in time due to server processing and network latency-and-bandwidth induced delays…
- As the models themselves grow, the amount of processing horsepower (not to mention consumed power) and time needed to train them increases as well…exponentially so.
- Resource demands for deep learning inference are also skyrocketing, especially as the trained models referenced become more multimodal and otherwise complex, not to mention the new data the inference process is tasked with analyzing.
- And semiconductor supplier NVIDIA today remains the primary source of processing silicon for training, along with (to a lesser but still notable market segment share degree) inference. To the company’s credit, decades after kicking off its advocacy of general-purpose graphics processing (GPGPU) applications, its longstanding time, money and headcount investments have borne big-time fruit for the company. That said, competitors (encouraged by customers aspiring for favorable multi-source availability and pricing outcomes) continue their pursuit of the “Green Team”.
- To my earlier “consumed power” comments, along with my even earlier “seemingly insatiable appetites of AI workload-processing data centers” comments, and as my colleague (and former boss) Bill Schweber also recently noted, “AI-driven datacenter energy demand could expand 160 percent over the next two years, leaving 40 percent of existing facilities operationally constrained by power availability,” to quote recent coverage in The Register. In response to this looming and troubling situation, in the last few days alone I’ve come across news regarding Amazon (“Amazon AI Data Centers To Double as Carbon Capture Machines”) and Meta (“Meta wants to use nuclear power for its data centers”). Plenty of other recent examples exist. But will they arrive in time? And will they only accelerate today’s already worrying global warming pace in the process?
- But, in spite of all of this spiraling “heavy lifting”, researchers continue to conclude that AI still doesn’t have a coherent understanding of the world, not to mention that the ROI on ongoing investments in what AI can do may be starting to level off (at least to some observers, albeit not a universally held opinion).
- One final opinion: deep learning models are seemingly already becoming commodities, a trend aided in part by increasingly capable “open” options (although just what “open” means has no shortage of associated controversy). If I’m someone like Amazon, Apple, Google, Meta or Microsoft, whose deep learning investments reap returns in associated AI-based services and whose models are “buried” within these services, this trend isn’t so problematic. Conversely, however, for someone whose core business is in developing and licensing models to others, the long-term prognosis may be less optimistic, no matter how rosy (albeit unprofitably so) things may currently seem to be. Heck, even AMD and NVIDIA are releasing open model suites of their own nowadays…
I’m writing this in early December 2024. You’ll presumably be reading it sometime in January 2025. I’ll split the difference and wrap up by first wishing you all a Happy New Year!
As usual, I originally planned to cover a number of additional topics in this piece, such as (in no particular order save for how they came out of my noggin):
- Matter and Thread’s misfires and lingering aspirations
- Much discussed (with success reality to follow?) chiplets
- Plummeting-cost solar panels
- Iterative technology-related constraints on China (and that country’s predictable responses), and
- Intel’s ongoing, deepening travails
But (also) as usual I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having now passed through 3,000 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts in the year(s) to come. Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- 2023: Is it just me, or was this year especially crazy?
- A tech look back at 2022: We can’t go back (and why would we want to?)
- A 2021 technology retrospective: Strange days indeed
- 10 consumer technology breakthroughs from 2019
- 2025: A technology forecast for the year ahead
The post 2024: A year’s worth of interconnected themes galore appeared first on EDN.
Ternary gain-switching 101 (or 10202, in base 3)

This design idea is centered on the humble on/off/on toggle switch, which is great for selecting something/nothing/something else, but can be frustrating when three active options are needed. One possibility is to use the contacts to connect extra, parallel resistors across a permanent one (for example), but the effect is something like low/high/medium, which just looks wrong.
That word “active” is the clue to making the otherwise idle center position do some proper work, like helping to control an op-amp stage’s gain, as shown in Figure 1.
Figure 1 An on/off/on switch gives three gain settings in a non-inverting amplifier stage and does so in a rational order.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I’ve used this principle many times, but can’t recall having seen it in published circuits, and think it’s novel, though it may be so commonplace as to be invisible. It’s certainly obvious when you think about it.
A practical application
That’s the basic idea, but it’s always more satisfying to convert such ideas into something useful. Figure 2 illustrates just that: an audio gain-box whose amplification is switched in a ternary sequence to give precise 1-dB steps from 0 to +26 dB. As built, it makes a useful bit of lab kit.
Figure 2 Ternary switching over three stages gives 0–26 dB gain in precise 1-dB steps.
Three gain stages are concatenated, each having its own switch. C1 and C2 isolate any DC, and R1 and R12 are “anti-click” resistors, ensuring that there’s no stray voltage on the input or output when something gets plugged in. A1d is the usual rail-splitter, allowing use on a single, isolated supply.
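The ternary arithmetic behind the three concatenated stages is easy to verify. Assuming each stage’s switch contributes 0, 1, or 2 times a stage weight of 1, 3, or 9 dB (the weights are my inference, consistent with three stages spanning 0 to 26 dB), the 27 switch combinations cover every 1-dB step exactly once; a quick Python check:

```python
from itertools import product

# Each on/off/on switch contributes 0, 1, or 2 units of its stage's dB
# weight. Weights of 1, 3, and 9 dB are an assumption, consistent with
# the stated 0-to-26-dB range, since 2*(1 + 3 + 9) = 26.
WEIGHTS_DB = (1, 3, 9)

gains = sorted(sum(t * w for t, w in zip(trits, WEIGHTS_DB))
               for trits in product((0, 1, 2), repeat=3))

print(gains == list(range(27)))  # True: every 1-dB step from 0 to 26
```

This is exactly why three three-position switches beat five two-position ones for the same resolution: 3^3 = 27 settings from three stages, versus 2^5 = 32 from five.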
The op-amps are shown as common-or-garden TL074/084s. For lower noise and distortion, (a pair of) LM4562s would be better, though they take a lot more current. With a 5-V supply, the MCP6024 is a good choice. For stereo use, just duplicate almost everything and use double-pole switches.
All resistor values are E12/24 for convenience. The resistor combinations shown come much closer to the ideal, calculated values than the assumed 1% tolerance of the actual parts requires, and give a better match than single E96 values would in the same positions.
Other variations on the theme
The circuit of Figure 2 could also be built for DC use but would then need low-offset op-amps, especially in the last stage. (Omit C1, C2, and other I/O oddments, obviously.)
Figure 1 showed the non-inverting version, and Figure 3 now employs the idea in an inverting configuration. Beware of noise pick-up at the virtual-earth point, the op-amp’s inverting input.
Figure 3 An inverting amplifier stage using the same switching principle.
The same scheme can also be used to make an attenuator, and a basic stage is sketched in Figure 4. Its input resistance changes depending on the switch setting, so an input buffer is probably necessary; buffering between stages and of the output certainly is.
Figure 4 A single attenuation stage with three switchable levels.
Conclusion: back to binary basics
You’ve probably been wondering, “What’s wrong with binary switching?” Not a lot, except that it uses more op-amps and more switches while being rather obvious and hence less fun.
Anyway, here (Figure 5) is a good basic circuit to do just that.
Figure 5 Binary switching of gain from 0 to +31 dB, using power-of-2 steps. Again, the theoretical resistor values are much closer to the ideal than their actual 1% tolerances.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- To press on or hold off? This does both.
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
The post Ternary gain-switching 101 (or 10202, in base 3) appeared first on EDN.
A Bluetooth receiver, an identity deceiver

In mid-October 2015, EDN ran my teardown of Logitech’s Bluetooth Audio Adapter (a receiver, to be precise) based on a CSR (now Qualcomm) BlueCore CSR8630 Single Chip Audio ROM.
The CSR module covers the bulk of the bottom half of the PCB topside, with most of the top half devoted to discretes and such for implementing the audio line-level output amp and the like:
A couple of weeks later, in a follow-up blog post, I mentioned (and briefly compared) a bunch of other Bluetooth adapters I’d come across. Some acted as both receivers and transmitters, for example, while others embedded batteries for portable usage. They implemented varying Bluetooth profiles and specification levels, and some even supported aptX and other optional audio codecs. Among them were three different Aukey models; here’s what I said about them:
I recently saw Aukey’s BR-C1 on sale for $12.99, for example (both black and white color scheme options are available), while the BR-C2 was recently selling for $1 less, and the even fuller-featured BT-C2 was recently special-priced at $24.99.
Logitech’s device is AC-powered via an included “wall wart” intermediary and therefore appropriate for adding Bluetooth input-source capabilities to an A/V receiver, as discussed in my teardown. Aukey’s products conversely contain built-in rechargeable batteries and are therefore primarily intended for mobile use, such as converting a conventional pair of headphones into wireless units. Recharging of the Aukey devices’ batteries occurs via an included micro-USB cable and a not-included 5-V USB power source.
All of the Aukey products can also act as hands-free adapters, by virtue of their built-in microphones. The BR-C1 and BR-C2’s analog audio connections are output-only, thereby classifying them as Bluetooth receivers; the more expensive BT-C2 is both a Bluetooth transmitter and receiver (albeit not both at the same time). But the Bluetooth link between all of them and a wirelessly tethered device is bi-directional, enabling not only speakerphone integration with a vehicle audio subsystem or set of headphones (via analog outputs) but also two-way connectivity to a smartphone (via Bluetooth).
The fundamental difference between the BR-C1 and BR-C2, as far as I can tell, is the form factor; the BR-C1 is 2.17×2.17×0.67 inches in size, while the BR-C2 is 2×1×0.45 inches. All other specs, including play and standby time, seem to be identical. None of Aukey’s devices offer dual RCA jacks as an output option; they’re 3.5 mm TRS-only. However, as my teardown writeup notes, the inclusion of a TRS-to-dual RCA adapter cable in each product’s kit makes multiple integrated output options a seemingly unnecessary functional redundancy.
As time passed, my memory of the specifics of that latter piece admittedly faded, although I’ve re-quoted the following excerpt a few times in comparing a key point made then with other conceptually reminiscent product categories: LED light bulbs, LCDs, and USB-C-based devices:
Such diversity within what’s seemingly a mature and “vanilla” product category is what prompted me to put cyber-pen to cyber-paper for this particular post. The surprising variety I encountered even during my brief period of research is reflective of the creativity inherent to you, the engineers who design these and countless other products. Kudos to you all!
Fast forward to early December 2023, when I saw an Aukey Bluetooth audio adapter intended specifically for in-vehicle use (therefore battery powered, and with an embedded microphone for hands-free telephony), although usable elsewhere too. It was advertised at bargains site SideDeal (a sibling site to same-company Meh, who I’ve also mentioned before) for $12.99.
No specific model number was documented on the promo page, only some features and specs:
Features
- Wireless Audio Stream
- The Bluetooth 5 receiver allows you to wirelessly stream audio from your Bluetooth enabled devices to your existing wired home or car stereo system, speakers, or headphones
- Long Playtime
- Built-in rechargeable battery supports 18 hours of continuous playback and 1000 hours of standby time
- Dual Device Link
- Connect two Bluetooth devices simultaneously; enjoy music or answer phone calls from either of the two paired devices
- Easy Use
- Navigate your music on the receiver with built-in controls which can also be used to manage hands-free calls or access voice assistant
Specifications
- Type: Receiver
- Connectivity: 3.5mm
- Bluetooth standard: Bluetooth v5.0
- Color: Black
- To fit: Audio Receivers
- Ports: 3.5 mm Jack
I bit. I bought three, actually; one each for my and my wife’s vehicles, and a third for teardown purposes. When they arrived, I put the third boxed one on the shelf.
Fast forward nearly a year later, to the beginning of November 2024 (and a couple of weeks prior to when I’m writing these words now), when I pulled the box back off the shelf and prepared for dissection. I noticed the model number, BR-C1, stamped on the bottom of the box but didn’t think anything more of it until I remembered and re-read that blog post published almost exactly nine years earlier, which had mentioned the exact same device:
(I’ve saved you from the boring shots of the blank cardboard box sides)
Impressive product longevity, eh? Hold that thought. Let’s dive in:
The left half of the box contents comprises three cables: USB-A to micro-USB for recharging, plus 3.5 mm (aka, 1/8”) TRS to 3.5 mm, and 3.5 mm to dual RCA for audio output connections:
And a couple of pieces of documentation (a PDF of the user manual is available here):
On the right, of course, is our patient (my images, this time, versus the earlier stock photos), as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
The other three device sides, like the earlier box sides, are bland, so I’ve not included images of them. You’re welcome.
Note, among other things, the FCC ID, 2AFHP-BR-C1. Again, hold that thought. By the way, it’s 2AFHP-BR-C1, not the 2AFHPBR-C1 stamped on the underside, which as it turns out is a different device, albeit, judging from the photos, also an automobile interior-tailored product.
From past experience, I’ve learned that the underside of a rubber “foot” is often a fruitful path inside a device, so once again I rolled the dice:
Bingo: my luck continues to hold out!
With all four screws removed (or at least sufficiently loosened; due to all that lingering adhesive, I couldn’t get two of them completely out of the holes), the bottom popped right off:
And the first thing I saw staring back at me was the 3.7-V, 300 mAh Li-polymer “pouch” cell. Why they went with this battery form factor and formulation versus the more common Li-ion “can” is unclear; there was plenty of room in the design for the battery, and flexibility wasn’t necessary:
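Taking the promo page’s 18-hour playback and 1,000-hour standby claims at face value, the 300-mAh capacity implies modest average current draws; here is the back-of-envelope arithmetic (my own rough check, ignoring converter losses and battery derating):

```python
# Rough check: average current implied by a 300 mAh cell against the
# quoted 18 h playback and 1000 h standby figures.
CAPACITY_MAH = 300

playback_ma = CAPACITY_MAH / 18    # average draw during playback
standby_ma = CAPACITY_MAH / 1000   # average draw in standby

print(f"playback: {playback_ma:.1f} mA, standby: {standby_ma:.2f} mA")
```

Roughly 17 mA while playing and 0.3 mA in standby are plausible numbers for a Bluetooth audio SoC plus headphone amplifier, so the specs are at least self-consistent with the cell chosen.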
In pulling the PCB out of the remaining top half of the case:
revealing, among other things, the electret microphone above it:
I inadvertently turned the device on, wherein it immediately went into blue-blinking-LED standby mode (I fortuitously quick-snapped the first still photo while the LED was illuminated; the video below it shows the full blink cadence):
Why standby, versus the initial alternating red/blue pairing-ready sequence that per the user manual (not to mention common sense) it was supposed to first-time power up in? I suspect that since this was a refurbished (not brand new) device, it had been previously paired to something by the prior owner and the factory didn’t fully reset it before shipping it back out to me. A long-press of the topside button got the device into the desired Bluetooth pairing mode:
And another long-press powered the PCB completely off again:
The previously seen bottom side of the PCB was bare (the glued-on battery doesn’t count, in my book) and, as usual for low cost, low profit margin consumer electronics devices like this one, the PCB topside isn’t very component-rich, either. In the upper right is the 3.5 mm audio output jack; to its left, and in the upper left, is the micro-USB charging connector, with the solder sites for the microphone wiring harness between them. Below them is the system’s multi-function power/mode switch. At left is the three-wire battery connector. Slightly below and to its right (and near the center) is the main system processor, Realtek’s RTL8763BFR Bluetooth dual mode audio SoC with integrated DAC, ADC (for the already-seen mic), DSP and both ROM and RAM.
To the right of the Realtek RTL8763BFR is its companion 40-MHz oscillator, with a total of three multicolor LEDs in a column above and below it. In contrast, you may have previously noted five light holes in the top of the device; the diffusion sticker in the earlier image of the inside of the top half of the chassis “bridges the gaps”. Below and to the left of the Realtek RTL8763BFR is the HT4832 audio power amplifier, which drives the aforementioned 3.5 mm audio output jack. The HT4832 comes from one of the most awesome-named companies I’ve yet come across: Jiaxing Heroic Electronic Technology. And at the bottom of the PCB, perhaps obviously, is the embedded Bluetooth antenna.
After putting the device back together, it seemingly still worked fine; here’s what the LEDs look like displaying the pairing cadence from the outside:
All in all, a seemingly straightforward teardown, right? So, then, what’s with the “Identity Deceiver” mention in this writeup’s title? Well, before finishing up, I as-usual hit up the FCC certification documentation, final-action dated January 29, 2018, to see if I’d overlooked anything notable…but the included photos showed a completely different device inside. This time, the bottom side of the PCB was covered with components. And one of them, the design’s area-dominant IC, was from ISSC Technologies, not Realtek. See for yourself.
Confused, I hit up Google to see if anyone else had done a teardown of the Aukey BR-C1. I found one, in video form, published on October 30, 2015. It shows the same design version as in the FCC documentation:
The Aukey BR-C1 product review from the same YouTube creator, published a week-plus earlier, is also worth a view, by the way:
Fortuitously, the YouTube “thumbnail” video for the first video showcases the previously mentioned ISSC Technologies chip:
It’s the IS1681S, a Bluetooth 3.0+EDR multimedia SoC. Here’s a datasheet. ISSC Technologies was acquired by Microchip Technology in mid-2014, and the IS1681S presumably was EOL’d sometime afterward, prompting Aukey’s redesign around Realtek silicon. But how was Aukey able to take the redesign to production without seeking FCC recertification? I welcome insights on this, or anything else you found notable about this teardown, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Teardown: Bluetooth audio dongle keeps it simple
- Bluetooth audio adapters and their creative developers
- Teardown: Tile Mate Bluetooth tracker relies on software
- Teardown: Bluetooth-enhanced LED bulb
- Teardown: Bluetooth smart dimmer
- Teardown: OBD-II Bluetooth adapter
The post A Bluetooth receiver, an identity deceiver appeared first on EDN.