Murata introduces ultra-small chip inductor

At this month’s CES 2025 show, Murata unveiled what is claimed to be the world’s smallest 006003-inch (0.16×0.08 mm) chip inductor. This development offers a 75% volume reduction compared to the previous smallest product, the 008004-inch (0.25×0.125 mm) inductor.
“Following our success in introducing the world’s smallest multilayer ceramic capacitor (MLCC) in September 2024, our engineering teams are now developing a pioneering 006003-inch size chip inductor to further meet market demands,” says Takaomi Toi, general manager of Inductor Product Development at Murata Manufacturing.
“With the creation of the world’s smallest class prototype, we’re confident that this product represents an exciting addition to Murata’s extensive portfolio of market-leading chip inductors. This development continues to demonstrate Murata’s commitment to innovation and also marks a significant milestone in our quest to support the miniaturization and enhanced functionality of future electronic devices,” Toi said.
For more information about this chip inductor development, please contact Murata.
EDA tool tackles 3D IC design challenges

GENIO EVO, an integrated chiplet/package EDA tool from MZ Technologies, addresses thermal and mechanical stress in the pre-layout stage of 3D IC design. Set to be demonstrated at this month’s Chiplet Summit, GENIO EVO is the second generation of MZ’s flagship GENIO cross-fabric platform for system design. Like its predecessor, GENIO EVO enables co-design of chiplets, dies, silicon interposers, packages, and surrounding PCBs to meet area, power, and performance targets.
GENIO EVO integrates seamlessly with existing commercial implementation platforms or custom EDA flows through plugins. Operating at the architectural level, it provides optimal system choices for 2.5D or 3D multi-die designs. A new user interface supports a cross-hierarchical, 3D-aware design methodology that streamlines the system design process. By integrating IC and advanced packaging design, it ensures full system-level optimization, shorter design cycles, faster time-to-manufacturing, and improved yields.
The platform identifies and analyzes thermal and mechanical failures. It supports architectural exploration and what-if analysis in the early design stages to improve implementation predictability. By planning and managing high-pin-count interconnects in complex multi-fabric designs, it anticipates and avoids downstream thermal and mechanical issues.
GENIO EVO is available for immediate licensing.
Add one resistor to allow DAC control of switching regulator output

Whether it’s buck, boost, or buck/boost, internal or external switch, milliamps or tens of amps, a veritable cornucopia of programmable-output switching regulator/converter chips is commercially available. While the required external Ls and Cs (of course) vary wildly from topology to topology and chip to chip, (almost) all use exactly the same basic two-resistor network for output voltage programming, shown in Figure 1. Its example buck-type regulator was picked more or less arbitrarily, so please ignore the L and Cs and just focus on R1, R2, and (later) R3.
Figure 1 The (almost) universal regulator output programming network where Vout = Vsense × (R1/R2 + 1) = 0.8 V × (11.5 + 1) = 10 V.
Wow the engineering world with your unique design: Design Ideas Submission Guide
For reasons known only to gurus of the mystic and marvelous monolithic realm, the precision Vsense feedback node voltage varies from type to type over a roughly 3:1 range, from 0.5 V to 1.5 V. Recommended values for R1 vary too.
The point is the topology doesn’t vary. All (or at least most) conform faithfully to Figure 1. This surprising uniformity becomes very useful if your application requires DAC control of the output voltage. See Figure 2 for how this can be done with a positive polarity DAC and just one added resistor: R3.
Figure 2 Regulator output programming with a DAC and the KISS¹ network where Vout = (Vcmax − Vc) × (R1/R2) = (2.5 to 0 V) × 4 = 0 to 10 V.
Given a reasonable full-scale choice for the DAC (e.g., Vcmax = 2.5 V), numbers for R1 and Vsense from the regulator chip datasheet, and Vomax from your application requirements, here’s the KISS¹ arithmetic:
- R2 = R1 × Vcmax/Vomax
- R3 = R1/(Vomax/Vsense − R1/R2 − 1)
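To put numbers on these two expressions, here’s a minimal Python sketch of the KISS¹ arithmetic using the Figure 1/Figure 2 values (Vsense = 0.8 V, Vomax = 10 V, Vcmax = 2.5 V). The R1 value of 100 kΩ is an assumed example, not a datasheet recommendation, and the feedback-node equation reflects my reading of the topology (R2 from the feedback node to the DAC output, the added R3 from the node to ground), which is consistent with the expressions above:

```python
# KISS(1) arithmetic for DAC control of a switching regulator's output.
# R1 = 100 kOhm is an illustrative assumption; Vsense, Vomax, and Vcmax
# follow the article's Figure 1 and Figure 2 values.
V_SENSE = 0.8    # regulator feedback-node voltage, volts (from datasheet)
V_C_MAX = 2.5    # DAC full-scale output, volts
V_O_MAX = 10.0   # required maximum regulator output, volts
R1 = 100e3       # top feedback resistor, ohms

R2 = R1 * V_C_MAX / V_O_MAX                   # expression 1 -> 25.0 kOhm
R3 = R1 / (V_O_MAX / V_SENSE - R1 / R2 - 1)   # expression 2 -> 13.3 kOhm
assert R3 > 0, "perverse permutation of parameters: R3 came out negative"

def vout(vc):
    # Feedback-node current sum, with the node servoed to Vsense:
    # (Vout - Vs)/R1 + (Vc - Vs)/R2 = Vs/R3, solved for Vout
    return V_SENSE * (1 + R1 / R2 + R1 / R3) - vc * (R1 / R2)

print(f"Vout at DAC full scale (2.5 V): {vout(2.5):.2f} V")  # 0.00 V
print(f"Vout at DAC zero (0 V):         {vout(0.0):.2f} V")  # 10.00 V

def dac_code(code, bits=12):
    # The Vc-to-Vout relation is inverse (see the caveats below), so
    # 1's-complement the digital value: flip every bit in the DAC width
    return ~code & ((1 << bits) - 1)
```

Run with your own chip’s Vsense and recommended R1, and R2 and R3 drop out of the same two lines of arithmetic.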
And, in the grand tradition of the KISS¹ principle, that’s it. Ok, ok. Except maybe for a couple of (minor?) caveats. For example:
- Expression 2 above, and therefore the necessary value for R3, must shake out positive. I can’t think of a practical case where it wouldn’t, but there’s probably some perverse permutation of parameters out there where it won’t, and implementing negative resistors isn’t particularly simple.
- The relation between Vout and Vc is inverse. So, the digital version of Vc must be 1’s complemented (a totally KISS-bit of software arithmetic to flip all the bits, so 0s become 1s, and 1s become 0s) before being written to the DAC register.
- Vin must be adequate for the chosen chip to generate the chosen Vomax when Vc = 0. Duh.
So maybe it’s not really totally KISS¹, just mostly.
¹ Famous KISS principle: Is a footnote really necessary?
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- PWM power DAC incorporates an LM317
- Parsing PWM (DAC) performance: Part 2—Rail-to-rail outputs
- Precision DC motor speed control via pot or DAC
- Designing with a DAC: Settle for the best
How TMDs can transform semiconductor manufacturing

While semiconductors remain in high demand, electronics engineers must stay abreast of associated developments that could eventually affect their work. Case in point: significant advancements in transition metal dichalcogenides (TMDs).
These two-dimensional materials are of particular interest to electronics engineers due to their structural phase and chemical composition; they possess numerous properties advantageous to electronic devices.
2D materials like TMDs are prominent in the future semiconductor manufacturing landscape. Source: Nature
The ongoing semiconductor shortage has caused some engineers to delay projects or alter plans to use readily available supplies rather than those that are challenging to source. However, the geographic concentration of physical resources is a more significant contributor to the shortage than actual scarcity.
When most of the critical raw materials used in semiconductor production come from only a few countries or regions, supply chain constraints happen frequently.
TMD learning curve
If it were possible to make the materials locally rather than relying on outside sources, electronics engineers and managers would enjoy fewer workflow hiccups. So, researchers are focusing on that possibility while exploring TMD capabilities. They are learning how to grow these materials in a lab while overcoming notable challenges.
One concern was making the growth occur without the thickness irregularities that often negatively affect other 2D materials. Therefore, this research team designed a shaped structure that controls the TMD’s kinetic activities during growth.
Additionally, they demonstrated an option to facilitate layer-by-layer growth by creating physical barriers from chemical compound substrates, forcing the materials to grow vertically. The researchers believe this approach could commercialize the production of these 2D materials. Their problem-solving efforts could also encourage others to follow their lead as they consider exploring how to produce and work with TMDs.
Semiconductor manufacturing is a precise process requiring many specific steps. For example, fluorinated gases support everything from surface-etching activities to process consistency. Although many production specifics will remain constant for the foreseeable future, some researchers are interested in finding feasible alternatives.
So, while much of their work centers around furthering the development of next-generation computer chips, succeeding in that aim may require prioritizing different materials, including TMDs. People have used silicon for decades. Although it’s still the best choice for some projects, electronics engineers and other industrial experts see the value in exploring other options.
Learning more about TMDs will enable researchers to determine when and why the materials could replace silicon.
TMD research phase
In one recent case, a team explored TMD defects and how these materials could impact semiconductor performance. Interestingly, the outcomes were not always adverse because some imperfections made the material more electrically conductive.
Another research phase used photoluminescence to verify the light frequencies emitted by the TMDs. One finding was that specific frequencies would characterize five TMDs with defects called chalcogen vacancies.
An increased understanding of common TMD defects and their impacts will allow engineers to determine the best use cases more confidently. Similarly, knowing effective and efficient ways to detect those flaws will support production output and improve quality control.
These examples illustrate why electronics engineers and managers are keenly interested in TMDs and their role in future semiconductors. Even if some efforts are not commercially viable, those involved will undoubtedly learn valuable details that shape their future progress.
Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.
Related Content
- 2D Material Beats Graphene
- Chip Makers Must Learn New Ways to Play ‘D’
- 2D TMD Materials Enabling Future Electronics
- Imec’s Van den hove: Moving to Chiplets to Extend Moore’s Law
- Semiconductor Manufacturers Must Adapt to Shifting Landscape
Tamron’s TAP-in Console: A nexus for camera lens update and control

Camera lenses were originally fully mechanical (and in some cases, still are; witness my Rokinon Cine optics suite). The user manually focused them, manually set the aperture, and manually zoomed them (for non-fixed-focal length optics, that is). Even when the aperture was camera body-controlled—in shutter-priority and fully auto-exposure modes, for example—the linkage between it and the lens was mechanical, not electrical, in nature.
Analogies between lenses and fly-by-wire aircraft are apt, however, as the bulk of today’s lenses are electronics-augmented and, perhaps more accurately, -dependent. Take, for example, optical image stabilization (OIS), which harnesses electromagnets paired with floating lens elements and multiple gyroscope and accelerometer sensors to counteract one-to-multiple possible variants of unwanted camera system movement:
- Rotation around the lens axis (roll)
- Rotation around the horizontal axis, i.e., tilting up and down (pitch)
- Rotation around the vertical axis, i.e., panning left and right (yaw)
- And both horizontal and vertical motion (caused, for example, by imperfect panning)
Not only is OIS within the lens itself image quality-desirable (at admitted tradeoffs of added size, weight and cost), its effectiveness can be further boosted when paired with in-body image stabilization (IBIS) within the camera itself. Olympus’ (now OM Systems’) Sync IS and Panasonic’s conceptually similar, functionally incompatible Dual I.S. are examples of this mutually beneficial coordination, which of course requires real-time bidirectional electronic communication. Why, you might ask, is OIS even necessary if IBIS already exists? The answer is particularly relevant for telephoto lenses, where the deleterious effects of camera system movement are particularly acute given the lens’s narrow angle of view, and where subtle movement may be less effectively corrected at the camera body versus at the other end of the long lens mounted to it.
More modest but no less electronics-dependent lens function examples include:
- Motor-driven autofocus (controlled by focus-determining sensors and algorithms in the camera body)
- Electronics-signaled, motor-based aperture control (some modern lenses even dispense completely with the manual aperture ring, relying solely on body controls instead)
- And motor-assisted zoom
And user-setting optimization (fine-tuned focus, for example) and customization (constraining the focus range to minimize autofocus-algorithm “hunting”, etc.) are also often desirable.
All these functions, likely unsurprisingly to you, are managed by in-lens processors running firmware which benefits from periodic updates to fix bugs, add features, and augment the compatibility list to support new camera models (a particularly challenging task for third-party lens suppliers such as aforementioned Rokinon, Sigma, and Tamron). I’ve come across several lens firmware update approaches, the first two most practically implemented when the camera and lens come from the same manufacturer (i.e., a first-party lens):
- The lens’ new firmware image is downloaded to a memory card, which is inserted in the connected camera and activated via an update menu option or control button sequence
- The lens and body are again mated, but this time the body is then USB-tethered to a computer running a manufacturer-supplied update utility
- The lens is directly USB-tethered to the computer, with a manufacturer-supplied update utility then run. The key downside to this approach, therefore its comparative uncommonness, is that it requires a dedicated USB port on the lens, with both size and potential dust and water ingress impacts
- And the approach we’ll be showcasing today, which relies on a lens manufacturer- and camera mount-specific USB port-inclusive intermediary docking station to handle communications between the lens and computer.
Specifically, today’s teardown victim is a Tamron TAP-in Console, this particular model intended for the Canon EF mount used by my Canon DSLRs and one of my Blackmagic Design video cameras (Nikon mount stock images of the TAP-01 from Tamron’s website follow):
Here are some example screenshots of Tamron’s TAP-in Utility software in action, with my Mac connected to my Tamron 15-30 mm zoom lens via the TAP-01E dock intermediary:
along with my 100-400 mm zoom lens:
And both lenses post-firmware updates through the same utility:
Tamron isn’t the only lens manufacturer that goes the intermediary dock route. Here, for example, is Sigma’s UD-01 USB Dock in action with the company’s Optimization Pro software and two of that supplier’s Canon EF mount zoom lenses (24-105 and 100-400 mm) that I own:
Enough with the conceptual chitter-chatter, let’s get to real-life tearing down, shall we? In addition to the TAP-in Console I’ve already screenshot-shown you in action, which I bought used back in January 2024 from KEH Camera for $34.88, I’d subsequently picked up another one for teardown purposes off eBay, open-box, for about the same price. However, after it arrived and I confirmed it was also functional, I didn’t have the heart to disassemble perfectly good hardware in a potentially permanently destructive manner. I decided instead to hold onto it for future gifting to a friend who also owns Canon EF-mount Tamron lenses, and bought one listed as “faulty, for spares-and-repairs” from MPB for $9. After it arrived, and to satisfy my curiosity, I decided to hook it up. It seems to work just fine, too! Oh well…
By the way, that dock-embedded LED shown in the first photo only illuminates when the TAP-in Utility software is running on the computer and detects a valid lens installed in the mount:
As usual, I’ll start out with some outer-box shots (yes, even though the dock was advertised as a “faulty spares-and-repairs” it still came with the original box, cable and documentation):
Open it up:
(I suspect that in its original brand-new condition there was more padding, etc. inside)
and the contents tumble out (I’m being overly dramatic; I actually lifted them out and placed them on my desk as shown):
Here’s the USB-A to micro-USB power-and-data cable:
Regarding the just-mentioned “data”: I always find it interesting to note whether or not a cable includes a ferrite bead, and to attempt to discern whether there was a logical reason for its presence or absence:
A bit of documentation (here’s a PDF version), supplemented by online video tutorials:
And last, but not least, our patient, already-seen LED end first, and as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
Two side views: one of the micro-USB connector:
and another, of the lens release button:
Finally, here’s the mount end, first body-capped:
and now uncapped and exposed:
See those four screws around the shiny outer circumference? You know what comes next, right?
The now-unencumbered shiny metal ring, as it turns out, consists of two stacked rings. Here are the top and bottom views of the outer (upper) one:
and the even shinier inner (lower) one:
If you’re thinking those look like “springs” on the bottom, you’re not off-base:
With the rings gone, my attention next turned to the two screws at the inside top, holding a black-colored assembly piece in place:
Four more screws around the inside circumference:
In the process of removing them, the locking pin also popped out:
As you can see, the pin is spring-loaded and normally sticks out from the dock’s mount. When you mate a lens with the dock, with the former’s bayonet tabs aligned with the latter’s recesses, the lens mount presses against the pin, retracting it flush with the dock mount. Subsequently rotating the lens into its fully mounted position mates the pin with a matching indentation on the lens mount, allowing the pin to re-extend and locking the lens in place in the process.
Pressing the earlier-seen side release button manually re-retracts the pin, enabling rotation of the lens in the opposite direction for subsequent removal.
Onward. With the four screws removed:
the middle portion of the chassis lifts away, revealing the PCB underneath:
In the process of turning the middle portion upside-down, the release button (now absent its symbiotic locking pin partner) fell out:
I had admittedly been a bit concerned beforehand that the dock might be nothing more than a high-profit-margin (the TAP-in Console brand-new price is $59) “dummy” USB connection-redirector straight to the mount contacts, with the USB transceiver intelligence built into the lens itself. Clearly, and happily so, my worries were for naught:
Two screws hold the contacts assembly in place:
Four more for the PCB itself:
And with that, ladies and gentlemen, we have achieved liftoff:
Let’s zoom in (zoom…camera lens accessory…get it? Ahem…) on that PCB topside first:
As previously mentioned, the TAP-in Console comes in multiple product options for various camera manufacturers’ lens mounts. My pre-dissection working theory, in the hope that the dock wasn’t just a “dummy” USB connection-redirector as feared, was that the base PCB was generic, with camera manufacturer mount hardware customization solely occurring via the contacts assembly. Let’s see if that premise panned out.
At left is the micro-USB connector. At bottom is the connector to the ribbon cable which ends up at the mount contacts assembly (which we’ll see more closely shortly). But what’s that connector at the top for? I ended up figuring out the answer to that question indirectly, in the process of trying (unsuccessfully) to identify the biggest IC in the center of the PCB, marked:
846AZ00
F51116A
DFL
I searched around online for any other published references to “F51116A”, and found only one. It was for the Nikon version of the TAP-in Console (coincidentally the same version shown in the stock images at the beginning of this piece) and was in Japanese (which I can’t read, let alone speak), but Google Translate got me to something I could comprehend. Two things jumped out at me:
- This time, the upper connector was used to ribbon-cable tether to the contacts assembly
- And the IC was marked somewhat differently this time, specifically in the first line
734AZ00
F51116A
DFL
So, here’s my revised working theory. The PCB itself is the same (with confirmation that you’ll shortly see), as are the bulk of the components mounted to it. The main IC is either a PLD or FPGA appropriately programmed for the intended product model, a model-specific ASIC, or a microcontroller with camera mount-specific firmware. And depending on the product variant, either the top or bottom connector (or maybe both in some cases) gets ribbon-cable-populated.
Let’s flip the PCB over now:
Not much to see versus the other side, comparatively, although note the LED at bottom and another (also unpopulated this time) connector to the right of it. And to my recent comments, note that the stamp on the right:
TAMRON
AY042-901
-0000-K1
exactly matches the markings shown on the PCB in the Nikon-version teardown.
About that contacts assembly I keep mentioning…here’s the “action” (electrically relevant) end:
And here’s the seemingly (at least initially) more boring side:
I thought about stopping here. But those two screws kept calling to me:
And I’m glad I listened to them. Nifty!
With that I’ll wrap up and, after the writeup’s published, see if I might be able to get it back together again…functionally, that is…mindful of the Japanese teardown enthusiast’s comments that “The lens lock release switch part was a bit of a pain to assemble (lol).” Reader thoughts are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2023 edition
- Teardown: High-quality and inexpensive security camera
- Teardown: Blink XT security camera
- Konica Minolta Dimage X50 camera teardown and serviceability
- The Godox V1 camera flash: Well-“rounded” with multiple-identity panache
- Perceiving the insides of a wireless camera flash receiver
- Disassembling a premium webcam
- Teardown: Aukey’s DRA1 dash cam
How neon lamps can replace LEDs in AC-powered designs

It’s not difficult to drive an LED indicator from the AC line, but it requires many active and passive components. It also poses safety challenges. EDN and Planet Analog blogger Bill Schweber explains how engineers can replace LEDs with neon lamps to design AC power-on indicators while addressing modern design challenges.
Read full story at EDN’s sister publication, Planet Analog.
Related Content
- The burned-out bulb mystery
- Use Light Bulbs as Current Limiters?
- When the AC line meets the CFL/LED lamp
- An LED ‘Filament’ Bulb: Not a Contradiction
- Gas Discharge Tubes: Old Protection in a New Bottle–Literally
Part 1: A beginner’s guide to the power of IQ data and beauty of negative frequencies

Editor’s Note: This is a two-part series where DI authors Damian and Phoenix Bonicatto explore IQ signal representation and negative frequencies to ease the understanding and development of SDRs.
Part 1 explains the commonly used SDR IQ signal representation and negative frequencies without the complexity of math.
Part 2 (to be published) presents a device that allows you to play with and display live SDR signal spectrums with negative frequencies.
Introduction
Software-defined radio (SDR) firmware makes extensive use of the I/Q representation of received and transmitted signals. This representation can simplify manipulation of the incoming signal. I/Q data also allows us to work with negative frequencies. My goal here is to explain the I/Q representation and negative frequencies without the complexity usually invoked by obscure terms and non-intuitive mathematics. Also, I will present a device that you can build to allow you to play with and display live spectrums with negative frequencies. So, let’s get started.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I/Q and quadrature concepts
What is I/Q data? “I” is short for in-phase and “Q” is short for quadrature. It’s the first set of SDR terms that sounds mysterious and tends to put people off—let’s just call them I and Q. Simply, if you have a waveform, like you see on an oscilloscope, you can break it into two sinusoidal components—one based on a sine, and another based on a cosine. This is done by using the trig “angle sum identity”. The I and Q are the amplitudes of these components, so our signal is now represented as:

A*cos(ωt + φ) = I*cos(ωt) − Q*sin(ωt)

Where “A” is the original signal amplitude, φ is its phase, and:

I = A*cos(φ), Q = A*sin(φ)

We have just created the in-phase signal, I*cos(ωt), and the quadrature signal, Q*sin(ωt). Just to add to the confusion, when we deal with the in-phase and quadrature signals together it is referred to as “quadrature signaling” …sigh.
[Note: In SDR projects, IQ data (or I/Q data) generally refers to the digital data pairs produced at each sample interval.]
Most signal processing textbooks work with exponentials to describe and manipulate signals. For example, a transmitted signal is always “real” and is typically shown as something like:

s(t) = Re{A*e^(j(ωt + φ))}

This is another formula that creates obfuscation and puts off people just starting out in signal processing and SDR. I will say that exponential notation creates cleaner mathematical manipulation, but my preference is to use the trig representation, as I can see the signal in my mind’s eye as I manipulate the equations. Also, explaining your design to people who are not up on signal processing is much easier when using things everyone learned in high school. Note that, although most SDR simulation tools like MATLAB use the exponential for signal processing work, when it comes down to writing C code in an MCU, the trig representation is normally used.
Without going into it, this exponential representation is based on Euler’s formula, which is related to the beautiful and cleverly derived Euler’s equation.
Now, you may wonder why we would go through the trouble of converting the data to this quadrature form and what this form of the signal is good for. In receivers, for example, simply mixing the incoming signal with another frequency and extracting the data has worked since the early days of radio. To answer this, let’s look at a couple of examples.
Example of the benefits of quadrature form
First, when doing a simple mix of an incoming signal you get, as an output, two signals—the sum of the incoming signal and the mix frequency, and the difference of these two frequencies. The following equation demonstrates this by use of the trig product identity:

cos(ω1t)*cos(ωmt) = ½*cos((ω1 − ωm)t) + ½*cos((ω1 + ωm)t)
To continue in your receiver, you typically need to filter one of these out, usually the higher frequency. (The unwanted resultant frequency is often called the image frequency, which is removed by an image filter.) In a digital receiver this filter can take some valuable resources (cycles and memory). Using the I/Q form above, a mix can be created that removes either just the sum or just the difference without filtering.
You can see how this works in Figure 1. First, define the mix signal in an I/Q format:
Mix Signal I part = cos(ωmt)
Mix Signal Q part = sin(ωmt)
Figure 1 Quadrature (complex-to-complex) mix returning the lower frequency.
(There is more to this, but this mix architecture is the basic idea of this technique.)
You can see that only the lower frequency is output from the mixer. If you want the higher frequency and to remove the lower frequency, just change where the minus sign is in the final additions as shown in Figure 2.
Figure 2 Quadrature mix returning the higher frequency.
This quadrature, or complex-to-complex, mixing is a very powerful technique in SDR designs.
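This is also easy to verify numerically. Below is a minimal numpy sketch (my own illustration, not the Part 2 tool): it forms the four real multiplies and two sums of a quadrature mixer, confirms that only the difference frequency emerges, and contrasts that with a classical real mix, which produces both sum and difference products. The frequency values and sign convention are example assumptions; Figure 1’s exact wiring may place the minus sign differently.

```python
import numpy as np

fs = 1000.0                              # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)            # one second of samples
f1, fm = 100.0, 80.0                     # input and mix (LO) frequencies, Hz

# Input and mix signals in I/Q form (amplitude 1, zero phase)
sig_i, sig_q = np.cos(2 * np.pi * f1 * t), np.sin(2 * np.pi * f1 * t)
mix_i, mix_q = np.cos(2 * np.pi * fm * t), np.sin(2 * np.pi * fm * t)

# Quadrature (complex-to-complex) mix: four multiplies, two sums.
# The trig identities collapse these to the difference frequency only.
out_i = sig_i * mix_i + sig_q * mix_q    # = cos(2*pi*(f1 - fm)*t)
out_q = sig_q * mix_i - sig_i * mix_q    # = sin(2*pi*(f1 - fm)*t)

# Classical (real) mix for comparison: sum AND difference both appear
real_mix = sig_i * mix_i

freqs = np.fft.fftfreq(len(t), 1 / fs)
spec = np.abs(np.fft.fft(out_i + 1j * out_q)) / len(t)
print("quadrature mix tones:", freqs[spec > 0.1])                      # [20.]
rspec = np.abs(np.fft.rfft(real_mix)) / len(t)
print("real mix tones:", np.fft.rfftfreq(len(t), 1 / fs)[rspec > 0.1])  # [20., 180.]
```

Swapping the signs (out_i = sig_i*mix_i − sig_q*mix_q, out_q = sig_q*mix_i + sig_i*mix_q) returns the sum frequency instead, which is the Figure 2 variation.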
Next, let’s look at how I/Q data can allow us to play with negative frequencies.
When you perform a classical (non-quadrature) mix, any result that you get cannot go below a frequency of zero. The result will be two new frequencies: the sum of the input frequencies and the absolute value of the difference. This absolute value means the output frequencies cannot go negative. In a quadrature mixer the frequency is not constrained with an absolute value function, and you can get negative frequencies.
Let’s think about what this means if you are sweeping one of the inputs. In the classical mixer as the two input frequencies approach each other, the difference frequency will approach 0 Hz and then start to go back up in frequency. In a quadrature mixer the difference frequency will go right through 0 Hz and continue getting more and more negative.
One implication of this involves bandwidth: in a sampled system, a real signal’s usable bandwidth is the sample rate divided by 2. When using a quadrature representation, you have a working bandwidth that is twice as large, spanning negative as well as positive frequencies. This is especially handy when you have a system where you want to deal with a large range of frequencies at a time. You can move any of the frequencies to baseband; the higher frequencies will stay in their relative position in the positive frequencies; and the lower frequencies will stay in their relative positions in the negative frequencies. You can slide up and down, by mixing, without image filters or corrupting the spectrum with images. Another very powerful technique in SDR designs.
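To see a difference frequency pass right through 0 Hz, here’s a short companion sketch (again my own numpy illustration, with assumed example frequencies): an input below the LO lands at a negative frequency that a complex FFT resolves cleanly, while the real part of the same output folds back to the absolute value, just as a classical mixer would.

```python
import numpy as np

fs = 1000.0                          # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)        # one second of samples
f1, fm = 10.0, 30.0                  # input BELOW the LO: difference goes negative

z_in = np.exp(2j * np.pi * f1 * t)   # quadrature input as I + jQ
z_lo = np.exp(-2j * np.pi * fm * t)  # complex LO (down-mix)
z_out = z_in * z_lo                  # single tone at f1 - fm = -20 Hz

freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
spec = np.fft.fftshift(np.abs(np.fft.fft(z_out))) / len(t)
print("quadrature result:", freqs[spec > 0.5], "Hz")     # [-20.]

# A non-quadrature system sees only |f1 - fm|:
rspec = np.abs(np.fft.rfft(z_out.real)) / len(t)
print("real-only result:", np.fft.rfftfreq(len(t), 1 / fs)[rspec > 0.25], "Hz")  # [20.]
```

Sweep f1 from above fm to below it and the quadrature tone slides smoothly through 0 Hz and keeps going negative, exactly the behavior described above.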
A tool for exploring IQ data
This positive and negative spectrum is very interesting, but unfortunately the basic FFT on your oscilloscope probably won’t display it; it typically only displays positive frequencies. Vector network analyzers (VNAs) can display negative frequencies, but not all labs have one. You can play around in tools like MATLAB, but I usually like something a little closer to actual hardware and more real-time to get a better feel for the concept. A signal generator and a scope always help me. But I already said a scope does not display negative frequencies. Well, the tool presented in Part 2 will allow us to play with I/Q data, negative frequencies, and mixing.
[Editor’s Note: An Arduino-Nano-based device will be presented in Part 2 that can generate IQ samples based upon user frequency, amplitude, and phase settings. This generated data will then display the spectrum showing both positive and negative frequencies. Stay tuned for more!]
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- Exploring software-defined radio (without the annoying RF) – Part 1
- Exploring software-defined radio (without the annoying RF)—Part 2
- SDR Basics Part 3: Transmitters
- The virtual reality of 5G – Part 2 (measurements)
- Ultra-wideband I/Q demodulator improves receiver performance
The 2025 CES: Safety, Longevity and Interoperability Remain a Mess

Once again this year, I’m thankfully reporting on CES (formerly also known by its de-acronym’d “Consumer Electronics Show” moniker, although the longer-winded version is apparently no more) from the remote comfort of my home office. There are admittedly worse places to visit than Las Vegas, especially given its newfound coolness courtesy of the Sphere (which I sadly have yet to experience personally):
That said, given the option to remain here, I’ll take it any day, realizing as I say this that it precludes on-camera cameos…which, come to think of it, is a plus for both viewers and myself!
(great job, Aalyia!)
Anyhoo, I could spend the next few thousand words (I’m currently guesstimating, based on repeated past experience, which in some years even necessitated a multi-part writeup series), telling you about all the new and not-new-but-maturing products and technologies showcased at the show. I’ll still do some of that, in part as case study examples of bigger-picture themes. But, to the title of this writeup, this year I wanted to start by stepping back and discussing three overriding themes that tainted (at least in my mind) all the announcements.
Safety
(Who among you is, like me, old enough to recognize this image’s source without cheating by clicking through first?)
A decade-plus ago, I told you the tale of my remote residence-located Linksys router that had become malware-infected:
Ever since then, I’ve made it a point to collect news tidbits on vulnerabilities and the attack vectors that subsequently exploit them, along with manufacturers’ subpar compromise responses. It likely won’t surprise you to learn that the rate of stories I’ve accumulated has only accelerated over time, as well as broadened beyond routers to encompass other LAN and WAN-connected products. I showcased some of them in two-part coverage published five years ago, for example, and disassembled another (a “cloud”-connected NAS) just a few months back.
The insecure-software situation has become so rampant, in fact, that the U.S. Federal Communications Commission (FCC) just unveiled a new program and associated label, the U.S. Cyber Trust Mark, intended to (as TechCrunch describes it) “help consumers make more informed decisions about the cybersecurity of the internet-connected products they bring into their homes.” Here’s more, from Slashdot’s pickup of the news, specifically referencing BleepingComputer’s analysis:
It’s designed for consumer smart devices, such as home security cameras, TVs, internet-connected appliances, fitness trackers, climate control systems, and baby monitors, and it signals that the internet-connected device comes with a set of security features approved by the National Institute of Standards and Technology (NIST). Vendors will label their products with the Cyber Trust Mark logo if they meet NIST cybersecurity criteria. These criteria include using unique and strong default passwords, software updates, data protection, and incident detection capabilities. Consumers can scan the QR code included next to the Cyber Trust Mark labels for additional security information, such as instructions on changing the default password, steps for securely configuring the device, details on automatic updates (including how to access them if they are not automatic), the product’s minimum support period, and a notification if the manufacturer does not offer updates for the device.
Candidly, I’m skeptical that this program will be successful, even if it survives the upcoming Presidential administration transition (speaking of which: looming trade war fears weighed heavily on folks’ minds at the show) and in spite of my admiration for its honorable intention. As reader “Thinking_J” pointed out in response to my recent teardown of a Bluetooth receiver that has undergone at least one mid-life internal-circuits switcheroo, the FCC essentially operates on the “honor system” in this and similar regards after manufacturers gain initial certification.
One of the root causes of such vulnerabilities, IMHO, is any reliance on open-source code, no matter that doing so may ironically also improve initial software quality. Requoting myself:
Open-source software has some compelling selling points. For one thing, it’s free, and the many thousands of developer eyeballs peering over it generally result in robust code. When a vulnerability is discovered, those same developers quickly fix it. But among those thousands of eyeballs are sets with more nefarious objectives in mind, and access to source code enables them to develop exploits for unpatched, easily identified software builds.
I also suspect that at least some amount of laissez-faire tends to creep into the software-development process when you adopt someone else’s code versus developing your own, especially if you subsequently “forget” to make proper attribution and take other appropriate action regarding that adoption. The result is a tendency to overlook the need to maintain that portion of the codebase as exploits and broader bugs in it are discovered and dealt with by the developer community or, more often than not, the one-and-only developer.
Sometimes, though, code-update neglect is intentional:
Consumer electronics manufacturers as a rule make scant (if any) profit on each unit sold, especially after subtracting the “percentage” taken by retailer intermediaries. Revenue tangibly accrues only as a function of unit volume, not from per-unit profit margin. Initial-sale revenue is sometimes supplemented by after-sale firmware-unlocked feature set updates, services, and other add-ons. But more often than not, a manufacturer’s path to ongoing fiscal stability involves straightforwardly selling you a brand-new replacement/upgrade unit down the road; cue obsolescence by design for the unit currently in your possession.
Which leads to my next topic…
Longevity
One of the products “showcased” in my August 2020 writeup didn’t meet its premature demise due to intentionally unfixed software bugs (as was the case for a conceptually similar product in Belkin’s Wemo line, several examples of which I owned when the exploit was announced). Instead, its early expiration was the result of an intentional termination of the associated “cloud” service done by its retail supplier, Best Buy (Connect WiFi Smart Plug shown above).
More recently, I told you about a similar situation (subsequently resolved positively via corporate buyout and resurrection, I’m happy to note) involving SmartLabs’ various Insteon-branded powerline networking products. Then there was the Spotify Car Thing, which I tore down in early 2023. And right before this year’s CES opened its doors to the masses, ironically, came yet another case study example of the ongoing disappointing trend: the $800 (nope, no refunds) Moxie “emotional support” robot, although open source (which, yes, I know I just critiqued earlier here) may yet come to the rescue for the target 5-10 year old demographic:
Government oversight to the rescue, again (?). Here’s a summary, from Slashdot’s highlight:
Nearly 89% of smart device manufacturers fail to disclose how long they will provide software updates for their products, a Federal Trade Commission staff study found this week. The review of 184 connected devices, including hearing aids, security cameras and door locks, revealed that 161 products lacked clear information about software support duration on their websites.
Basic internet searches failed to uncover this information for two-thirds of the devices. “Consumers stand to lose a lot of money if their smart products stop delivering the features they want,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. The agency warned that manufacturers’ failure to provide software update information for warranted products costing over $15 may violate the Magnuson Moss Warranty Act. The FTC also cautioned that companies could violate the FTC Act if they misrepresent product usability periods. The study excluded laptops, personal computers, tablets and automobiles from its review.
Repeating what I said earlier, I’m skeptical that this effort will be successful, despite my admiration for its honorable intentions. In no small part, my pessimism stems from recent US election results, given that Republicans have (historically, at least) been disproportionally pro-business to the detriment of consumer rights. That said, were the manufacturer phase-out to instead be the result of something other than the shutdown of a proprietary “cloud” service, such as (for example) a no-longer-maintained-therefore-viable (or at-all available, for that matter) proprietary application, the hardware might still be usable if it could alternatively be configured and controlled using industry-standard command and communications protocols.
Which leads to my next topic…
Interoperability
Those of you who read to the bitter end of my recently published “2024 look-back” tome might have noticed a bullet list of topics there that I’d originally also hoped to cover but eventually decided to save for later. The first topic on the list, “Matter and Thread’s misfires and lingering aspirations,” I held back not just because I was approaching truly ridiculous wordcount territory but also because I suspected I’d have another crack at it a short time later, at CES to be precise.
I was right; that time is now. Matter, for those of you not already aware, is:
…a freely available connectivity standard for smart home and IoT (Internet of Things) devices. It aims to improve interoperability and compatibility between different manufacturers and security, always allowing local control as an option.
And Thread? I thought you’d never ask. It’s:
…an IPv6-based, low-power mesh networking technology for Internet of things (IoT) products…
Often used as a transport for Matter (the combination being known as Matter over Thread), the protocol has seen increased use for connecting low-power and battery-operated smart-home devices.
Here’s what I wrote about Matter and Thread a year ago, in my 2024 CES discourse:
The Matter smart home communication standard, built on the foundation of the Thread (based on Zigbee) wireless protocol, had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.
Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin Wemo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors.
Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers (for which, to be precise, and as my earlier Blink example exemplifies, conventional web browser access, vs a proprietary app, is even a bridge too far)…Suffice it to say that I’m skeptical about Matter and Thread’s long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie.
A year later, is the situation better? Not really, candidly. For a more in-depth supplier-sourced perspective, I encourage you to read Aalyia’s coverage of her time spent last week in Silicon Labs’ product suite, including an interview with Daniel Cooley, CTO of the company. Cooley is spot-on when he notes that “it is not unusual for standards adoption to progress slower than desired.” I’ve seen this same scenario play out plenty of times in the past, and Matter and Thread (assuming they eventually achieve widespread success) won’t be the last. I’m reminded, for example, of a quote attributed to Bill Gates, that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10.”
Cooley is also spot-on when he notes that Matter and Thread don’t necessarily need to go together; the Matter connectivity standard can alternatively use Ethernet (either wireless, aka Wi-Fi, or wired) for transport, along with Bluetooth Low Energy for initial device setup purposes (and speaking of wireless smart home network protocols, by the way, a quick aside: check out Z-Wave’s just-announced long range enhancements). And granted, there has been at least progress with both Matter (in particular) and Thread over the past year.
Version 1.4 of the Matter specification, announced last November, promises (quoting from Ars Technica’s coverage) “more device types, improvements for working across ecosystems [editor note: a concept called “Enhanced Multi-Admin”], and tools for managing battery backups, solar panels, and heat pumps”, for example. And at CES, the Connectivity Standards Alliance (CSA), which runs Matter, announced that Apple, Google, and Samsung will accept its certification results for their various “Works With” programs, too. That said, Amazon is notably absent from the CSA’s fast-track certification list. And more generally, Ars Technica was spot-on with the title of its writeup, “Matter 1.4 has some solid ideas for the future home—now let’s see the support.” See you back here this same time next year?
The Rest of the Story
(no, I don’t know what ballet has to do with smart rings, either)
Speaking of “approaching truly ridiculous wordcount territory”, I passed through 2,000 words a couple of paragraphs back, so I’m going to strive to make the rest of this piece more concise. Looking again at the list of potential coverage technology and product topics I scribbled down a few days ago, partway through CES, and after subtracting out the “Matter and Thread” entry I just discussed, I find…16 candidates left. Let’s divide that in two, shall we? Without further ado, and in no particular order save for how they initially streamed out of my noggin:
- Smart glasses: Ray-Ban and Meta’s jointly developed second-generation smart glasses were one of the breakout consumer electronics hits of 2024, with good (initial experience, at least) reason. Their constantly evolving AI-driven capabilities are truly remarkable, on top of the first-generation’s foundational still and video image capture and audio playback support. Unsurprisingly, therefore, a diversity of smart glasses implementations in various function and price-point options, from numerous suppliers and in both nonfunctional mockup, prototype and already-in-production forms, populated 2025 CES public booths and private meeting rooms alike in abundance. I actually almost bought a pair of Ray-Ban Meta glasses during Amazon’s Black Friday…err…week-plus promotion to play around with for myself (and subsequently cover here at EDN, of course). But I decided to hold off for the inevitable barely-used (if at all) eBay-posting markdowns to come. Why? Well, the recent “publicity” stemming from the New Orleans tragedy didn’t help (and here I thought “glassholes” were bad). Even though Meta Ray-Ban offers product options with clear lenses, not just sunglasses, most folks don’t (and won’t) wear glasses all the time, not to mention that battery life limitations currently preclude doing so anyway (and don’t get me started on the embedded batteries’ inherent obsolescence by design). And when folks do wear them, they’re fashion statements. Multiple pairs for various outfits, moods, styles (invariably going in and out of fashion quickly) and the like are preferable, something that’s not fiscally feasible for the masses when the glasses cost several hundred dollars apiece.
- Smart rings: This wearable health product category is admittedly intriguing because unlike glasses (or watches, for that matter), rings are less obvious to others, therefore it’s less critical (IMHO, at least) for the wearer to perfectly match them with the rest of the ensemble…plus you have 10 options of where to wear one (that said, does anyone put a ring on their thumb?). There were quite a few smart rings at CES this year, and next year there’ll probably be more. Do me a favor; before you go further, please go read (but come back afterwards!) The Verge’s coverage of Ultrahuman’s Rare ring family (promo videos at the beginning of this section). The snark is priceless; it was the funniest piece of 2025 CES coverage I saw!
- HDMI: Version 2.2 is en route, with higher bandwidth (96 Gbps) now supportive of 16K resolution displays (along with 4K displays at head-splitting 480 fps), among other enhancements. And there’s a new associated “Ultra96” cable, too. At first, I was a bit bummed when I heard this, due to the additional infrastructure investment that consumers will need to shoulder. But then I thought back to all the times I’d grabbed a random legacy cable out of my box o’HDMI goodies only to discover that, for example, it only supported 1080p resolution, not 4K…even though the next one I pulled out of the box, which looked just like its predecessor down to the exact same length, did 4K without breaking a sweat. And I decided that maybe making a break from HDMI’s imperfect-implementation past history wasn’t such a bad idea, after all…
- 3D spatial audio: Up to this point, Dolby’s pretty much had the 3D spatial audio (which expands—bad pun intended—beyond conventional surround sound to also encompass height) stage all to itself with Atmos, but on the eve of CES, Samsung unveiled the latest fruits of its partnership with Google to promulgate an open source alternative called IAMF, for Immersive Audio Model and Formats, now also known by its marketing moniker, “Eclipsa Audio”. In retrospect, this isn’t a terrible surprise; for high-end video, Samsung has already settled on HDR10+ versus Dolby Vision. But I have questions, specifically as to whether Google and Samsung are really going to be able to deliver something credible that doesn’t also collide with Dolby’s formidable patent portfolio. And I also gotta say that the fact that nobody at Samsung’s booth was able to answer one reporter’s questions doesn’t leave me with a great deal of early-days confidence.
- TVs: Speaking of video, I mentioned more than a decade ago that Chinese display manufacturers were beginning to “make serious hay” at South Korean competitors’ expense, much as those same South Korea-based companies had previously done to their Japanese competitors (that said, it sure was nice to see Panasonic’s displays back at CES!). To wit, TCL has become a particularly formidable presence in the TV market. While it and its competitors are increasingly using viewer-customized ads (logging and uniquely responding to the specific content you’re streaming at the time) and other smart TV “platform” revenue enhancements to counterbalance oft-unprofitable initial hardware prices, TCL takes it to the next level with remarkably bad AI-generated drivel shown on its own “free” (translation: advertising-rife) channel. No thanks, I’ll stick with reruns of The Office. That said, the on-the-fly auto-translation capabilities built into Samsung’s newest displays (along with several manufacturers’ earbuds and glasses) were way cool.
- Qi: Good news/bad news on the wireless charging front. Bad news first: the Wireless Power Consortium recently added the “Qi Ready” category to its Qi2 specification suite. What this means, simply stated, is that device manufacturers (notably, at least at the moment, of Android smartphones) no longer need to embed orientation-optimization magnets in the devices themselves. Instead, as I’m already doing with my Pixel phones, they can alternatively rely on magnets embedded in accompanying cases. On the one hand, as Apple’s MagSafe ecosystem already shows, if you put a case on a phone it needs to have magnets anyway, because the ones in the phone aren’t strong enough to work through the added intermediary case material. And—I dunno—maybe the magnets add notable bill-of-materials cost? Or they interfere with the phone’s speakers, microphones and the like? Or…more likely (cynically, at least), the phone manufacturers see branded cases-with-magnets as a lucrative upside revenue stream? Thoughts, readers? Now for the good news: auto-movable coils to optimize device orientation! How cool is that?
- Lithium battery-based storage systems: Leading suppliers are aggressively expanding beyond portable devices into full-blown home backup systems. EcoFlow’s monitoring and management software looks quite compelling, for example, although I think I’ll skip the solar cell-inclusive hat. And Jackery’s now also selling solar cell-augmented roof tiles.
- Last but not least: (the) RadioShack (licensed brand name, to be precise) is back, baby!
And, now well past 3,000 words, I’m putting this one to bed, saving discussions on robots, Wi-Fi standards evolutions, full-body scanning mirrors with cameras (!!), the latest chips, inevitable “AI” crap and the like for another day. I’ll close with iFixit’s annual “worst of show” coverage:
And with that, I look forward to your thoughts on the things I discussed, saved for later and overlooked alike in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- CES 2025 coverage
- IoT device vulnerabilities are on the rise
- Routers infected with malware: Owners (and manufacturers) beware
- Disassembling a Cloud-compromised NAS
- 2025: A technology forecast for the year ahead
- A Bluetooth receiver, an identity deceiver
- Open Source: Keep It Current Or Suffer The Consequences
- Heartbleed: the wakeup call the open-source community needed?
- Obsolescence by design, defect, or corporate decree
Automotive insights from CES 2025

OEMs are shifting from installing black-box solutions with specialized functions in the more conventional domain architecture to a zone architecture with a function-agnostic processing backbone, where each node handles location-specific data. Along with this trend, there is a push toward optimizing sensor functions, fusing multimodal input data with ML for contextual awareness. Sensors no longer serve one function; instead, they can be leveraged across a series of automotive systems, from driver monitoring systems (DMSs) to smart door access. As a result, camera/sensor count and power consumption are both minimized. A tour of several booths at CES 2025 showed some of the automotive-oriented solutions.
Automotive lighting
Microchip’s intelligent smart embedded LED (ISELED), ISELED light and sensor network (ILaS), and Macroblock lighting solutions can be seen in Figure 1. The ISELED protocol was developed to overcome the issue of requiring an external IC per LED to control the color/brightness of individual LEDs. Instead, Microchip has integrated an intelligent ASIC into each LED, so the entire system can be controlled with a simple 16-bit MCU. The solution allows for more styling control for aesthetics, with additional use cases such as broadcasting the status of a car via text that appears on display-based matrix lighting.
Figure 1: Microchip ISELED lighting solution where all of these LEDs are individually addressable, allowing designers to change the color/brightness levels of each LED.
ADI’s 10BASE-T1S Ethernet to edge bus (E2B) technology has been used as a body control and automotive lighting connectivity solution. And, while this solution is not directly related to LED control, it can be used to update OEM automotive lighting systems that leverage the 10BASE-T1S automotive bus.
In-cabin sensing systems
One of the more pervasive themes was child presence detection (CPD) and occupancy monitoring system (OMS) products, with many companies showing off their ultra-wideband (UWB) detection and/or ranging tech and 60-GHz radar chips. The inspiration here comes from the incessant pressure on OEMs to meet stringent safety regulations. For instance, the Euro NCAP Advanced program will only offer rewards to OEMs for direct sensing systems for CPD. For UWB sensing, the typical setup involved four UWB anchors placed outside the vehicle and two inside to detect a phone equipped with UWB. The NXP booth’s automotive UWB demo can be seen in Figure 2. As shown in the image, the UWB radar can identify the distance of the phone from the UWB anchor and unlock the car from the outside using the UWB ranging feature with time-of-flight (ToF) measurements. The very same principles can be applied to smart door locks and train stations, allowing passengers with pre-purchased train tickets to pass the turnstile from outside the station.
Figure 2: The NXP automotive UWB radar smart car access solution.
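For a feel for the numbers involved, here is a minimal sketch of the two-way-ranging arithmetic behind ToF distance estimation. It is illustrative only: the function and the timing values are my own assumptions, not NXP's implementation.

```python
# Minimal two-way-ranging sketch; names and values are illustrative
# assumptions, not any vendor's API.

C = 299_792_458.0  # speed of light, m/s

def distance_from_tof(round_trip_s: float, reply_delay_s: float) -> float:
    """Estimate anchor-to-phone distance from a two-way ranging exchange."""
    one_way_s = (round_trip_s - reply_delay_s) / 2.0  # remove the phone's turnaround time
    return C * one_way_s

# A 1.04-us round trip minus a 1-us reply delay leaves ~20 ns one way, about 6 m:
print(distance_from_tof(round_trip_s=1.04e-6, reply_delay_s=1.0e-6))  # ~6.0
```

Centimeter-level accuracy then hinges on timestamping the UWB pulses precisely, which is what UWB's wide bandwidth makes practical.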
Qorvo also showed its UWB solution; Figure 3 shows one UWB anchor on a toy car for demonstration purposes. The image also highlights another ADAS application of radar (UWB or 60 GHz): respiration and heartbeat detection.
An engineer at NXP gave a basic explanation of the process: the technology measures signal reflections from occupants to detect, for instance, how often the chest is expanding/contracting to measure breathing. This allows for direct sensing of occupants with algorithms that can discern whether or not a child is present in the vehicle, offering CPD, OMS, intrusion and proximity alerts, and a host of other functions on the established sensor infrastructure. There is no clear answer yet on the number of wireless chips a vehicle will need, but there is a clear requirement that sensors become more intelligent to minimize part count—a single radar chip could eliminate five in-seat weight sensors.
Figure 4: Qorvo’s UWB keyless entry and vitals monitoring solutions in partnership with other companies.
TI's CPD, OMS, and driver monitoring system (DMS) demo can be seen in Figure 5, combining its 60-GHz radar chip with a camera. Naturally, the shorter-wavelength 60-GHz radar offers much finer range resolution, so this system would likely be more accurate in CPD applications, potentially offering fewer false positives. However, possibly the most obvious benefit of using 60-GHz radar is that a single module replaces the six UWB modules for CPD, OMS, intrusion detection, gesture detection, etc. This, however, does not entirely sidestep UWB technology; the ranging aspect of UWB allows for accurate smart door access, something that may be impractical for 60-GHz technology, especially considering the atmospheric absorption at that particular frequency.
Figure 5: TI’s CPD, OMS, and driver monitoring system (DMS) CES demo.
AD and surround view systems
Automotive surround-view cameras for AD and ADAS functions were also presented in a number of booths. Microchip's can be seen in Figure 6, where its serializers are used in three cameras that can transmit up to 8 Gbps. The Microchip deserializers are configured to receive the video data and aggregate it via the Automotive SerDes Alliance Motion Link (ASA-ML) standard to the central compute, or high-performance computer (HPC), mimicking a zonal architecture.
Figure 6: Microchip's ASA-ML standard 360° surround view solution.
ADI also used a serializer/deserializer (SerDes) solution with a gigabit multimedia serial link (GMSL) demo. GMSL's claim to fame is its lightweight nature: the single-strand solution transports up to 12 Gbps over a single bidirectional cable, shaving weight.
Figure 7: ADI GMSL demo aggregating feeds from six cameras into a deserializer board and going into a single MIPI port on the Jetson HPC-platform.
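As a rough sanity check on why six camera feeds fit on one link, here's a back-of-the-envelope bandwidth sketch; the resolution, frame rate, and bits-per-pixel figures are assumptions for illustration, not specs from the demo.

```python
# Back-of-the-envelope aggregate bandwidth for six camera feeds.
# All numbers below are illustrative assumptions, not demo specs.

cameras = 6
width, height = 1920, 1080   # assumed 1080p sensors
fps = 30                     # assumed frame rate
bits_per_pixel = 16          # e.g., uncompressed YUV422

per_camera_bps = width * height * fps * bits_per_pixel
total_bps = cameras * per_camera_bps

print(f"Per camera: {per_camera_bps / 1e9:.2f} Gbps")  # ~1.0 Gbps
print(f"Aggregate:  {total_bps / 1e9:.2f} Gbps")       # ~6.0 Gbps, within a 12-Gbps link
```

Under these assumptions, the aggregated streams sit comfortably under the link's 12-Gbps ceiling, leaving headroom for higher-resolution sensors.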
Using VLMs for AD
Ambarella, a company that specializes in AI vision processors, showed a particularly interesting AD demo that integrated an LLM into the stack. This technology was originally developed by Vislab, an Italian startup that is now an R&D automotive center under Ambarella. The system consisted of six cameras, five radars, and Ambarella's CV3 automotive domain controller for L2+ to L4 autonomy. The use of the vision language model (VLM) LLaVA-OneVision allowed for more context-aware decision making.
Vislab founder Alberto Broggi hosted the demo and explained the benefits of leveraging an LLM in this particular use case: "Suppose you have the best perception in the world, so you can perceive everything; you can understand the position of cars, locate pedestrians, and so on. You will still have problems, because there are situations that are ambiguous." He continued by describing a few of these situations: "If you have a car in front of you in your lane, you don't really know whether or not you can overtake, because it depends on the situation. If it's a broken-down vehicle, you can obviously overtake it. If it's a vehicle that is waiting for a red light, you can't. So you really need some higher-level description and context."
Figure 8 and the video below show one such example of the contextual awareness that a VLM can offer.
Figure 8: Ambarella VLM AD demo, with a use case offering contextual awareness and suggestions.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- ADI’s efforts for a wirelessly upgraded software-defined vehicle
- CES 2025: Moving toward software-defined vehicles
- CES 2025: Approaches towards hardware acceleration
- CES 2025: Day 2 Wrap and Interview with EdgeCortix’s CEO
The post Automotive insights from CES 2025 appeared first on EDN.
CES 2025 coverage

Editors from EDN and our AspenCore sister publications are covering the Consumer Electronics Show (CES). Scroll down to see coverage of this year’s CES!
CES 2025: Day 2 Wrap and Interview with EdgeCortix's CEO
A constant theme at CES 2025 this week has been around the deployment of AI in all kinds of applications, how to drive as much intelligence as possible to the edge, sensor fusion and making everything smart. We saw many large and small companies developing technologies and products to optimize this process, aiming to get more "smarts" or performance with less effort and power.

CES 2025: Approaches towards hardware acceleration
It is clear that support for some kind of hardware acceleration has become paramount for success in breaking into the intelligent embedded edge. Company approaches to the problem run the full gamut from hardware accelerated MCUs with abundant software support and reference code, to an embedded NPU.

CES 2025: It's All About Digital Coexistence, and AI is Real
CES 2025 commenced in Las Vegas, Nev., on Sunday at the Mandalay Bay Convention Center for the trade media with the Consumer Technology Association's annual tech trends survey and forecast. Plus, there was a sneak preview provided to some of the exhibiting companies at the CES Unveiled event.

Integration of AI in sensors prominent at CES 2025
Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year's CES in Las Vegas, Nevada.

Software-defined vehicle (SDV): A technology to watch in 2025
Software-defined vehicle (SDV) technology has been a prominent highlight in the quickly evolving automotive industry. But how much of it is hype, and where is the real and tangible value? CES 2025 in Las Vegas will be an important venue to gauge the actual progress this technology has made with a motto of bringing code on the road.

CES 2025: Wirelessly upgrading SDVs
SDVs rethink underlying vehicle architecture so that cars are broken into zones that will directly service the vehicle subsystems that surround it locally, cutting down wiring, latency, and weight. Another major benefit of this is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with its complexities.

CES 2025: Moving toward software-defined vehicles
TI's automotive innovations are currently focused in powertrain systems; ADAS; in-vehicle infotainment (IVI); and body electronics and lighting. The recent announcements fall into the ADAS with the AWRL6844 radar sensor as well as IVI with the AM275 and AM62D processors and the class-D audio amplifier.

CES 2025: Day 1 Recap with Synaptics, Ceva
EE Times and AspenCore staff are on-site at CES 2025, providing expert coverage on the latest and greatest developments at one of the largest tech events in the world.

CES 2025: A Chat with Siemens EDA CEO Mike Ellow
Siemens showcased its latest PAVE360 digital twin solution this year at CES 2025, lowering the barrier between design efforts that are typically siloed. EE Times had an opportunity to chat with Siemens EDA CEO Mike Ellow about how this approach to design is relevant for the semiconductor industry—especially considering the recent uptick in using AI tools at every level of a system to dynamically assess the trickle up/down effects of design adjustments.

CES 2025: An interview with Si Labs' Daniel Cooley
At the forefront of many of the CES wireless solutions is WiFi's newest iteration (WiFi 6), BLE and BLE audio for their already-established place in consumer devices. A chat with Silicon Labs CTO Daniel Cooley illuminated the company's presence and future in IoT and the intelligent edge.
The post CES 2025 coverage appeared first on EDN.
Integration of AI in sensors prominent at CES 2025

Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year’s CES in Las Vegas, Nevada.
See full story at EDN’s sister publication, Planet Analog.
Related Content
- The sensor parade at CES 2025
- Unlocking New Possibilities with Smart Sensors
- Designer’s Guide to Industrial IoT Sensor Systems
- Smart sensors need smart power integrity analysis
- Smart sensor emulation speed design of signal conditioning systems
The post Integration of AI in sensors prominent at CES 2025 appeared first on EDN.
CES 2025: Approaches towards hardware acceleration

Edge computing has naturally been a hot topic at CES, with companies highlighting a myriad of use cases where the pre-trained edge device runs inference locally to produce the desired output, never once interacting with the cloud. The complexity of these nodes has grown to include not only multimodal support, with fusion and collaboration between sensors for context-aware devices, but also multiple cores to ratchet up the compute power.
Naturally, hardware acceleration has become desirable, with embedded engineers craving solutions that ease the design and development burden. The solutions vary; many veer towards developing applications with servers in the cloud that are then virtualized or containerized to run at the edge. Ultimately, there is no one-size-fits-all solution for any edge compute application.
It is clear that support for some kind of hardware acceleration has become paramount for success in breaking into the intelligent embedded edge. Company approaches to the problem run the full gamut from hardware accelerated MCUs with abundant software support and reference code, to an embedded NPU.
Table 1 highlights this with a list of a few companies and their hardware acceleration support.
| Company | Hardware acceleration | Implemented in | Throughput | Software |
|---|---|---|---|---|
| NXP | eIQ Neutron NPU | Select MCX, i.MX RT crossover MCUs, and i.MX applications processors | 32 Ops/cycle to over 10,000 Ops/cycle | eIQ Toolkit, eIQ Time Series Studio |
| STMicroelectronics | Neural-ART Accelerator NPU | STM32N6 | Up to 600 GOPS | ST Edge AI Suite |
| Renesas | DRP-AI | RZ/V2MA, RZ/V2L, RZ/V2M | – | DRP-AI Translator, DRP-AI TVM |
| Silicon Labs | Matrix Vector Processor, AI/ML co-processor | BG24 and MG24 | – | MVP Math Library API, partnership with Edge Impulse |
| TI | NPU | TMS320F28P55x, F29H85x, C2000, and more | Up to 1200 MOPS (4bW×8bD); up to 600 MOPS (8bW×8bD) | Model Composer GUI or Tiny ML Modelmaker |
| Synaptics | NPU | Astra (SL1640, SL1680) | 1.6 to 7.9 TOPS | Open software with complete GitHub project |
| Infineon | Arm Ethos-U55 micro-NPU processor | PSOC Edge MCU series: E81, E83, and E84 | – | ModusToolbox |
| Microchip | AI-accelerated MCU, MPU, DSC, or FPGA | 8-, 16-, and 32-bit MCUs, MPUs, dsPIC33 DSCs, and FPGAs | – | MPLAB Machine Learning Development Suite, VectorBlox Accelerator Software Development (for FPGAs) |
| Qualcomm | Hexagon NPU | Oryon CPU, Adreno GPU | 45 TOPS | Qualcomm Hexagon SDK |
Table 1: Various companies' approaches to hardware acceleration.
Synaptics, for instance, has its Astra platform, which is beginning to incorporate Google's multi-level intermediate representation (MLIR) framework. "The core itself is supposed to take in models and operate in a general-purpose sense. It's sort of like an open RISC-V core based system, but we're adding an engine alongside it, so the compiler decides whether it goes to the engine or whether it works in a general-purpose sense," said Vikram Gupta, senior VP and general manager of IoT processors and chief product officer. "We made a conscious choice that we wanted to go with open frameworks. So, whether it's a PyTorch model or a TFLite model, it doesn't matter. You can compile it to the MLIR representation, and then from there go to the back end of the engine." One of the company's CES demos can be seen in Figure 1.
Figure 1: A smart camera solution showing the Grinn SoM that uses the Astra SL1680 and software from Arcturus to provide both identification and tracking. New faces are assigned an ID and an associated confidence interval that will adjust according to the distance from the camera itself.
TI showcased its TMS320F28P55x C2000 real-time control MCU series, which integrates an NPU, in an arc fault detection solution for solar inverter applications. The system performs power conversion while simultaneously running real-time arc fault detection using AI. The solution follows the standard process of obtaining data, labeling it, and training the arc fault models, which are then deployed onto the C2000 device (Figure 2).
Figure 2: TI's solar arc fault detection edge AI solution.
One of Microchip's edge demos detected true touches in the presence of water using its mTouch algorithm in combination with the PIC16LF1559 MCU (Figure 3). Another highlighted solution, in partnership with Edge Impulse, used the FOMO ML architecture to perform object detection in a truck loading bay. Other companies, such as Nordic Semiconductor, have also partnered with Edge Impulse to ease the process of labeling, training, and deploying AI on their hardware. Edge Impulse has also eased the process of leveraging NVIDIA TAO models to adapt well-established AI models to a specific end application on any Edge-Impulse-supported target hardware.
Figure 3: Some of Microchip’s edge AI solutions at CES 2025. Truck loading bay augmented by AI in partnership with Edge Impulse (left) and a custom-tailored Microchip solution using their mTouch algorithm to differentiate between touch and water (right).
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- Software-defined vehicle (SDV): A technology to watch in 2025
- ADI’s efforts for a wirelessly upgraded software-defined vehicle
- CES 2025: Moving toward software-defined vehicles
The post CES 2025: Approaches towards hardware acceleration appeared first on EDN.
Dev kit uses backscatter Wi-Fi for low-power connectivity

HaiLa Technologies has introduced the EVAL2000 development board, featuring its BSC2000 passive backscatter Wi-Fi chip and ST’s STM32U0 MCU. The platform empowers developers and researchers to create ultra-low-power connected sensor applications over Wi-Fi.
The BSC2000 is a monolithic chip that combines analog front-end and digital baseband components to implement HaiLa's backscatter protocol for 802.11 1-Mbps Direct Sequence Spread Spectrum (DSSS) over Wi-Fi. Using backscattering, it enables low-power communication by reflecting existing Wi-Fi signals rather than generating its own, allowing devices to transmit data with minimal energy consumption. Leveraging readily available, standard Wi-Fi infrastructure, the BSC2000 backscatter Wi-Fi chip collects and transmits sensor data with power efficiency that extends the life of battery-operated sensors.
The EVAL2000 development board accelerates prototyping with GPIO, I2C, and SPI sensor interfaces. Sensor integration is handled through firmware on the MCU. The kit also includes an onboard temperature/humidity sensor.
The BSC2000 EVAL2000 development kit is available for preorder, with shipping anticipated for Q1 2025. For more information on the backscatter Wi-Fi chip and development kit, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Dev kit uses backscatter Wi-Fi for low-power connectivity appeared first on EDN.
SoC supports multiple wireless protocols

The Talaria 6 family of SoCs from InnoPhase provides Wi-Fi 6, Bluetooth 6.0, Thread, and Zigbee connectivity, along with PSA Level 2 and Level 3 security. Powered by an Arm Cortex-M33 processor and a rich peripheral suite, the SoCs offer the computational performance needed for real-time, on-chip edge AI tasks, including predictive maintenance, sensor analytics, and smart power management.
Talaria 6 wireless SoCs support Wi-Fi 6 (802.11ax) and are Wi-Fi 7 (802.11be) ready, achieving ultra-low power and high-performance connectivity. Integrated digital CMOS radio technology ensures robust throughput in noisy, high-density environments, making them well-suited for smart thermostats, video cameras, and sensors.
Single and dual-band options (2.4 GHz/5 GHz) offer flexible band selection based on use case and network conditions. IEEE 802.11be extensions and multi-link operation improve throughput, lower latency, and increase reliability in congested environments.
Additionally, the SoCs support Bluetooth 6.0, Bluetooth Classic, Thread, and Zigbee mesh networks, enabling seamless integration with a wide range of IoT devices. To protect against cybersecurity threats, Talaria 6 devices feature hardware-based encryption, secure boot, and tamper resistance, safeguarding sensitive data and meeting PSA Level 2 and Level 3 security standards.
The INP6120 2.4-GHz Wi-Fi 6 SoC is expected to sample in Q2 2025, with production starting in Q4 2025. The INP6220 dual-band 2.4/5-GHz Wi-Fi 6 SoC will sample in the second half of 2025, with production beginning in the first half of 2026.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post SoC supports multiple wireless protocols appeared first on EDN.
Synaptics partners with Google to advance edge AI

Synaptics is pairing Google’s ML core with its Astra AI-native hardware and open-source software to simplify context-aware IoT device development. The MLIR-compliant core on Astra hardware accelerates AI processing for vision, image, voice, sound, and other modalities. This combination enables intuitive interaction in wearables, appliances, entertainment systems, embedded hubs, monitoring, and control across consumer, automotive, enterprise, and industrial applications.
The Astra AI-native compute platform for IoT integrates scalable, low-power edge compute silicon with open-source, user-friendly software, robust tools, a strong partner ecosystem, and wireless connectivity. Built on Synaptics’ expertise in neural networks, proven AI hardware, and compiler design for IoT, the platform also supports a wide range of modalities with refined in-house solutions. Google’s ML core, a highly efficient open-source machine learning core, is MLIR-compliant, enhancing compatibility with modern compilers.
For more information about Synaptics’ Astra embedded processors for AI-native IoT, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Synaptics partners with Google to advance edge AI appeared first on EDN.
Mitsubishi samples high-voltage IGBT modules

Mitsubishi announced that it has begun shipping samples of two new S1-Series high-voltage IGBT modules rated at 1.7 kV. These two components are useful for large industrial equipment, such as railcars and DC power transmitters. With proprietary IGBT devices and advanced insulation structures, the S1-Series modules enhance reliability, minimize power loss, and reduce thermal resistance, supporting more reliable and efficient operation of inverters in large industrial equipment.
The S1-Series incorporates Mitsubishi’s Relaxed Field of Cathode (RFC) diode, increasing the Reverse Recovery Safe Operating Area (RRSOA) by 2.2 times over previous models, improving inverter reliability. Additionally, an IGBT element with a Carrier Stored Trench Gate Bipolar Transistor (CSTBT) structure reduces power loss and thermal resistance, enabling more efficient inverter operation. The upgraded insulation structure boosts insulation voltage resistance to 6.0 kVRMS—1.5 times higher than earlier products—allowing more flexible insulation designs for compatibility with a broader range of inverter types.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Mitsubishi samples high-voltage IGBT modules appeared first on EDN.
ADI’s efforts for a wirelessly upgraded software-defined vehicle
In-vehicle systems have massively grown in complexity, with more installed speakers, microphones, cameras, and displays, and more compute burden to process the necessary information and provide the proper, often time-sensitive output. The unfortunate side effect of this complexity is the massive increase in ECUs and the subsequent cabling to and from each allocated subsystem (e.g., engine, powertrain, braking, etc.). The impracticality of this approach has become apparent, and more OEMs are shifting away from these domain-based architectures and subsequently from traditional automotive buses such as local interconnect network (LIN) and controller area network (CAN) for ECU communications, FlexRay for x-by-wire systems, and media oriented systems transport (MOST) for audio and video systems. SDVs rethink the underlying vehicle architecture so that cars are broken into zones that directly service the vehicle subsystems that surround them locally, cutting down wiring, latency, and weight. Another major benefit of this is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with its complexities.
ADI's approach to zonal architectures
This year at CES, EDN spoke with Yasmine King, VP of automotive cabin experience at Analog Devices (ADI). The company is closely working on the underlying connectivity solutions that allow vehicle manufacturers to shift from domain architectures to zonal ones with its Ethernet-to-edge bus (E2B), automotive audio bus (A2B), and gigabit multimedia serial link (GMSL) technology. "Our focus this year is to show how we are adding intelligence at the edge and bringing the capabilities from bridging the analog of the real world into the digital world. That's the vision of where automotive wants to get to; they want to be able to create experiences for their customers, whether it's the driving experience or the back seat passenger experience. How do you help create these immersive and safe experiences that are personalized to each occupant in the vehicle? In order to do that, there has to be a fundamental change of what the architecture of the car looks like," said King. "So in order to do this in a way that is sustainable, for mobility to remain green, remain long battery range, good fuel efficiency, you have to find a way of transporting that data efficiently, and the E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting."
E2B: Remote control protocol solution for 10BASE-T1S
Based on the OPEN Alliance 10BASE-T1S physical layer (PHY), the E2B bus aims to remove the need for MCUs by centralizing the software in the high-performance computer (HPC), or central compute (Figure 1). "The E2B bus is the only remote control protocol solution available on the market today for the 10BASE-T1S, so it's a very strong position for us. We just released our first product in June of this past year, and we see this as a very fundamental way to help the industry transform to zonal architecture. We're working with the OPEN Alliance to be part of that remote control definition." These transceivers will integrate low complexity Ethernet (LCE) hardware for remote operation and, naturally, can be used on the same bus as any other 10BASE-T1S-compliant product.
BMW has already adopted the E2B bus for its ambient lighting system, and King mentioned that there has already been further adoption by other OEMs that is not yet public. "The E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting. Honestly, there's about 50 or 60 different applications inside the vehicle." She mentioned how E2B is often used for ambient lighting today, but there are many other potential applications, such as driver monitoring systems (DMSs) that might detect a sleeping driver via in-vehicle biometric capabilities and then respond with a series of measures to wake them up; E2B allows OEMs to apply these measures with an OTA update. "Without E2B, you'd have to not only update the DMS, but you'd have to update the multiple nodes that are controlling the ambient light. The owner might have to take it back into the shop to apply the updates; it just takes longer and is more of a hassle. With E2B, it's a single OTA update that is an easy, quick download to add safety features, so it's more realistic to get that safer, more immersive driver experience." The goal for ADI is to move all the software from the edge nodes to a central location for updates.
Figure 1: EDN editor Aalyia Shaukat (left) and VP of automotive cabin experience Yasmine King (right) in front of a suspension control demo with four edge nodes that sense the location of the weighted ball and send the information back to the HPC, which issues commands to control the motors.
A2B: Audio system based on 100BASE-T1
Based upon the 100BASE-T1 standard, the A2B audio bus follows a similar concept of connecting edge nodes, with a specialization in sound, limiting the installation of the weighty shielded analog cables that run to and from the many speakers and microphones in today's vehicles for modern functions such as active noise cancellation (ANC) and road noise cancellation (RNC). "We have RNC algorithms that are connected through A2B, and it's a very low latency, highly deterministic bus. It allows you to get the inputs from, say, the wheel base, where you're listening for the noise, to the brain of the central compute very quickly." King mentioned how audio systems require extremely low latencies for an enhanced user experience: "your ears are very susceptible to any small latency or distortion." The technology has more maturity than the newer E2B bus and has therefore seen more adoption: "A2B is a technology that is utilized across most OEMs; the top 25 OEMs are all using it, and we've shipped millions of ICs." ADI is working on a second iteration of the A2B bus that multiplies the data rate of the previous generation, likely reflecting the maturation of the 1000BASE-T1 automotive standard, which is meant to reach 1 Gbps. When asked about the data rate, King responded, "I'm not sure exactly what we are publicly stating yet, but it will be a multiplier."
GMSL: Single-wire SerDes display solution
GMSL is the in-vehicle serializer/deserializer (SerDes) video solution that shaves off the significant wiring typically required by cameras and the subsequent sensor infrastructure (Figure 2). "As you're moving towards autonomous driving and you want to replace a human with intelligence inside the vehicle, you need additional sensing capabilities along with radar, LiDAR, and cameras to be that perception sensing network. It's all very high bandwidth, and it needs a solution that can be transmitted in a low-cost, lightweight cable." Following a similar theme as the E2B and A2B buses, using a single cable to manage a cluster display or an in-vehicle infotainment (IVI) human-machine interface (HMI) minimizes the potential weight issues that could damage range/fuel efficiency. King finished by mentioning one overlooked benefit of lowering the weight of vehicle harnessing: "The other piece that often gets missed is it's very heavy during manufacturing. When you move over 100 pounds within the manufacturing facilities, you need different safety protocols. This adds expense and safety concerns for the individuals who have to pick up the harness, where now you have to get a machine over to pick up the harness because it's too heavy."
Figure 2: GMSL demo aggregating feeds from six cameras into a deserializer board going into a single MIPI port on the Jetson HPC-platform.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
The post ADI’s efforts for a wirelessly upgraded software-defined vehicle appeared first on EDN.
PWMpot approximates a Dpot

Digital potentiometers ("Dpots") are a diverse and useful category of digital/analog components with up to 10-bit resolution, element resistance from 1 kΩ to 1 MΩ, and voltage capability up to and beyond ±15 V. However, most are limited to 8 bits, monopolar (typically 0 V to +5 V) signal levels, and 5 kΩ to 100 kΩ resistances with loose tolerances of ±20 to 30%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This design idea describes a simple and inexpensive Dpot-like alternative. It has limitations of its own (mainly being restricted to relatively low signal frequencies) but offers useful and occasionally superior performance in areas where actual Dpots tend to fall short. These include parameters like bipolar signal range, terrific differential nonlinearity, tight resistance accuracy, and programmable resolution. See Figure 1.
Figure 1 PWM drives opposing-phase CMOS switches and RC network to simulate a Dpot
RC ripple filtering limits frequency response to typically tens to hundreds of Hz.
Switch U1b connects wiper node W to node B when PWM = 1, and to A when PWM = 0. Letting the PWM duty factor, P = 0 to 1, and assuming no excessive loading of W:
Vw = P(Vb – Va) + Va
Meanwhile, switch U1a connects W to node A when PWM = 1, and to B when PWM = 0, thus 180° out of phase with U1b. Due to AC coupling, this has no effect on pot DC output, but the phase inversion relative to U1b delivers active ripple attenuation as described in "Cancel PWM DAC ripple with analog subtraction."
The minimum RC time-constant required to attenuate ripple to no more than 1 least significant bit (lsb) for any given N = number of PWM bits of resolution and Tpwm = PWM period is given by:
RC = Tpwm × 2^(N/2 – 2)
For example:
for N = 8, Fpwm = 10 kHz
RC = (10 kHz)^-1 × 2^(8/2 – 2) = 100 µs × 2^2 = 400 µs
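For convenience, here's a small Python sketch of the same sizing math; the function name is mine, and the values mirror the example above.

```python
# Sketch of the ripple-sizing formula above; the function name is mine.

def min_rc_seconds(n_bits: int, f_pwm_hz: float) -> float:
    """Smallest RC that keeps PWM ripple at or below 1 lsb of N-bit resolution."""
    t_pwm = 1.0 / f_pwm_hz
    return t_pwm * 2 ** (n_bits / 2 - 2)

rc = min_rc_seconds(n_bits=8, f_pwm_hz=10e3)
print(f"RC = {rc * 1e6:.0f} us")          # 400 us, matching the text

# Picking R = 10 kOhm from the workable range discussed below gives the filter C:
print(f"C = {rc / 10e3 * 1e9:.0f} nF")    # 40 nF
```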
The maximum acceptable value for R is dictated by the required Vw voltage accuracy under load. Minimum R is determined by:
- Required resistance accuracy after factoring in the variability of U1b switch Ron (r), which is 40 ±40 Ω for the HC4053 powered as in Figure 1.
- Required integral nonlinearity (INL) as affected by switch-to-switch Ron variation, which is just 5 Ω for the HC4053 as powered here.
R = 1 kΩ to 10 kΩ would be a workable range of choices for N = 8-bit resolution. N is programmable.
The net result is the equivalent circuit shown in Figure 2. Note that, unlike a mechanical pot or Dpot, where output resistance varies dramatically with wiper setting, the PWMpot’s output resistance (R +r) is nominally constant and independent of setting.
Figure 2 The PWMpot’s equivalent circuit where r = switch Ron, P = PWM duty factor, and where the ripple filter capacitors are not shown.
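To quantify the loading error implied by that constant output resistance, here's a minimal sketch of the Figure 2 divider math, assuming a purely resistive load; all component values are illustrative.

```python
# Loading-error sketch for the equivalent circuit; values are illustrative.

def vw_loaded(p, va, vb, r_ohms, ron_ohms, r_load_ohms):
    """Open-circuit Vw = P*(Vb - Va) + Va, then divided down by the load."""
    vw_open = p * (vb - va) + va
    return vw_open * r_load_ohms / (r_load_ohms + r_ohms + ron_ohms)

# P = 0.75 between ±5-V rails, R = 10 kOhm, Ron = 40 ohms, 1-MOhm load:
print(vw_loaded(0.75, -5.0, 5.0, 10e3, 40.0, 1e6))  # ~2.475 V vs. 2.5 V ideal, ~1% low
```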
Funny footnote: While pondering a name for this idea, I initially thought “PWMpot” was too long and considered making it shorter and catchy-er by dropping the “WM.” But then, after reading the resulting acronym out loud, I decided it was maybe a little too catchy.
And put the “WM” back!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM power DAC incorporates an LM317
- Cancel PWM DAC ripple with analog subtraction but no inverter
- Cancel PWM DAC ripple with analog subtraction—revisited
The post PWMpot approximates a Dpot appeared first on EDN.
AI at the edge: It’s just getting started

Artificial intelligence (AI) is expanding rapidly to the edge. This generalization conceals many more specific advances—many kinds of applications, with different processing and memory requirements, moving to different kinds of platforms. One of the most exciting instances, happening soonest and with the most impact on users, is the appearance of TinyML inference models embedded at the extreme edge—in smart sensors and small consumer devices.
Figure 1 The TinyML inference models are being embedded at the extreme edge in smart sensors and small consumer devices. Source: PIMIC
This innovation is enabling valuable functions such as keyword spotting (detecting spoken keywords) or performing environmental-noise cancellation (ENC) with a single microphone. Users treasure the lower latency, reduced energy consumption, and improved privacy.
Local execution of TinyML models depends on the convergence of two advances. The first is the TinyML model itself. While most of the world’s attention is focused on enormous—and still growing—large language models (LLMs), some researchers are developing really small neural-network models built around hundreds of thousands of parameters instead of millions or billions. These TinyML models are proving very capable on inference tasks with predefined inputs and a modest number of inference outputs.
The second advance is in highly efficient embedded architectures for executing these tiny models. Instead of a server board or a PC, think of a die small enough to go inside an earbud and efficient enough to not harm battery life.
Several approaches
There are many important tasks involved in neural-network inference, but the computing workload is dominated by matrix multiplication operations. The key to implementing inference at the extreme edge is to perform these multiplications with as little time, power, and silicon area as possible. The key to launching a whole successful product line at the edge is to choose an approach that scales smoothly, in small increments, across the whole range of applications you wish to cover.
It is the nature of the technology that models get larger over time.
System designers are taking different approaches to this problem. For the tiniest of TinyML models in applications that are not particularly sensitive to latency, a simple microcontroller core will do the job. But even for small models, MCUs with their constant fetching, loading, and storing are not an energy-efficient approach. And scaling to larger models may be difficult or impossible.
For these reasons many choose DSP cores to do the processing. DSPs typically have powerful vector-processing subsystems that can perform hundreds of low-precision multiply-accumulate operations per cycle. They employ automated load/store and direct memory access (DMA) operations cleverly to keep the vector processors fed. And often DSP cores come in scalable families, so designers can add throughput by adding vector processor units within the same architecture.
But this scaling is coarse-grained, and at some point, it becomes necessary to add a whole DSP core or more to the design, and to reorganize the system as a multicore approach. And, not unlike the MCU, the DSP consumes a great deal of energy in shuffling data between instruction memory and instruction cache and instruction unit, and between data memory and data cache and vector registers.
For even larger models and more latency-sensitive applications, designers can turn to dedicated AI accelerators. These devices, generally either based on GPU-like SIMD processor arrays or on dataflow engines, provide massive parallelism for the matrix operations. They are gaining traction in data centers, but their large size, their focus on performance over power, and their difficulty in scaling down significantly make them less relevant for the TinyML world at the extreme edge.
Another alternative
There is another architecture that has been used with great success to accelerate matrix operations: processing-in-memory (PiM). In this approach, processing elements, rather than being clustered in a vector processor or pipelined in a dataflow engine, are strategically dispersed at intervals throughout the data memory. This has important benefits.
First, since processing units are located throughout the memory, processing is inherently highly parallel. And the degree of parallel execution scales smoothly: the larger the data memory, the more processing elements it will contain. The architecture needs not change at all.
In AI processing, 90–95% of the time and energy is consumed by matrix multiplication, since every parameter in a layer must be fetched from memory and multiplied against that layer's input activations. PiM addresses this inefficiency by eliminating the constant data movement between memory and processors.
By storing AI model weights directly within memory elements and performing matrix multiplication inside the memory itself as input data arrives, PiM significantly reduces data transfer overhead. This approach not only enhances energy efficiency but also improves processing speed, delivering lower latency for AI computations.
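As a back-of-the-envelope illustration of why weight traffic dominates, consider a toy fully-connected network; the layer sizes below are arbitrary assumptions, not a model from the article.

```python
# MAC and parameter counts for a toy dense network; layer sizes are assumptions.

layers = [(64, 128), (128, 128), (128, 10)]  # (inputs, outputs) per dense layer

macs = sum(n_in * n_out for n_in, n_out in layers)
params = sum(n_in * n_out + n_out for n_in, n_out in layers)  # weights + biases

print(f"{macs:,} multiply-accumulates per inference across {params:,} parameters")
# In a conventional core, every one of those weights crosses the memory
# interface on each inference; PiM computes where the weights already live.
```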
To fully leverage the benefits of PiM, a carefully designed neural network processor is crucial. This processor must be optimized to seamlessly interface with PiM memory, unlocking its full performance potential and maximizing the advantages of this innovative technology.
Design case study
The theoretical advantages of PiM are well established for TinyML systems at the network edge. Take the case of Listen VL130, a voice-activated wake-word inference chip, which is also PIMIC's first product. Fabricated on TSMC's standard 22-nm CMOS process, the chip's always-on voice-detection circuitry consumes 20 µA.
This circuit triggers a PiM-based wake word-inference engine that consumes only 30 µA when active. In operation, that comes out to a 17-times reduction in power compared to an equivalent DSP implementation. And the chip is tiny, easily fitting inside a microphone package.
Figure 2 Listen VL130, connected to external MCU in the above diagram, is an ultra-low-power keyword-spotting AI chip designed for edge devices. Source: PIMIC
PIMIC’s second chip, Clarity NC100, takes on a more ambitious TinyML model: single-microphone ENC. Consuming less than 200 µA, which is up to 30 times more efficient than a DSP approach, it’s also small enough for in-microphone mounting. It is scheduled for engineering samples in January 2025.
Both chips depend for their efficiency upon a TinyML model fitting entirely within an SRAM-based PiM array. But this is not the only way to exploit PiM architectures for AI, nor is it anywhere near the limits of the technology.
LLMs at the far edge?
One of today’s undeclared grand challenges is to bring generative AI—small language models (SLMs) and even some LLMs—to edge computing. And that’s not just to a powerful PC with AI extensions, but to actual edge devices. The benefit to applications would be substantial: generative AI apps would have greater mobility while being impervious to loss of connectivity. They could have lower, more predictable latency; and they would have complete privacy. But compared to TinyML, this is a different order of challenge.
To produce meaningful intelligence, LLMs require training on billions of parameters. At the same time, the demand for AI inference compute is set to surge, driven by the substantial computational needs of agentic AI and advanced text-to-video generation models like Sora and Veo 2. So, achieving significant advancements in performance, power efficiency, and silicon area (PPA) will necessitate breakthroughs in overcoming the memory wall—the primary obstacle to delivering low-latency, high-throughput solutions.
Figure 3 Here is a view of the layout of Listen VL130 chip, which is capable of processing 32 wake words and keywords while operating in the tens of microwatts, delivering energy efficiency without compromising performance. Source: PIMIC
At this technology crossroads, PiM technology is still important, but to a lesser degree. With these vastly larger matrices, the PiM array acts more like a cache, accelerating matrix multiplication piecewise. But much of the heavy lifting is done outside the PiM array, in a massively parallel dataflow architecture. And there is a further issue that must be resolved.
At the edge, in addition to facilitating model execution, it's of primary importance to resolve the bandwidth and energy issues that come with scaling to massive memory sizes. Meeting all these challenges can improve an edge chip's power-performance-area efficiency by more than 15 times.
PIMIC’s studies indicate that models with hundreds of millions to tens of billions of parameters can in fact be executed on edge devices. It will require 5-nm or 3-nm process technology, PiM structures, and most of all a deep understanding of how data moves in generative-AI models and how it interacts with memory.
PiM is indeed a silver bullet for TinyML at the extreme edge. But it’s just one tool, along with dataflow expertise and deep understanding of model dynamics, in reaching the point where we can in fact execute SLMs and some LLMs effectively at the far edge.
Subi Krishnamuthy is the founder and CEO of PIMIC, an AI semiconductor company developing processing-in-memory (PiM) technology for ultra-low-power AI solutions.
Related Content
- Getting a Grasp on AI at the Edge
- Tiny machine learning brings AI to IoT devices
- Why MCU suppliers are teaming up with TinyML platforms
- Open-Source Development Comes to Edge AI/ML Applications
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post AI at the edge: It’s just getting started appeared first on EDN.
Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly

Back in mid-2019, I noted that the ability to discern high quality music and other audio playback (both in an absolute sense and when relatively differentiating between various delivery-format alternatives) was dependent not only on the characteristics of the audio itself but also on the equipment used to audition it. One key link in the playback chain is the speakers, whether integrated (along with crossover networks and such) into standalone cabinets or embedded in headphones, the latter particularly attractive because (among other reasons) they eliminate any “coloration” or other alteration caused by the listening room’s own acoustical characteristics (not to mention ambient background noise and imperfect suppression of its derogatory effects).
However, as I wrote at the time, “The quality potential inherent in any audio source won’t be discernable if you listen to it over cheap (i.e., limited and uneven frequency response, high noise and distortion levels, etc.) headphones.” To wit, I showcased three case study examples from my multi-headphone stable: the $29.99 (at the time) Massdrop x Koss Porta Pro X:
$149.99 Massdrop x Sennheiser HD 58X Jubilee:
and $199.99 Massdrop x Sennheiser HD 6XX:
I’ve subsequently augmented the latter two products with optional balanced-connection capabilities via third-party cables. Common to all three is an observation I made about their retail source, Drop (formerly Massdrop): the company “partners with manufacturers both to supply bulk ‘builds’ of products at cost-effective prices in exchange for guaranteed customer numbers, and (in some cases) to develop custom variants of those products.” Hold that thought.
And I've subsequently added another conventional-design headphone set to the menagerie: Sony's MDR-V6, a "colorless" classic that dates from 1985 and is still in widespread recording studio use to this day. Sony finally obsoleted the MDR-V6 in 2020 in favor of the MDR-7506, the more recent MDR-M1, and other successor models, which motivated my admitted acquisition of several gently used MDR-V6 examples off eBay:
One characteristic that all four of these headphones share is that, exemplifying the most common headphone design approach, they’re all based on electrodynamic speaker drivers:
At this point, allow me a brief divergence; trust me, its relevance will soon be more obvious. In past writeups I've done on various kinds of both speakers and microphones, I've sometimes substituted the alternative term "transducer", a "device that converts energy from one form to another," for both words. Such interchange is accurate; even more precise would be an "electroacoustic transducer", which converts between electrical signals and sound waves. Microphones input sound waves and output electrical signals; with speakers, it's the reverse.
I note all of this because electrodynamic speaker drivers, specifically in their most common dynamic configuration, are the conceptual mirror twins to the dynamic microphones I more recently wrote about in late November 2022. As I explained at the time, in describing dynamic mics’ implementation of the principle of electromagnetic induction:
A dynamic microphone operates on the same basic electrical principles as a speaker, but in reverse. Sound waves strike the diaphragm, causing the attached voice coil to move through a magnetic gap creating current flow as the magnetic lines are broken.
Unsurprisingly, therefore, the condenser and ribbon microphones also discussed in that late 2022 piece also have (close, albeit not exact, in both of these latter cases) analogies in driver design used for both standalone speakers and in headphones. Condenser mics first; here’s a relevant quote from my late 2022 writeup, corrected thanks to reader EMCgenius’s feedback:
Electret condenser microphones (ECMs) operate on the principle that the diaphragm and backplate interact with each other when sound enters the microphone. Either the diaphragm or backplate is permanently electrically charged, and this constant charge in combination with the varying capacitance caused by sound wave-generated varying distance between the diaphragm and backplate across time results in an associated varying output signal voltage.
Although electret drivers exist, and have found use both in standalone speakers and within headphones, their non-permanent-charge electrostatic siblings are more common (albeit still not very common). To wit, an excerpt from a relevant section of Wikipedia’s headphones entry:
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated…A special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1,000 volts.
Now for ribbon microphones; here’s how Wikipedia and I described them back in late 2022:
A type of microphone that uses a thin aluminum, duraluminum or nanofilm of electrically conductive ribbon placed between the poles of a magnet to produce a voltage by electromagnetic induction.
Looking at that explanation and associated image, you can almost imagine how the process would work in reverse, right? Although ribbon speakers do exist, my focus for today is their close cousins, planar magnetic (also known as orthodynamic) speakers. Wikipedia again:
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
I’ve chronologically ordered electrostatic and planar magnetic driver technologies based on their initial availability dates, not based on when examples of them came into my possession. Specifically, I found a good summary of the two approaches (along with their more common dynamic driver forebear) on Ken Rockwell’s always-informative website, which is also full of lots of great photography content (it’s always nice to stumble across a kindred interest spirit online!). Rockwell notes that electrostatics were first introduced in 1957 [editor note: by Stax, who’s still in the business], and “have been popular among enthusiasts since the late 1950s, but have always been on the fringe as they are expensive, require special amplifiers and power sources and are delicate—but they sound flawless.” Conversely, regarding planar magnetics, which date from 1972, he comments, “Planar magnetic drivers were invented in the 1970s and didn’t become popular until modern ultra-powerful magnet technology become common in the 2000s. Planar magnetics need tiny, ultra powerful magnets that didn’t used to exist. Planar magnetics offer much of the sound quality of electrostatics, with the ease-of use and durability of conventional drivers, which explains why they are becoming more and more popular.”
Which takes us, roughly 1,200 words in, to the specifics of my exotic headphone journey, which began with two sets containing planar magnetic drivers. Back in late May 2024, Woot! was selling the Logitech for Creators Blue Ella headset (Logitech having purchased Blue in mid-2018) for $99.99, versus the initial $699.99 MSRP when originally introduced in early January 2017. The Ella looked (and still looks) weird, and is also heavy, albeit surprisingly comfortable; the only time I’ve ever seen anyone actually using one was a brief glimpse on Trey Parker and Matt Stone’s heads while doing voice tracks for South Park within the recently released Paramount+ documentary ¡Casa Bonita Mi Amor!. But reviewers rave about the headphones’ sound quality, a headphone amplifier is integrated for use in otherwise high impedance-unfriendly portable playback scenarios, and my wife was bugging me for a Father’s Day gift suggestion. So…
A couple of weeks later, a $10-off promotional coupon from Drop showed up in my email inbox. Browsing the retailer’s inventory, I came across another set of planar magnetic headphones, the Drop + HIFIMAN HE-X4 (remember my earlier comments about Drop’s longstanding history of partnering with name-brand suppliers to come up with custom product variants?), at the time selling for $99.99. They were well reviewed by the Drop community, and looked much less…err…alien…than the Blue Ella, so…(you’ve already seen one stock photo of ‘em earlier):
Look how happy she is (in spite of how big they are on her head)!
And of course, with two planar magnetic headsets now in the stable, I just had to snag an electrostatic representative too, right? Koss, for example, has been making (and evolving) them ever since 1968's initial ESP/6 model:
The most recent ESP950 variant came out in 1990 and is still available for purchase at $999.99 (or less: Black Friday promotion-priced at $700 on Amazon as I type these words). Believe it or not, it’s one of the most cost-effective electrostatic headphone options currently in the market. Still, its price tag was too salty for my curiosity taste, lifetime factory warranty temptation aside.
That box to the right is the “energizer”, which tackles both the aforementioned high voltage generation and output signal amplification tasks. Koss includes with the ESP950 kit, believe it or not, a 6 C-cell battery pack to alternatively power the energizer (therefore enabling use of the headphones) when away from an AC outlet. Portability? Hardly, although in fairness, the ESP950 was originally intended for use in live recording settings.
But then I stumbled across the fact that back in April 2019, Drop (doing yet another partnership with a brand-name supplier, this one reflective of a long-term multi-product engagement also exemplified by the earlier-shown Porta Pro X) had worked with Koss to introduce a well-reviewed $499.99 version of the kit called the Massdrop x Koss ESP/95X Electrostatic System:
Drop tweaked the color scheme of both the headphones themselves (to midnight blue) and the energizer, swapped out the fake leather (“pleather”) earpads for foam ones wrapped in velour, and dropped both the battery pack and the leather case (the latter still available for purchase standalone for $150) from Koss’s kit to reduce the price point:
Bad news: at least for the moment, the ESP/95X is no longer being sold by Drop. Good news: I found a gently used kit on eBay for $300 plus shipping and tax (and for likely obvious reasons, I also purchased a two-year extended warranty for it).
And what did all of this “retail therapy” garner me? To set the stage for this section, I’ll again quote from the introduction to Ken Rockwell’s earlier mentioned writeup:
Almost all speakers and headphones today are “dynamic.”
Conventional speakers and headphones stick a coil of wire inside a magnet, and glue this coil to a stiff cone or dome that’s held in place with a springy suspension. Current passes through this coil, and electromagnetism creates force on the coil while in the field of the magnet. The resulting force vibrates the coil, and since it’s glued to a heavy cone, moves the whole mess in and out. This primitive method is still used today because it’s cheap and works reasonably well for most purposes.
Dynamic drivers are the standard today and have been the standard for close to a hundred years. These systems are cheap, durable and work well enough for most uses, however their heavy diaphragms and big cones lead to many more sound degrading distortions and resonances absent in the newer systems below.
By “newer systems below”, of course, he’s referring to alternative electrostatic and planar magnetic approaches. And although he’s not totally off-base with his observations, the choice of words like “primitive method” reveals a bias, IMHO. It’s true that the large, flat, thin and lightweight membrane-based approaches have inherent (theoretical, at least) advantages when it comes to metrics such as distortion and transient response, leading to descriptions such as “unmatched clarity and impressive detail”, which admittedly concur with my own ears-on impressions. That said, theoretical benefits are moot if they don’t translate into meaningful real-life enhancements. To wit, for a more balanced perspective, I’ll close with a (fine-tuned by yours truly) post within an August 2023 discussion thread titled “Is there really any advantage to planar magnetics or electrostats?” at Audio Science Review, a site that I regularly reference:
For electrostatics, the strong points are the low membrane weight and drive across the entire membrane. The disadvantage is output level. The driver surface area is big, which has advantages and disadvantages. One can play with shape to change modal behavior. Electrostatics are difficult to drive in the sense that they require a bias voltage (or electret charge) and high voltage on the plates, which necessitates mains voltage or converters. Mechanical tension is a must and 'sticking' to one stator is a potential problem.
For planar magnetics, the strong points are the maximum sound pressure level (SPL), linearity and the driver size. The latter can be both a blessing and (frequency-dependent) downside. Fewer tuning methods are available, and it is difficult to get a bass boost in a passive way. The magnets obstruct the sound waves more than does the stator of electrostatic planars, which has an influence on mid to high frequencies. Planar magnetics are easier to drive than electrostatics but in general are inefficient compared to dynamic drivers, especially when high SPL is needed with good linearity. They are heavy (weight) due to the magnets compared to other drivers. They can handle a lot of power. They need closed front volume to work properly.
Dynamics can have a much higher efficiency, at the expense of maximum undistorted SPL. They can be used directly from low power sources. There are many more ways to ‘shape’ the sound signature of the driver, and the headphone containing it. They are less expensive to make, and lighter in weight. Membrane size and shape can both find use in controlling modal issues. Linearity (max SPL without distortion) can be much worse than planar alternatives, although for low to normal SPLs, this usually is not an issue.
Balanced armature drivers [editor note: an alternative to dynamic drivers not discussed here, commonly found in earbuds] are smaller and can be easily used close to the ear canal. These drivers too have strong and weak points and are quite different from dynamic drivers. They are easier to make custom molds for due to their size.
In closing, speaking of “balance” (along with the just-mentioned difference between theoretical benefits and meaningful real-life enhancements), I found it interesting that none of the electrostatic or planar magnetic headphones discussed here offer the balanced-connection output (even optional) that I covered at length back in December 2020:
And with that, having just passed through the 2,500-word threshold, I’ll close for today with an as-usual invitation for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Balanced headphones: Can you hear the difference?
- Microphones: An abundance of options for capturing tones
- Is high-quality streamed audio an oxymoron?
- Earbud implementation options: Taking a test drive(r)
- Teardown: Analog rules over digital in noise-canceling headphones
- Audio Perceptibility: Mind The Headphone Sensitivity
The post Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly appeared first on EDN.