EDN Network

Voice of the Engineer

Multi-solar panel interconnections: Mind the electrons’ directions

Tue, 01/21/2025 - 18:13

The concluding post in a series; here’s part 1 (covering the Energizer PowerSource Pro Battery Generator) and part 2 (discussing its companion 200W portable solar panel)…

Eagle-eyed readers may have already noticed, within the listing for the Energizer Solar Bundle I bought at the beginning of September (and returned shortly thereafter), the following prose:

Pick up another 200-watt solar panel on SideDeal to max out the solar capacity of 400W (but do take note of these instructions)

That I did, for an incremental $179.99. It was (past tense usage because I ended up returning it, too) a “Duracell Heavy Duty 200-Watt Briefcase Solar Panel”, and it also came from Battery Biz. Here are some “stock” images:

Along with a bullet list of specs:

  • Brand: Duracell
  • Material: Monocrystalline Silicon [editor note: claimed conversion efficiency of 22%]
  • Dimensions (unfolded): 55.2 x 36.5 x 1.3 in
  • Item Weight: 36.2 lbs
  • Maximum Power Point Voltage: 18.6 V
  • Maximum Power Point Current: 10.8 A
  • Open Circuit Voltage: 22.8 V
  • Short Circuit Current: 11.2 A
  • Maximum System Voltage (Vmax): 1000 VDC
  • Normal Operating Cell Temperature (NOCT): 45±2°C
  • Temperature Range: -40°C to +85°C
  • Power Tolerance: ±5%
  • Application Class: Class A
  • Included Components: Panel
  • Maximum Power: 200 Watts
  • Manufacturer: Duracell
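
As a quick consistency check on those specs, the maximum-power-point voltage and current should multiply out to roughly the 200-W rating, within the listed ±5% power tolerance:

```python
# Cross-check the listed maximum-power-point figures against the
# 200 W rating: P = Vmp * Imp, compared within the ±5% tolerance.
vmp = 18.6    # Maximum Power Point Voltage, V
imp = 10.8    # Maximum Power Point Current, A
p_rated = 200.0
p_calc = vmp * imp
within_spec = abs(p_calc - p_rated) / p_rated <= 0.05
print(f"{p_calc:.2f} W, within tolerance: {within_spec}")
```

They agree to well under 1%, which also suggests the listing's "V" unit on the current spec is a simple typo for "A".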

When it arrived, the outer packaging was…err…already breached:

Thankfully, the box inside was still intact:

although, as you may have already noticed, the product image on the front didn’t match the “stock” shots, which left me a bit disconcerted before I even opened it.

Here’s the box backside:

Take the “unfolded” dimensions listed in the earlier bullet list, divide the largest by two, and you end up with an approximation of the data shown in this closeup image, which was reassuring:

Open the box and pull out the hefty contents (the promotional prose claims that you can “Experience the convenience of home no matter where you are with our lightweight and foldable solar panel”; foldable = yes, lightweight = no), and the “Zippered Carrying Case with Handle” conceptualized in the previous photo’s graphic is the first thing you’ll come across:

Chug a can of spinach, unzip the case and huff, puff and pull the bulky panel out:

Now undo the clips on the “handle” edge and unfold it:

Like I said, it looks nothing like the “stock” photos, although it ended up being fully functional.

I only realized after returning it that I’d neglected to snap a photo of the backside, so another closeup of the box will have to do:

The cable coming out of the back of the panel terminates in a pair of MC4 connectors:

The also-included adapter cable converts the MC4s into an Anderson PowerPole PP15-45 connector for use with the Energizer PowerSource Pro Battery Generator (along with other Anderson-compatible battery-based products, of course, including Duracell’s own):

Its measured open-circuit output voltage closely approximated the earlier-listed max spec:

and it did charge up the PowerSource Pro reasonably speedily in direct sunlight:

Had I been able to combine it with the Energizer solar panel as intended, the combo likely would have been speedier still from a charging-rate standpoint. But from my earlier writeup, you already know about the connector woes that prevented such an arrangement and the resultant experiment. And since the other gear I’d bought ended up being subpar and was sent back to the retailer, I had no other use for this panel either, so it got returned for a full refund, too.

That all said, for the remainder of this writeup I’d like to delve a bit deeper into the suggested dual-panel configuration linked to within the initial Meh teaser and replicated here:

To a first conceptual approximation, you can think of two (or more) solar panels as batteries. Connect them in series and you boost the effective output voltage. Tether them in parallel, conversely, and the aggregate output current goes up. That said, as my research has enlightened me, they also exhibit important differences from batteries in both possible configurations. These variances derive from two fundamental keywords: batteries (ironically) and shade (i.e., “dark”).

Rarely if ever is the solar illumination of even a single panel uniform, much less across multiple panels. This inconsistency is due to various factors, such as imperfect orientation relative to the sun’s current position in the sky, and the aforementioned shading caused by clouds, trees, and other partially-to-fully obscuring intermediate objects. For series-interconnected panels, illumination inconsistency means that the effective current you’ll be able to squeeze out of a multi-panel configuration is constrained by the least-illuminated panel in the chain. And you’ll of course also want to make sure that the aggregate voltage generated by the multi-panel series arrangement in full illumination doesn’t exceed the max input voltage of whatever it’s driving.
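
To make the series-string behavior concrete, here’s a minimal first-order sketch (the panel numbers are illustrative, not measurements from either panel):

```python
# First-order model of series-connected panels: voltages add, while
# the string current is capped by the least-illuminated panel.
def series_string(panels):
    """panels: list of (voltage_V, available_current_A) per panel."""
    voltage = sum(v for v, _ in panels)
    current = min(i for _, i in panels)
    return voltage, current

# Two 18.6 V / 10.8 A panels, the second at 40% illumination: the
# string voltage doubles, but the whole string is throttled to ~4.3 A.
v, i = series_string([(18.6, 10.8), (18.6, 10.8 * 0.4)])
print(f"{v:.1f} V at {i:.2f} A")
```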

What about multi-panel parallel hookups such as the one recommended in the Energizer documentation? To tether them together requires a combiner cable such as this one I’d bought:

Minimally, you’ll want to ensure that the output voltages of both panels match (for reasons you’ll soon understand) and, this time, that the aggregate current generated by the multi-panel parallel arrangement in full illumination won’t exceed the max input current of whatever it’s driving. “What it’s driving” in my particular case was the Energizer PowerSource Pro Battery Generator, supposedly. Which is where the words “shade” and “battery” fully come to the fore.

Assume first that the combiner cable simply merges the panels’ respective positive and negative feeds, with no intermediary electronics added between them and the electrons’ intended destination. What happens, first, if all the parallel-connected panels are in shade (or, per my earlier “dark” surrogate wording, it’s nighttime)? If the generator is already charged up, its battery pack’s voltage potential will be higher than that of the panels themselves, resulting in possible reverse current flow from the generator to the panels. Further, what happens if there’s an illumination discrepancy between the panels? Here again there’ll be a voltage-potential differential, this time between them. And so, in this case, even if they’re still charging up the generator’s batteries as intended, there’ll also be charging-rate-inefficient (not to mention potentially damaging; keep reading) current flow from one panel to the other.
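
The direction of current flow in those scenarios follows from nothing more than which side of the connection sits at the higher potential. A toy illustration (the voltages are hypothetical):

```python
# Sign of conventional current at a panel's terminals in a parallel
# hookup, ignoring blocking diodes and wiring resistance: current
# flows from the higher potential toward the lower one.
def current_direction(panel_v, bus_v):
    if panel_v > bus_v:
        return "panel sources current into the bus"
    if panel_v < bus_v:
        return "reverse current flows into the panel"
    return "no net flow"

print(current_direction(22.8, 14.4))  # full-sun panel vs. charged pack
print(current_direction(3.0, 14.4))   # shaded panel vs. charged pack
```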

The result, described in this crowded diagram from the same combiner-cable listing on Amazon:

is what’s commonly referred to as a “hotspot” on one or all panels. Whether or not it negatively impacts panel operating lifetime is, judging from the online discussions I’ve auditioned, a topic of no shortage of debate, although I suspect that at least some folks who are skeptical are also naïve…which leads to my next point: how do you prevent (or at least minimize) reverse current flow back to one or both panels? With high power-tolerant diodes, I’ll postulate.

Those folks who think you can direct-connect multiple panels in parallel with nothing but wire? What I suspect they don’t realize is that there are probably reverse-current-suppressing diodes already in the panels, minimally one per panel but often multiple (since each panel, particularly for large-area models, is composed of multiple sub-panels stitched together within the common frame). The perhaps-already-obvious downside of this approach is that there’s a forward-bias voltage drop across each diode, which runs counter to the aspiration of pushing as much charge power as possible to the destination battery pack. To that point, I suspect this is precisely what this Amazon reviewer is experiencing (also check out the video at the source link):

So I can draw about 86 to 92 watts to my Jackery with the cable from the solar panel but as soon as I plug in the splitter, it drops down to about 60-70 watts. When I plug in both 100 watt solar panels, I get about 120-130 watts. So the splitter somehow loses power when hooked up.
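
The reviewer’s numbers are directionally consistent with diode losses, though wiring resistance and charge-controller behavior surely play a part as well. A rough per-diode estimate (the forward voltages are typical datasheet figures, not measured values):

```python
# Power dissipated in one series blocking diode: P = I * Vf.
def diode_loss_w(current_a, vf_v):
    return current_a * vf_v

imp = 10.8  # a 200 W panel's max-power-point current, A
print(f"silicon rectifier (~0.7 V): {diode_loss_w(imp, 0.7):.1f} W")
print(f"Schottky (~0.4 V): {diode_loss_w(imp, 0.4):.1f} W")
```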

If you look closely at the earlier “crowded diagram” you can see a blurry image of what the combiner cable’s circuitry supposedly looks like inside:

Prior to starting this writeup, I’d returned the original combiner cable I bought, since due to my in-parallel return of the Duracell and Energizer devices, I no longer needed the cable, either. But I’ve just re-bought one, to satisfy my own “what’s inside” research-induced curiosity, which I’ll share with you in a teardown to come. Until then, I welcome your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Multi-solar panel interconnections: Mind the electrons’ directions appeared first on EDN.

Anatomy of MCUs for motor control

Tue, 01/21/2025 - 16:49

Control MCUs increase performance and efficiency of motor control and power conversion systems by enabling real-time control of systems that need to respond to real-time events with minimal delay and low utilization. These MCUs are targeted at battery-powered applications with motors, large appliances, HVAC units, and power supplies for server farms and cloud computing centers.

Read full story at EDN’s sister publication, Planet Analog.


A beginner’s guide to power of IQ data and beauty of negative frequencies – Part 2

Mon, 01/20/2025 - 15:51
Introduction and editor’s note

This is a two-part series where DI authors Damian and Phoenix Bonicatto explore the IQ signal representation and negative frequencies. Part 1 explains the commonly used SDR IQ signal representation and negative frequencies without the complexity of math.

This final part (Part 2) presents a device that allows you to play with and display live SDR signal spectrums with negative frequencies.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Defining real and imaginary signals

Let’s start off with something I left out on purpose in Part 1: there are two more terms we will use. A “real” signal is the term used for the signal represented by the I data; an “imaginary” signal is the term used for the signal represented by the Q data. I left them out until now because they add to the obfuscation of the discussion. Just take them at face value, as there is nothing imaginary about an “imaginary” signal.
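
In code terms, the pairing is literal: each (I, Q) sample pair is treated as one complex number, I + jQ. A minimal numpy sketch, with signal parameters borrowed from later in the article:

```python
import numpy as np

fs, f, n = 5000, 1500, 64            # sample rate, tone, sample count
t = np.arange(n) / fs
i_data = np.cos(2 * np.pi * f * t)   # the "real" signal
q_data = np.sin(2 * np.pi * f * t)   # the "imaginary" signal
s = i_data + 1j * q_data             # one complex sample stream

print(s.dtype)   # complex128: ordinary arithmetic, nothing mystical
```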

The “I/Q Explorer” design

Okay, so what we are going to make is an Arduino Nano-based device that can generate “real” and “imaginary” data (I and Q samples) per your settings of frequency, amplitude, and phase. It will then display a spectrum showing positive and negative frequencies. We will not use the FFT feature of your scope; instead, we will generate the spectrum in the MCU and send it out to the oscilloscope to form a trace in the shape of the spectrum. In fact, we will generate a spectrum for the real signal, the imaginary signal, and the magnitude of the combination (the square root of the sum of the squares of the real and imaginary signals).
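
Here is a hedged numpy approximation of what the MCU computes, using the boot-up state described later (real amplitude 100, imaginary amplitude 0); the fftshift puts negative frequencies on the left, as on the scope display:

```python
import numpy as np

fs, f, n = 5000, 1500, 64
t = np.arange(n) / fs
i_data = 100 * np.cos(2 * np.pi * f * t)
q_data = np.zeros(n)                     # boot-up: imaginary amplitude 0

# Magnitude spectrum of the combined complex signal.
spec = np.abs(np.fft.fftshift(np.fft.fft(i_data + 1j * q_data)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))

pos = spec[freqs > 0].sum()
neg = spec[freqs < 0].sum()
print(f"+f energy {pos:.0f}, -f energy {neg:.0f}")  # near-mirror images
```

With the imaginary channel at zero, the energy splits almost evenly between +1500 Hz and -1500 Hz, just as Figure 3 shows.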

We will refer to this design as the I/Q Explorer (Figure 1).

Figure 1 The final I/Q Explorer design with an LCD for displaying menu items and values and a toggle switch to switch the scope between the time and frequency domains.

Let’s look at the schematic in Figure 2.

Figure 2 The schematic of the IQ Explorer device with an Arduino Nano, 4×20 LCD, two rotary encoders, a simple toggle switch, three analog outputs, three lines to go to the scope probes, a digital output, and BNC connectors.

You can see the I/Q Explorer has a relatively simple schematic. It’s based on an Arduino Nano and uses a 4×20 LCD for displaying menu items and values. The two rotary encoders are used to navigate through the menu and to change values. A simple toggle switch is used to change the scope view from the time domain to frequency domain. There are three analog outputs, created using PWM outputs and single pole RC low-pass filters. The three lines go to the scope probes (you don’t need to connect all three if you don’t have a 4-channel scope). There is also a digital output used as a trigger to sync the scope to the analog outputs. The only other electrical parts are two BNC connectors which can be used if you want to get signals from a signal generator, instead of the internal generated signals.
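
The PWM-plus-RC trick works because the filter’s cutoff sits far below the PWM carrier. The R and C values below are assumptions for illustration (the actual values are in the downloadable BOM); the single-pole cutoff formula is fc = 1/(2πRC):

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    # Single-pole RC low-pass corner frequency.
    return 1 / (2 * math.pi * r_ohms * c_farads)

# Hypothetical 10 kOhm / 100 nF filter: ~159 Hz cutoff, well below the
# Nano's default ~490 Hz PWM rate, so the carrier is attenuated while
# the slowly varying duty cycle passes through as an analog voltage.
print(f"{rc_cutoff_hz(10e3, 100e-9):.0f} Hz")
```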

The test stand itself can be easily 3D printed (a link to the 3D files and Arduino code is at the end of the article). The test stand provides a place to mount the LCD, rotary encoders, and the time/frequency switch. Two BNC connectors can be installed on the left side. On the right side is a place to install four wires that will be used to clip the scope probes on (you also need to clip one of the leads to a ground). The rest of the circuitry can be laid out on a small solderless breadboard. The breadboard then fits on the back of the test stand. A document showing more images, the BOM, and additional instructions can be found in the download from the link at the end of the article.

Using the I/Q Explorer

Ok, what can you do with it now? First, the test stand has four bare wires that are used to connect the scope probes (also connect one scope-probe ground to digital ground on the breadboard). The top wire is the real data, the next one down is the imaginary data, then the magnitude, and the bottom one is the trigger. Connect the real, imaginary, and magnitude to three inputs of your scope (if you only have a two-channel scope, you can connect just the real and imaginary without missing much). The trigger can be connected to the last scope channel or to the external trigger of your scope.

Now, set the Freq/Time switch to Freq. On your scope, set the vertical to 2 V/div for each of the inputs. Also set the sweep to 20 ms/div.

Now you can power the system via the USB connection on the Arduino Nano. Spread out the traces on the scope and set the scope to trigger using the trigger input. You can now hide the trigger trace. The traces should look like what we see in Figure 3.

Figure 3 The starting spectrum from -2500 to +2500 Hz where there is signal energy at both -1500 and +1500 Hz.

Now adjust the horizontal position so the traces are centered. Ignore the time axis on the bottom and the voltage axis on the right side—they are irrelevant (turn them off if your scope has such a feature). The left-to-right axis is frequency with -2500 Hz on the left side and +2500 Hz on the right side. DC, or 0 Hz, is the center line. (On scopes with 10 major horizontal markings, each major marking measures off 500-Hz increments.) Vertical is only a relative, auto-scaled value, so it is best to keep the volts/div the same for the real, imaginary, and magnitude traces. So, in Figure 3 you can see we have signal energy at both +1500 Hz and -1500 Hz. (Note: on boot-up the real signal’s amplitude is 100 while the imaginary amplitude is 0.)
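
The axis relabeling is simple arithmetic: the shifted FFT spans the full sample rate, centered on 0 Hz:

```python
# With fs = 5000 samples/s, the shifted spectrum runs from -fs/2 to
# +fs/2 (-2500 to +2500 Hz). On a scope with 10 major horizontal
# divisions, that works out to 500 Hz per division.
fs = 5000
f_min, f_max = -fs / 2, fs / 2
hz_per_division = (f_max - f_min) / 10
print(f_min, f_max, hz_per_division)
```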

You will see a menu on the I/Q Explorer LCD. Rotating the left rotary encoder allows you to move up and down in the menu. The right rotary encoder allows you to change the value of the menu item next to the flashing cursor (second line from the top of the LCD). In the menu you can change the frequency, amplitude, and phase of the real (I data) or imaginary (Q data) signals. (Note that changing the frequency of either the real or imaginary signal will change the other; they need to be at the same frequency.) The menu also has items to print data, plot data, turn on/off windowing, turn on/off a mixer, and change the source for the FFT from the internally generated data to an external signal from a signal generator.

As a sanity check, Figure 4 is a plot of the real output from a simulation package using the same signals, sample rate, and number of samples. These are as follows: The real signal used has an amplitude of 100 and the imaginary signal has an amplitude of 0. The frequency of both is 1500 Hz. The sample rate is 5000 samples/second and the number of samples used in the FFT is 64.

Figure 4 Real FFT simulation using the same settings where the real signal has an amplitude of 100, the imaginary signal has an amplitude of 0, the frequency of both is 1500 Hz, the sample rate is 5000 samples/second, and the number of samples used in the FFT is 64.

You can see that the real signal matches nicely.

Adjusting real and imaginary signal settings

Let’s change the imaginary signal so it has the same amplitude as the real signal. Set the imaginary amplitude to 100. Now we get the following in Figure 5.

Figure 5 The FFT when imaginary and real amplitudes are the same (set to 100).

You can see we only have a signal at +1500 Hz…there is no negative frequency component.
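
You can reproduce the Figure 5 result in numpy: driving the imaginary channel with the quadrature (sine) copy at matching amplitude turns the pair into a single complex exponential, and the negative-frequency half cancels. (What little remains here is spectral leakage, since 1500 Hz doesn’t fall exactly on an FFT bin with these settings.)

```python
import numpy as np

fs, f, n = 5000, 1500, 64
t = np.arange(n) / fs
s = 100 * np.cos(2 * np.pi * f * t) + 1j * 100 * np.sin(2 * np.pi * f * t)

spec = np.abs(np.fft.fftshift(np.fft.fft(s)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
neg = spec[freqs < 0].sum()
pos = spec[freqs > 0].sum()
print(f"negative/positive energy ratio: {neg / pos:.2f}")  # well below 1
```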

If we flip the Freq/Time switch to “Time”, we will be able to see the time-domain samples used to generate the FFT, without adjusting anything on the scope.

Mixer mode

Now let’s try quadrature (complex-to-complex) mixing. Move down in the menu and turn on “Mixer” (more on moving and adjusting settings in the download). This will do the mix using the real and imaginary signals you have set, plus a fixed signal with its real and imaginary frequencies set to 1000 Hz, amplitudes set to 100, and phases set to 0. What you will see at first is shown in Figure 6.

Figure 6 Quadrature mixing of 1500 Hz with 1000 Hz, showing the signal at the 500-Hz difference frequency with no sum-frequency image.

You can see two things. First, the signal is now at 500 Hz, the difference between the 1500 Hz of our signal and the 1000 Hz of the mix signal. Second, the image that a conventional mixer would create at the 2500-Hz sum frequency (1500 Hz + 1000 Hz) has been removed by the mathematics of the quadrature mixer.
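
The “mathematics of the mixer” is just complex multiplication. Multiplying by exp(-j2π·f_lo·t) shifts every frequency down by f_lo, sum-frequency image and all, as this numpy sketch shows (n is chosen here so the tones land on exact FFT bins):

```python
import numpy as np

fs, f_sig, f_lo, n = 5000, 1500, 1000, 5000
t = np.arange(n) / fs
s = np.exp(1j * 2 * np.pi * f_sig * t)          # tone at +1500 Hz
mixed = s * np.exp(-1j * 2 * np.pi * f_lo * t)  # quadrature mix

spec = np.abs(np.fft.fftshift(np.fft.fft(mixed)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
peak = freqs[np.argmax(spec)]
print(f"peak at {peak:.0f} Hz")   # the 500 Hz difference, no image
```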

Next, change the frequency of the real or imaginary signal (remember, one will change the other) to 500 Hz. Note that as you turn the dial, the FFT peak will slide down as you approach 500 Hz, and as you go past 1000 Hz you will see it move into the negative-frequency area. When you get to 500 Hz you will see the result shown in Figure 7.

Figure 7 Quadrature (complex-to-complex) mixing of 500 Hz with 1000 Hz.

You now have generated a negative frequency—as simple as that.

Note that you can also see the plot of any of the displayed FFTs on the PC by using the “Serial Plot” selection. You can also get a view of the numerical data by using the “Serial Print Data” selection.

More fun IQ manipulation

So now that you’ve seen a few examples, you can explore I/Q data and quadrature mixing by changing things like the amplitude of the real and/or imaginary signals. After you get a grasp of that, try changing the phase of either. To get a real handle on this, see if you can manipulate the trig equations to better understand how you arrive at that FFT. The system also has a windowing function you can play with, which adjusts the amplitudes of the data points before the FFT is calculated (for those who know these things, it is a Hann window).
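
For the curious, the Hann window is nothing more than a raised cosine applied sample-by-sample before the FFT; this sketch matches numpy’s built-in definition:

```python
import numpy as np

n = 64
k = np.arange(n)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * k / (n - 1))  # raised cosine

# Tapering the record to zero at both ends reduces spectral leakage
# from tones that don't land exactly on an FFT bin.
windowed = hann * np.cos(2 * np.pi * 1500 * k / 5000)
print(hann[0], hann[-1])   # both ends taper to (essentially) zero
```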

Remember, you can also use actual external signals, such as those from a two-channel signal generator, to perform the same tests. You’ll note, though, that the FFTs are not quite as stable. There is more information on that in the downloadable notes.

One last thought: you’ll find lots of discussion on whether negative frequencies are real (in the traditional sense) or not. My two cents is that they are not physically real, as they cannot be transmitted as a single waveform, but they are a very nice mathematical construct that allows us to manipulate the signals that we can receive. Let the arguing begin.

To download the 3D printable files and a document containing more operating info, a BOM, and extra photos, visit: https://makerworld.com/en/models/1013533#profileId-993290

The Arduino Nano code can be found at: https://github.com/DamianB2/IQ_Explorer

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.


Nvidia, TSMC, and advanced packaging realignment in 2025

Mon, 01/20/2025 - 10:27

Nvidia’s CEO Jensen Huang has made waves by saying that his company’s most advanced artificial intelligence (AI) chip, Blackwell, will transition from CoWoS-S to CoWoS-L advanced packaging technology. That also shows how TSMC’s advanced packaging technology—chip on wafer on substrate (CoWoS)—is evolving to overcome interconnect bottlenecks inside large, powerful chips for AI and other high-performance computing (HPC) applications.

The CoWoS-S advanced packaging technology uses a single silicon interposer and through-silicon vias (TSVs) to facilitate the direct transmission of high-speed electrical signals between the die and the substrate. However, single silicon interposers often confront yield issues.

On the other hand, CoWoS-L, TSMC’s latest packaging technology, uses a local silicon interconnect (LSI) along with an RDL interposer to form a reconstituted interposer (RI) to enhance chip design and packaging flexibility. It also preserves the attractive feature of CoWoS-S in the form of TSVs while mitigating the yield issues arising from the use of large silicon interposers in CoWoS-S.

According to Reuters, Nvidia is selling its Blackwell chips as quickly as TSMC can manufacture them, but packaging has become a bottleneck due to capacity constraints. That’s startling because, as Huang noted, the amount of advanced packaging capacity at TSMC is probably four times the amount available less than two years ago.

Figure 1 CoWoS-L marks a significant advancement over CoWoS-S in terms of performance and efficiency for AI and HPC applications. Source: TSMC

Huang also told Taiwanese reporters that Nvidia is still producing Blackwell’s predecessor, Hopper, using TSMC’s CoWoS-S advanced packaging technology. “It’s not about reducing capacity. It’s actually increasing capacity into CoWoS-L.”

Advanced packaging in flux

Nvidia’s foray into new technology is a stark reminder of how quickly advanced packaging needs are changing. Apparently, the semiconductor industry is eyeing a new set of advanced packaging building blocks to dramatically increase the bandwidth and interconnect density of AI chips.

TSMC’s CoWoS is a 2.5D semiconductor packaging technology that increases the number of I/O points while reducing interconnect length between logic and memory components. However, emerging HPC workloads, particularly those related to AI training, demand even higher memory bandwidth due to frequent memory accesses.

CoWoS-L can stack up to 12 HBM3 devices at a lower cost than CoWoS-S and thus has the potential to become the mainstream CoWoS technology for future AI chips. Beyond CoWoS-L, TSMC is warming up to co-packaged optics (CPO), which replaces traditional electrical signal transmission with optical communications.

Figure 2 TSMC has made significant progress in its silicon photonics strategy by integrating CPO with advanced semiconductor packaging. Source: TrendForce

The current AI chips use copper interconnects, which increasingly face bottlenecks as bandwidths widen. In CPO, optical interconnect signals can achieve higher bandwidth than their electrical counterparts. For instance, CPO supports up to 1.6 Tbps of bandwidth, which is 1.8 times that of the Gen 4 NVLink interconnect used by Nvidia in its current GPUs. Moreover, power consumption is up to 50% lower.

According to Taiwanese media outlet UDN, TSMC has completed the development of CPO, and it plans to provide CPO samples to two of its major customers, Broadcom and Nvidia, later this year. Furthermore, UDN reports that TSMC plans to scale up the production of CPO in 2026.

AI chips demanding high logic-to-logic and logic-to-memory bandwidth are driving innovations in the advanced packaging realm. The move from CoWoS-S to CoWoS-L and the advent of CPO are harbingers of this pivot in the semiconductor industry ecosystem, which is now increasingly driven by AI applications.


100-V MOSFETs cut on-resistance

Thu, 01/16/2025 - 20:43

Renesas 100-V N-channel MOSFETs leverage an improved wafer manufacturing process with split gate technology, reducing on-resistance (RDS(on)) by 30%. The REXFET-1 process also cuts total gate charge (Qg) by 10% and gate-to-drain charge (Qgd) by 40%, according to the company.

Designed for high-power applications, these MOSFETs provide high-current switching in motor control, battery management systems, power management, and charging. Typical end products include electric vehicles, e-bikes, charging stations, power tools, and uninterruptible power supplies.

Both the RBA300N10EANS and RBA300N10EHPF MOSFETs feature a standard gate drive voltage of 2.0 V to 4.0 V. Other key specifications include an RDS(on) of 1.5 mΩ, drain current (ID) of 340 A, Qg of 170 nC, and Qgd of 30 nC.

In addition to enhanced electrical characteristics, the RBA300N10EANS and RBA300N10EHPF MOSFETs are offered in TOLL and TOLG packages, respectively. These packages are pin-compatible with devices from other manufacturers and 50% smaller than conventional TO-263 packages. The TOLL package also has wettable flanks for optical inspection.

The RBA300N10EANS and RBA300N10EHPF MOSFETs are now available in production volumes. Renesas also offers a reference design and application note to help shorten design cycles.

RBA300N10EANS product page

RBA300N10EHPF product page

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Reference design highlights GaN for motor drives

Thu, 01/16/2025 - 20:43

EPC offers a GaN-based motor drive inverter reference design for industrial and battery-powered applications. The EPC91200 demonstration board, built for 3-phase brushless DC motors, integrates the EPC2305, a 150-V, 3.0-mΩ GaN FET.

With a wide input voltage range of 30 V to 130 V, the EPC91200 supports 80-V and 110-V battery systems in industrial automation and material handling equipment. It delivers up to 40 A RMS (60 A peak) of output current and operates at PWM switching frequencies up to 150 kHz, demonstrating GaN technology’s efficiency, reliability, and adaptability in power systems.

An optimized PCB layout and GaN technology minimize resistance and heat generation, enhancing performance. The 130×100-mm demo board features current sensing, voltage monitoring, overcurrent protection, and temperature sensing. It also includes a preconfigured shaft encoder/Hall sensor interface and supports field-oriented control. Compatible with various controller boards from STMicroelectronics, Texas Instruments, and Microchip, the EPC91200 offers broad integration flexibility.

The EPC91200 reference design board costs $812.50 and is available from Digi-Key.

EPC91200 product page

Efficient Power Conversion 


GaN RF switch delivers 20 W

Thu, 01/16/2025 - 20:42

Built on a wide-bandgap GaN HEMT process, Teledyne’s TDSW84230EP reflective SPDT switch covers 30 MHz to 5 GHz, handling 20 W of continuous power. It is intended to replace PIN diode-based RF switches commonly used in the RF front ends of tactical and military communication radios.

The TDSW84230EP tolerates up to 900 mA/mm of saturation current, leveraging GaN’s high breakdown voltage and carrier density. Encased in a compact 3×3×0.8-mm, 16-pin QFN package, it offers 0.2-dB insertion loss and 45-dB port isolation, providing enhanced efficiency and saving board space over PIN diode architectures.

Qualified for operation over the military temperature range of -55°C to +125°C, the TDSW84230EP requires a positive supply voltage of 2.6 V to 5.25 V. Its internal charge pump is disabled to eliminate charge pump spurs in low-noise applications, while a -18-V supply is needed on the VCP pin.

The TDSW84230EP GaN RF switch is available now in commercial versions from Teledyne HiRel and authorized distributors.

TDSW84230EP product page 

Teledyne HiRel Semiconductors 


Murata introduces ultra-small chip inductor

Thu, 01/16/2025 - 20:42

At this month’s CES 2025 show, Murata unveiled what is claimed to be the world’s smallest 006003-inch (0.16×0.08 mm) chip inductor. This development offers a 75% volume reduction compared to the previous smallest product, the 008004-inch (0.25×0.125 mm) inductor.

“Following our success in introducing the world’s smallest multilayer ceramic capacitor (MLCC) in September 2024, our engineering teams are now developing a pioneering 006003-inch size chip inductor to further meet market demands,” says Takaomi Toi, general manager of Inductor Product Development at Murata Manufacturing.

“With the creation of the world’s smallest class prototype, we’re confident that this product represents an exciting addition to Murata’s extensive portfolio of market-leading chip inductors. This development continues to demonstrate Murata’s commitment to innovation and also marks a significant milestone in our quest to support the miniaturization and enhanced functionality of future electronic devices,” Toi said.

For more information about this chip inductor development, please contact Murata here.

Murata Manufacturing


EDA tool tackles 3D IC design challenges

Thu, 01/16/2025 - 20:42

GENIO EVO, an integrated chiplet/package EDA tool from MZ Technologies, addresses thermal and mechanical stress in the pre-layout stage of 3D IC design. Set to be demonstrated at this month’s Chiplet Summit, GENIO EVO is the second generation of MZ’s flagship GENIO cross-fabric platform for system design. Like its predecessor, GENIO EVO enables co-design of chiplets, dies, silicon interposers, packages, and surrounding PCBs to meet area, power, and performance targets.

GENIO EVO integrates seamlessly with existing commercial implementation platforms or custom EDA flows through plugins. Operating at the architectural level, it provides optimal system choices for 2.5D or 3D multi-die designs. A new user interface supports a cross-hierarchical, 3D-aware design methodology that streamlines the system design process. By integrating IC and advanced packaging design, it ensures full system-level optimization, shorter design cycles, faster time-to-manufacturing, and improved yields.

The platform identifies and analyzes thermal and mechanical failures. It supports architectural exploration and what-if analysis in the early design stages to improve implementation predictability. By planning and managing high-pin-count interconnects in complex multi-fabric designs, it anticipates and avoids downstream thermal and mechanical issues.

GENIO EVO is available for immediate licensing. For more information, click the link below.

MZ Technologies


The post EDA tool tackles 3D IC design challenges appeared first on EDN.

Add one resistor to allow DAC control of switching regulator output

Thu, 01/16/2025 - 18:04

Whether it’s buck, boost, or buck/boost, internal or external switch, milliamps or tens of amps, a veritable cornucopia of programmable-output switching regulator/converter chips is commercially available. While the required external Ls and Cs (of course) vary wildly from topology to topology and chip to chip, (almost) all use exactly the same basic two-resistor network for output voltage programming shown in Figure 1. Its example buck-type regulator was picked more or less arbitrarily, so please ignore the L and Cs and just focus on R1, R2, and (later) R3.

Figure 1 The (almost) universal regulator output programming network where Vout = Vsense × (R1/R2 + 1) = 0.8 V × (11.5 + 1) = 10 V.

Wow the engineering world with your unique design: Design Ideas Submission Guide

For reasons known only to gurus of the mystic and marvelous monolithic realm, the precision Vsense feedback node voltage varies from type to type over a roughly 3:1 range, from 0.5 V to 1.5 V. Recommended values for R1 vary too.

The point is the topology doesn’t vary. All (or at least most) conform faithfully to Figure 1. This surprising uniformity becomes very useful if your application requires DAC control of the output voltage. See Figure 2 for how this can be done with a positive polarity DAC and just one added resistor: R3.

Figure 2 Regulator output programming with a DAC and the KISS1 network, where Vout spans 0 to 10 V as Vc swings from 2.5 V down to 0 V (R1/R2 = Vomax/Vcmax = 4).

Given reasonable choices for the DAC (e.g., Vcmax = 2.5 V), numbers for R1 and Vsense from the regulator chip datasheet, and Vomax from your application requirements, here’s the KISS1 arithmetic:

  1. R2 = R1 Vcmax/Vomax
  2. R3 = R1/(Vomax/Vsense – R1/R2 – 1)

And, in the grand tradition of the KISS1 principle, that’s it. Ok, ok. Except maybe for a couple of (minor?) caveats. For example:

  1. Expression 2 above, and therefore the necessary value for R3, must shake out positive. I can’t think of a practical case where it wouldn’t, but there’s probably some perverse permutation of parameters out there where it won’t, and implementing negative resistors isn’t particularly simple.
  2. The relation between Vout and Vc is inverse. So, the digital version of Vc must be 1’s complemented (a totally KISS-bit of software arithmetic to flip all the bits, so 0s become 1s, and 1s become 0s) before being written to the DAC register.
  3. Vin must be adequate for the chosen chip to generate the chosen Vomax when Vc = 0. Duh.

So maybe it’s not really totally KISS1, just mostly.
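Those two expressions, plus the 1’s-complement caveat, are easy to sanity-check in a few lines of Python. The component values here (R1 = 100 kΩ, Vsense = 0.8 V, Vcmax = 2.5 V, Vomax = 10 V) are hypothetical stand-ins for whatever your chosen regulator’s datasheet dictates:

```python
def kiss_network(r1, v_sense, v_cmax, v_omax):
    """Return (R2, R3) for the one-added-resistor DAC programming network."""
    r2 = r1 * v_cmax / v_omax                    # expression 1
    r3 = r1 / (v_omax / v_sense - r1 / r2 - 1)   # expression 2
    if r3 <= 0:
        raise ValueError("R3 shook out non-positive; revisit parameters (caveat 1)")
    return r2, r3

def dac_code(v_out, v_omax, bits=12):
    """1's-complement the natural code, since Vout vs. Vc is inverse (caveat 2)."""
    full_scale = (1 << bits) - 1
    natural = round(v_out / v_omax * full_scale)
    return natural ^ full_scale                  # flip all the bits

r2, r3 = kiss_network(100e3, 0.8, 2.5, 10.0)
print(round(r2), round(r3))        # 25000 13333
print(dac_code(10.0, 10.0))        # 0 (full output needs Vc = 0)
```

With these numbers R3 lands near 13.3 kΩ (comfortably positive), and full-scale output corresponds to DAC code 0, consistent with the inverse Vout/Vc relation.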

1 Famous KISS principle: Is a footnote really necessary?

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Add one resistor to allow DAC control of switching regulator output appeared first on EDN.

How TMDs can transform semiconductor manufacturing

Thu, 01/16/2025 - 08:31

While semiconductors remain in high demand, electronics engineers must stay abreast of associated developments that could eventually affect their work. Case in point: significant advancements in transition metal dichalcogenides (TMDs).

These two-dimensional materials are of particular interest to electronics engineers due to their structural phase and chemical composition; they possess numerous properties advantageous to electronic devices.

2D materials like TMDs figure prominently in the future semiconductor manufacturing landscape. Source: Nature

The ongoing semiconductor shortage has caused some engineers to delay projects or alter plans to acquire readily available supplies rather than those that are challenging to source. However, the geographic concentration of physical resources is a more significant contributor to the shortage than actual scarcity.

When most of the critical raw materials used in semiconductor production come from only a few countries or regions, supply chain constraints happen frequently.

TMD learning curve

If it were possible to make the materials locally rather than relying on outside sources, electronics engineers and managers would enjoy fewer workflow hiccups. So, researchers are focusing on that possibility while exploring TMD capabilities. They are learning how to grow these materials in a lab while overcoming notable challenges.

One concern was making the growth occur without the thickness irregularities that often negatively affect other 2D materials. Therefore, this research team designed a shaped structure that controls the TMD’s kinetic activities during growth.

Additionally, they demonstrated an option to facilitate layer-by-layer growth by creating physical barriers from chemical compound substrates, forcing the materials to grow vertically. The researchers believe this approach could commercialize the production of these 2D materials. Their problem-solving efforts could also encourage others to follow their lead as they consider exploring how to produce and work with TMDs.

Semiconductor manufacturing is a precise process requiring many specific steps. For example, fluorinated gases support everything from surface-etching activities to process consistency. Although many production specifics will remain constant for the foreseeable future, some researchers are interested in finding feasible alternatives.

So, while much of their work centers around furthering the development of next-generation computer chips, succeeding in that aim may require prioritizing different materials, including TMDs. People have used silicon for decades. Although it’s still the best choice for some projects, electronics engineers and other industrial experts see the value in exploring other options.

Learning more about TMDs will enable researchers to determine when and why the materials could replace silicon.

TMD research phase

In one recent case, a team explored TMD defects and how these materials could impact semiconductor performance. Interestingly, the outcomes were not always adverse because some imperfections made the material more electrically conductive.

Another research phase used photoluminescence to verify the light frequencies emitted by the TMDs. One finding was that specific frequencies would characterize five TMDs with defects called chalcogen vacancies.

An increased understanding of common TMD defects and their impacts will allow engineers to determine the best use cases more confidently. Similarly, knowing effective and efficient ways to detect those flaws will support production output and improve quality control.

These examples illustrate why electronics engineers and managers are keenly interested in TMDs and their role in future semiconductors. Even if some efforts are not commercially viable, those involved will undoubtedly learn valuable details that shape their future progress.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.


Related Content


The post How TMDs can transform semiconductor manufacturing appeared first on EDN.

Tamron’s TAP-in Console: A nexus for camera lens update and control

Wed, 01/15/2025 - 20:33

Camera lenses were originally fully mechanical (and in some cases, still are; witness my Rokinon Cine optics suite). The user manually focused them, manually set the aperture, and manually zoomed them (for non-fixed-focal length optics, that is). Even when the aperture was camera body-controlled—in shutter-priority and fully auto-exposure modes, for example—the linkage between it and the lens was mechanical, not electrical, in nature.

Analogies between lenses and fly-by-wire aircraft are apt, however, as the bulk of today’s lenses are electronics-augmented and, perhaps more accurately, -dependent. Take, for example, optical image stabilization (OIS), which harnesses electromagnets paired with floating lens elements and multiple gyroscope and accelerometer sensors to counteract one-to-multiple possible variants of unwanted camera system movement:

  • Axis rotation (roll)
  • Horizontal rotation (pitch)
  • Vertical rotation (yaw)
  • And both horizontal and vertical motion (caused, for example, by imperfect panning)

Not only is OIS within the lens itself image quality-desirable (at admitted tradeoffs of added size, weight and cost), its effectiveness can be further boosted when paired with in-body image stabilization (IBIS) within the camera itself. Olympus’ (now OM System’s) Sync IS and Panasonic’s conceptually similar, functionally incompatible Dual I.S. are examples of this mutually beneficial coordination, which of course requires real-time bidirectional electronic communication. Why, you might ask, is OIS even necessary if IBIS already exists? The answer is particularly relevant for telephoto lenses, where the deleterious effects of camera system movement are particularly acute given the lens’s narrow angle of view, and where subtle movement may be less effectively corrected at the camera body versus at the other end of the long lens mounted to it.

More modest but no less electronics-dependent lens function examples include:

  • Motor-driven autofocus (controlled by focus-determining sensors and algorithms in the camera body)
  • Electronics-signaled, motor-based aperture control (some modern lenses even dispense completely with the manual aperture ring, relying solely on body controls instead)
  • And motor-assisted zoom

And user setting optimization (fine-tuned focus, for example) and customization (constraining the focus range to minimize autofocus-algorithm “hunting”, etc.) is also often desirable.

All these functions, likely unsurprisingly to you, are managed by in-lens processors running firmware which benefits from periodic updates to fix bugs, add features, and augment the compatibility list to support new camera models (a particularly challenging task for third-party lens suppliers such as aforementioned Rokinon, Sigma, and Tamron). I’ve come across several lens firmware update approaches, the first two most practically implemented when the camera and lens come from the same manufacturer (i.e., a first-party lens):

  • The lens’ new firmware image is downloaded to a memory card, which is inserted in the connected camera and activated via an update menu option or control button sequence
  • The lens and body are again mated, but this time the body is then USB-tethered to a computer running a manufacturer-supplied update utility
  • The lens is directly USB-tethered to the computer, with a manufacturer-supplied update utility then run. The key downside to this approach, therefore its comparative uncommonness, is that it requires a dedicated USB port on the lens, with both size and potential dust and water ingress impacts
  • And the approach we’ll be showcasing today, which relies on a lens manufacturer- and camera mount-specific USB port-inclusive intermediary docking station to handle communications between the lens and computer.

Specifically, today’s teardown victim is a Tamron TAP-in Console, this particular model intended for the Canon EF mount used by my Canon DSLRs and one of my BlackMagic Design video cameras (Nikon-mount stock images of the TAP-01 from Tamron’s website follow):

Here are some example screenshots of Tamron’s TAP-in Utility software in action, with my Mac connected to my Tamron 15-30 mm zoom lens via the TAP-01E dock intermediary:

along with my 100-400 mm zoom lens:

And both lenses post-firmware updates through the same utility:

Tamron isn’t the only lens manufacturer that goes the intermediary dock route. Here, for example, is Sigma’s UD-01 USB Dock in action with the company’s Optimization Pro software and two of that supplier’s Canon EF mount zoom lenses (24-105 and 100-400 mm) that I own:

Enough with the conceptual chitter-chatter, let’s get to real-life tearing down, shall we? In addition to the TAP-in Console I’ve already screenshot-shown you in action, which I bought used back in January 2024 from KEH Camera for $34.88, I’d subsequently picked up another one for teardown purposes off eBay open-box for about the same price. However, after it arrived and I confirmed it was also functional, I didn’t have the heart to disassemble perfectly good hardware in a potentially permanently destructive manner. I decided instead to hold onto it for future gifting to a friend who also owns Canon EF-mount Tamron lenses, and instead bought one claimed to be a “faulty spares-and-repairs” from MPB for $9. After it arrived, and to satisfy my curiosity, I decided to hook it up. It seems to work just fine, too! Oh well…

By the way, that dock-embedded LED shown in the first photo only illuminates when the TAP-in Utility software is running on the computer and detects a valid lens installed in the mount:

As usual, I’ll start out with some outer-box shots (yes, even though the dock was advertised as a “faulty spares-and-repairs” it still came with the original box, cable and documentation):

Open it up:

(I suspect that in its original brand-new condition there was more padding, etc. inside)

and the contents tumble out (I’m being overly dramatic; I actually lifted them out and placed them on my desk as shown):

Here’s the USB-A to micro-USB power-and-data cable:

Re the just-mentioned “data”, I always find it interesting to encounter a ferrite bead (or not) and attempt to discern whether there was a logical reason for its presence or absence (or not):

A bit of documentation (here’s a PDF version), supplemented by online video tutorials:

And last, but not least, our patient, already-seen LED end first, and as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Two side views: one of the micro-USB connector:

and another, of the lens release button:

Finally, here’s the mount end, first body-capped:

and now uncapped and exposed:

See those four screws around the shiny outer circumference? You know what comes next, right?

The now-unencumbered shiny metal ring, as it turns out, consists of two stacked rings. Here are the top and bottom views of the outer (upper) one:

and the even shinier inner (lower) one:

If you’re thinking those look like “springs” on the bottom, you’re not off-base:

With the rings gone, my attention next turned to the two screws at the inside top, holding a black-colored assembly piece in place:

Four more screws around the inside circumference:

In the process of removing them, the locking pin also popped out:

As you can see, the pin is spring-loaded and normally sticks out from the dock’s mount. When you mate a lens with the dock, with the former’s bayonet tabs aligned with the latter’s recesses, the lens mount presses against the pin, retracting it flush with the dock mount. Subsequently rotating the lens into its fully mounted position mates the pin with a matching indentation on the lens mount, allowing the pin to re-extend and locking the lens in place in the process.

Pressing the earlier-seen side release button manually re-retracts the pin, enabling rotation of the lens in the opposite direction for subsequent removal.

Onward. With the four screws removed:

the middle portion of the chassis lifts away, revealing the PCB underneath:

In the process of turning the middle portion upside-down, the release button (now absent its symbiotic locking pin partner) fell out:

I had admittedly been a bit concerned beforehand that the dock might be nothing more than a high-profit-margin (the TAP-in Console brand-new price is $59) “dummy” USB connection-redirector straight to the mount contacts, with the USB transceiver intelligence built into the lens itself. Clearly, and happily so, my worries were for naught:

Two screws hold the contacts assembly in place:

Four more for the PCB itself:

And with that, ladies and gentlemen, we have achieved liftoff:

Let’s zoom in (zoom…camera lens accessory…get it? Ahem…) on that PCB topside first:

As previously mentioned, the TAP-in Console comes in multiple product options for various camera manufacturers’ lens mounts. My pre-dissection working theory, in the hope that the dock wasn’t just a “dummy” USB connection-redirector as feared, was that the base PCB was generic, with camera manufacturer mount hardware customization solely occurring via the contacts assembly. Let’s see if that premise panned out.

At left is the micro-USB connector. At bottom is the connector for the ribbon cable that ends up at the mount contacts assembly (which we’ll see more closely shortly). But what’s that connector at the top for? I ended up figuring out the answer to that question indirectly, in the process of trying (unsuccessfully) to identify the biggest IC in the center of the PCB, marked:

846AZ00
F51116A
DFL

I searched around online for any other published references to “F51116A”, and found only one. It was for the Nikon version of the TAP-in Console (coincidentally the same version in the stock images at the beginning of this piece) and was in Japanese (which I can neither read nor speak), but Google Translate got me to something I could comprehend. Two things jumped out at me:

  • This time, the upper connector was used to ribbon-cable tether to the contacts assembly
  • And the IC was marked somewhat differently this time, specifically in the first line

734AZ00
F51116A
DFL

So, here’s my revised working theory. The PCB itself is the same (with confirmation that you’ll shortly see), as are the bulk of the components mounted to it. The main IC is either a PLD or FPGA appropriately programmed for the intended product model, a model-specific ASIC, or a microcontroller with camera mount-specific firmware. And depending on the product variant, either the top or bottom connector (or maybe both in some cases) gets ribbon-cable-populated.

Let’s flip the PCB over now:

Not much to see versus the other side, comparatively, although note the LED at bottom and another (also unpopulated this time) connector to the right of it. And to my recent comments, note that the stamp on the right:

TAMRON
AY042-901
-0000-K1

exactly matches the markings shown on the PCB in the Nikon-version teardown.

About that contacts assembly I keep mentioning…here’s the “action” (electrically relevant) end:

And here’s the seemingly (at least initially) more boring side:

I thought about stopping here. But those two screws kept calling to me:

And I’m glad I listened to them. Nifty!

With that I’ll wrap up and, after the writeup’s published, see if I might be able to get it back together again…functionally, that is…mindful of the Japanese teardown enthusiast’s comments that “The lens lock release switch part was a bit of a pain to assemble (lol).” Reader thoughts are as-always welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Tamron’s TAP-in Console: A nexus for camera lens update and control appeared first on EDN.

How neon lamps can replace LEDs in AC-powered designs

Wed, 01/15/2025 - 15:02

It’s not difficult to drive an LED indicator from the AC line, but it requires many active and passive components. It also poses safety challenges. EDN and Planet Analog blogger Bill Schweber explains how engineers can replace LEDs with neon lamps to design AC power-on indicators while addressing modern design challenges.

Read full story at EDN’s sister publication, Planet Analog.

Related Content


The post How neon lamps can replace LEDs in AC-powered designs appeared first on EDN.

Part 1: A beginner’s guide to the power of IQ data and beauty of negative frequencies

Tue, 01/14/2025 - 17:23

Editor’s Note: This is a two-part series where DI authors Damian and Phoenix Bonicatto explore IQ signal representation and negative frequencies to ease the understanding and development of SDRs.

Part 1 explains the commonly used SDR IQ signal representation and negative frequencies without the complexity of math.

Part 2 (to be published) presents a device that allows you to play with and display live SDR signal spectrums with negative frequencies.

Introduction

Software-defined radio (SDR) firmware makes extensive use of the I/Q representation of received and transmitted signals. This representation can simplify manipulation of the incoming signal. I/Q data also allows us to work with negative frequencies. My goal here is to explain the I/Q representation and negative frequencies without the complexity usually invoked by obscure terms and non-intuitive mathematics. Also, I will present a device that you can build to allow you to play with and display live spectrums with negative frequencies. So, let’s get started.

Wow the engineering world with your unique design: Design Ideas Submission Guide

I/Q and quadrature concepts

What is I/Q data? “I” is short for in-phase and “Q” is short for quadrature. It’s the first set of SDR terms that sounds mysterious and tends to put people off—let’s just call them I and Q. Simply, if you have a waveform, like you see on an oscilloscope, you can break it into two sinusoidal components—one based on a sine, and another based on a cosine. This is done by using the trig “angle sum identity”. The I and Q are the amplitudes of these components, so our signal is now represented as:

A·cos(ωt – φ) = I·cos(ωt) + Q·sin(ωt)

Where “A” is the original signal amplitude, φ is its phase, and:

I = A·cos(φ)
Q = A·sin(φ)

We have just created the in-phase signal, I*cos(ωt), and the quadrature signal, Q*sin(ωt). Just to add to the confusion, when we deal with the in-phase and quadrature signals together it is referred to as “quadrature signaling” …sigh.

[Note: In SDR projects IQ data (or I/Q data) is generally referring to the digital data pairs at each sample interval.]
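A quick numeric sketch of that decomposition, using the convention I = A·cos(φ) and Q = A·sin(φ) (one common sign convention; some texts use I·cos(ωt) – Q·sin(ωt) instead) and hypothetical values A = 2.0, φ = 30°, confirms the I and Q components rebuild the original cosine at every sample instant:

```python
import math

# Rebuild A*cos(wt - phi) from I and Q components; hypothetical A = 2.0, phi = 30 deg.
A, phi, f = 2.0, math.radians(30), 1000.0
I = A * math.cos(phi)   # in-phase amplitude
Q = A * math.sin(phi)   # quadrature amplitude

for n in range(8):                 # spot-check several instants at an 8 kHz rate
    t = n / 8000.0
    original = A * math.cos(2 * math.pi * f * t - phi)
    rebuilt = I * math.cos(2 * math.pi * f * t) + Q * math.sin(2 * math.pi * f * t)
    assert abs(original - rebuilt) < 1e-12
print("I, Q =", round(I, 4), round(Q, 4))   # I, Q = 1.7321 1.0
```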

An aside:

Most signal processing textbooks work with exponentials to describe and manipulate signals. For example, a transmitted signal is always “real” and is typically shown as something like:

s(t) = Re{A·e^j(ωt + φ)}
This is another formula that creates obfuscation and puts off people just starting out in signal processing and SDR. I will say that exponential notation creates cleaner mathematical manipulation, but my preference is to use the trig representation as I can see the signal in my mind’s eye as I manipulate the equations. Also, explaining your design to people who are not up on signal processing is much easier when using things everyone learned in high school. Note that, although most SDR simulations tools like MATLAB use the exponential for signal processing work, when it comes down to writing C code in an MCU, the trig representation is normally used.

Without going into it, this exponential representation is based on Euler’s formula, which is related to the beautiful and cleverly derived Euler’s equation.

Now, you may wonder why we would go through the trouble to convert the data to this quadrature form and what this form of the signal is good for. In receivers, for example, just using the incoming signal and mixing it with another frequency and extracting the data has worked since the early days of radio. To answer this, let’s look at a couple of examples.

Example of the benefits of quadrature form

First, when doing simple mixing of an incoming signal you get, as an output, two signals—the sum of the incoming signal and the mix frequency, and the difference of these two frequencies. The following equation demonstrates this by use of the trig product identity:

cos(ω1t)·cos(ω2t) = ½·cos((ω1 – ω2)t) + ½·cos((ω1 + ω2)t)
To continue in your receiver, you typically need to filter one of these out, usually the higher frequency. (The unwanted resultant frequency is often called the image frequency, which is removed by an image filter.) In a digital receiver this filter can take some valuable resources (cycles and memory). Using the I/Q form above, a mix can be created that removes either just the sum or just the difference without filtering.
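A minimal numeric sketch of that classical (real) mix, with hypothetical input (10 kHz) and mix (7 kHz) frequencies, confirms both the sum and difference products appear:

```python
import math

# Classical (real) mixing: cos(w1*t)*cos(w2*t) equals the half-amplitude sum
# of the difference and sum frequencies. Hypothetical: 10 kHz input, 7 kHz mix.
f1, f2 = 10e3, 7e3
for n in range(16):
    t = n / 100e3    # 100 kHz sample rate
    mixed = math.cos(2 * math.pi * f1 * t) * math.cos(2 * math.pi * f2 * t)
    identity = 0.5 * (math.cos(2 * math.pi * (f1 - f2) * t)
                      + math.cos(2 * math.pi * (f1 + f2) * t))
    assert abs(mixed - identity) < 1e-12
print("outputs at", abs(f1 - f2), "and", f1 + f2, "Hz")   # outputs at 3000.0 and 17000.0 Hz
```

Both products appear; a downstream image filter would be needed to remove the unwanted one.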

You can see how this works in Figure 1. First, define the mix signal in an I/Q format:

Mix Signal I part = cos(ωmt)
Mix Signal Q part = sin(ωmt)

Figure 1 Quadrature (complex-to-complex) mix returning the lower frequency.

(There is more to this, but this mix architecture is the basic idea of this technique.)

You can see that only the lower frequency is output from the mixer. If you want the higher frequency and to remove the lower frequency, just change where the minus sign is in the final additions as shown in Figure 2.

Figure 2 Quadrature mix returning the higher frequency.

This quadrature, or complex-to-complex, mixing is a very powerful technique in SDR designs.
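Python’s complex type offers a compact way to sketch the Figure 1 structure: treat I as the real part and Q as the imaginary part, and the four cross-products collapse into one complex multiply. The 5-kHz input, 3-kHz mix, and 48-kHz sample rate below are hypothetical:

```python
import cmath, math

# Complex-to-complex mix per Figure 1, with I on the real axis and Q on the
# imaginary axis. Hypothetical frequencies: 5 kHz input, 3 kHz mix, 48 kHz rate.
fs_rate = 48000.0
f_in, f_lo = 5000.0, 3000.0

def mix(n):
    t = n / fs_rate
    signal = cmath.exp(2j * math.pi * f_in * t)    # I + jQ input tone
    lo = cmath.exp(-2j * math.pi * f_lo * t)       # quadrature LO, chosen to keep f_in - f_lo
    return signal * lo                             # one multiply = Figure 1's cross-products

# A pure tone at f_in - f_lo advances its phase by the same amount every sample.
step = cmath.phase(mix(1) / mix(0))
print(round(step * fs_rate / (2 * math.pi)))       # 2000
```

Flipping the sign of the LO exponent keeps the sum (8 kHz) instead, mirroring the minus-sign swap of Figure 2.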

Next, let’s look at how I/Q data can allow us to play with negative frequencies.

When you perform a classical (non-quadrature) mix, any result that you get cannot go below a frequency of zero. The result will be two new frequencies: the sum of the input frequencies and the absolute value of the difference. This absolute value means the output frequencies cannot go negative. In a quadrature mixer the frequency is not constrained with an absolute value function, and you can get negative frequencies.

Let’s think about what this means if you are sweeping one of the inputs. In the classical mixer as the two input frequencies approach each other, the difference frequency will approach 0 Hz and then start to go back up in frequency. In a quadrature mixer the difference frequency will go right through 0 Hz and continue getting more and more negative.

One implication of this: in a sampled system working with real samples, the usable bandwidth is the sample rate divided by 2. When using a quadrature representation, you have a working bandwidth that is twice as large, spanning –fs/2 to +fs/2. This is especially handy when you have a system where you want to deal with a large range of frequencies at a time. You can move any of the frequencies to baseband; the higher frequencies will stay in their relative position in the positive frequencies; and the lower frequencies will stay in their relative positions in the negative frequencies. You can slide up and down, by mixing, without image filters or corrupting the spectrum with images. Another very powerful technique in SDR designs.
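To make the doubled working bandwidth concrete, here is a brute-force DFT sketch whose bins run from -N/2 to N/2 - 1 instead of 0 to N - 1; the -2-kHz tone and 16-kHz sample rate are hypothetical:

```python
import cmath, math

N, fs = 16, 16000.0
# A complex (IQ) tone at -2 kHz: impossible to represent with real samples alone.
tone = [cmath.exp(2j * math.pi * -2000.0 * n / fs) for n in range(N)]

def dft_bin(x, k):
    """DFT evaluated at bin k, where k may be negative."""
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x)) for n in range(len(x)))

# Search bins spanning -fs/2 to just under +fs/2 for the strongest response.
peak = max(range(-N // 2, N // 2), key=lambda k: abs(dft_bin(tone, k)))
print(peak * fs / N)   # -2000.0
```

The tone lands cleanly in the -2-kHz bin; a real-sampled system would have folded it on top of +2 kHz.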

A tool for exploring IQ data

This positive and negative spectrum is very interesting but unfortunately the basic FFT on your oscilloscope probably won’t display them. It typically only displays positive frequencies. Vector signal analyzers can display negative frequencies, but not all labs have one. You can play around in tools like MATLAB, but I usually like something a little closer to actual hardware and more real-time to get a better feel for the concept. A signal generator and a scope always help me. But I already said a scope does not display negative frequency. Well, the tool presented in Part 2 will allow us to play with I/Q data, negative frequencies, and mixing.

[Editor’s Note: An Arduino-Nano-based device will be presented in Part 2 that can generate IQ samples based upon user frequency, amplitude, and phase settings. This generated data will then display the spectrum showing both positive and negative frequencies. Stay tuned for more!]

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content


The post Part 1: A beginner’s guide to the power of IQ data and beauty of negative frequencies appeared first on EDN.

The 2025 CES: Safety, Longevity and Interoperability Remain a Mess

Mon, 01/13/2025 - 18:40

Once again this year, I’m thankfully reporting on CES (formerly also known by its de-acronym’d “Consumer Electronics Show” moniker, although the longer-winded version is apparently no more) from the remote comfort of my home office. There are admittedly worse places to visit than Las Vegas, especially given its newfound coolness courtesy of the Sphere (which I sadly have yet to experience personally):

That said, given the option to remain here, I’ll take it any day, realizing as I say this that it precludes on-camera cameos…which, come to think of it, is a plus for both viewers and myself!

(great job, Aalyia!)

Anyhoo, I could spend the next few thousand words (I’m currently guesstimating, based on repeated past experience, which in some years even necessitated a multi-part writeup series) telling you about all the new and not-new-but-maturing products and technologies showcased at the show. I’ll still do some of that, in part as case-study examples of bigger-picture themes. But, to the title of this writeup, this year I wanted to start by stepping back and discussing three overriding themes that tainted (at least in my mind) all the announcements.

Safety

(Who among you is, like me, old enough to recognize this image’s source without cheating by clicking through first?)

A decade-plus ago, I told you the tale of my remote residence-located Linksys router that had become malware-infected:

Ever since then, I’ve made it a point to collect news tidbits on vulnerabilities and the attack vectors that subsequently exploit them, along with manufacturers’ subpar compromise responses. It likely won’t surprise you to learn that the rate of stories I’ve accumulated has only accelerated over time, as well as broadened beyond routers to encompass other LAN and WAN-connected products. I showcased some of them in two-part coverage published five years ago, for example, and disassembled another (a “cloud”-connected NAS) just a few months back.

The insecure-software situation has become so rampant, in fact, that the U.S. Federal Communications Commission (FCC) just unveiled a new program and associated label, the U.S. Cyber Trust Mark, intended to (as TechCrunch describes it) “help consumers make more informed decisions about the cybersecurity of the internet-connected products they bring into their homes.” Here’s more, from Slashdot’s pickup of the news, specifically referencing BleepingComputer’s analysis:

It’s designed for consumer smart devices, such as home security cameras, TVs, internet-connected appliances, fitness trackers, climate control systems, and baby monitors, and it signals that the internet-connected device comes with a set of security features approved by the National Institute of Standards and Technology (NIST). Vendors will label their products with the Cyber Trust Mark logo if they meet NIST cybersecurity criteria. These criteria include using unique and strong default passwords, software updates, data protection, and incident detection capabilities. Consumers can scan the QR code included next to the Cyber Trust Mark labels for additional security information, such as instructions on changing the default password, steps for securely configuring the device, details on automatic updates (including how to access them if they are not automatic), the product’s minimum support period, and a notification if the manufacturer does not offer updates for the device.

Candidly, I’m skeptical that this program will be successful, even if it survives the upcoming Presidential administration transition (speaking of which: looming trade war fears weighed heavily on folks’ minds at the show) and in spite of my admiration for its honorable intention. As reader “Thinking_J” pointed out in response to my recent teardown of a Bluetooth receiver that has undergone at least one mid-life internal-circuits switcheroo, the FCC essentially operates on the “honor system” in this and similar regards after manufacturers gain initial certification.

One of the root causes of such vulnerabilities, IMHO, is reliance on open-source code, even though doing so may ironically also improve initial software quality. Requoting myself:

Open-source software has some compelling selling points. For one thing, it’s free, and the many thousands of developer eyeballs peering over it generally result in robust code. When a vulnerability is discovered, those same developers quickly fix it. But among those thousands of eyeballs are sets with more nefarious objectives in mind, and access to source code enables them to develop exploits for unpatched, easily identified software builds.

I also suspect that at least some amount of laissez-faire tends to creep into the software-development process when you adopt someone else’s code versus developing your own, especially if you subsequently “forget” to make proper attribution and take other appropriate action regarding that adoption. The result is a tendency to overlook the need to maintain that portion of the codebase as exploits and broader bugs in it are discovered and dealt with by the developer community or, more often than not, the one-and-only developer.

Sometimes, though, code-update neglect is intentional:

Consumer electronics manufacturers as a rule make scant (if any) profit on each unit sold, especially after subtracting the “percentage” taken by retailer intermediaries. Revenue tangibly accrues only as a function of unit volume, not from per-unit profit margin. Initial-sale revenue is sometimes supplemented by after-sale firmware-unlocked feature set updates, services, and other add-ons. But more often than not, a manufacturer’s path to ongoing fiscal stability involves straightforwardly selling you a brand-new replacement/upgrade unit down the road; cue obsolescence by design for the unit currently in your possession.

Which leads to my next topic…

Longevity

 

One of the products “showcased” in my August 2020 writeup didn’t meet its premature demise due to intentionally unfixed software bugs (as was the case for a conceptually similar product in Belkin’s Wemo line, several examples of which I owned when the exploit was announced). Instead, its early expiration was the result of an intentional termination of the associated “cloud” service done by its retail supplier, Best Buy (Connect WiFi Smart Plug shown above).

More recently, I told you about a similar situation (subsequently resolved positively via corporate buyout and resurrection, I’m happy to note) involving SmartLabs’ various Insteon-branded powerline networking products. Then there was the Spotify Car Thing, which I tore down in early 2023. And right before this year’s CES opened its doors to the masses, ironically, came yet another case study example of the ongoing disappointing trend: the $800 (nope, no refunds) Moxie “emotional support” robot, although open source (which, yes, I know I just critiqued earlier here) may yet come to the rescue for the target 5-10 year old demographic:

Government oversight to the rescue, again (?). Here’s a summary, from Slashdot’s highlight:

Nearly 89% of smart device manufacturers fail to disclose how long they will provide software updates for their products, a Federal Trade Commission staff study found this week. The review of 184 connected devices, including hearing aids, security cameras and door locks, revealed that 161 products lacked clear information about software support duration on their websites.

 Basic internet searches failed to uncover this information for two-thirds of the devices. “Consumers stand to lose a lot of money if their smart products stop delivering the features they want,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. The agency warned that manufacturers’ failure to provide software update information for warranted products costing over $15 may violate the Magnuson Moss Warranty Act. The FTC also cautioned that companies could violate the FTC Act if they misrepresent product usability periods. The study excluded laptops, personal computers, tablets and automobiles from its review.

Repeating what I said earlier, I’m skeptical that this effort will be successful, despite my admiration for its honorable intentions. In no small part, my pessimism stems from recent US election results, given that Republicans have (historically, at least) been disproportionately pro-business, to the detriment of consumer rights. That said, if a product’s phase-out resulted from something other than the shutdown of a proprietary “cloud” service, such as (for example) a no-longer-maintained (or no-longer-available, for that matter) proprietary application, the hardware might still be usable if it could alternatively be configured and controlled using industry-standard command and communications protocols.

Which leads to my next topic…

Interoperability

 

Those of you who read to the bitter end of my recently published “2024 look-back” tome might have noticed a bullet list of topics there that I’d originally also hoped to cover but eventually decided to save for later. The first topic on the list, “Matter and Thread’s misfires and lingering aspirations,” I held back not just because I was approaching truly ridiculous wordcount territory but also because I suspected I’d have another crack at it a short time later, at CES to be precise.

I was right; that time is now. Matter, for those of you not already aware, is:

…a freely available connectivity standard for smart home and IoT (Internet of Things) devices. It aims to improve interoperability and compatibility between different manufacturers and security, always allowing local control as an option.

And Thread? I thought you’d never ask. It’s:

…an IPv6-based, low-power mesh networking technology for Internet of things (IoT) products…

Often used as a transport for Matter (the combination being known as Matter over Thread), the protocol has seen increased use for connecting low-power and battery-operated smart-home devices.

Here’s what I wrote about Matter and Thread a year ago, in my 2024 CES discourse:

The Matter smart home communication standard, built on the foundation of the Thread (based on Zigbee) wireless protocol, had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.

 Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin Wemo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors.

 Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers (for which, to be precise, and as my earlier Blink example exemplifies, conventional web browser access, vs a proprietary app, is even a bridge too far)…Suffice it to say that I’m skeptical about Matter and Thread’s long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie.

A year later, is the situation better? Not really, candidly. For a more in-depth supplier-sourced perspective, I encourage you to read Aalyia’s coverage of her time spent last week in Silicon Labs’ product suite, including an interview with Daniel Cooley, CTO of the company. Cooley is spot-on when he notes that “it is not unusual for standards adoption to progress slower than desired.” I’ve seen this same scenario play out plenty of times in the past, and Matter and Thread (assuming they eventually achieve widespread success) won’t be the last. I’m reminded, for example, of a quote attributed to Bill Gates, that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10.”

Cooley is also spot-on when he notes that Matter and Thread don’t necessarily need to go together; the Matter connectivity standard can alternatively use Ethernet (either wireless, aka Wi-Fi, or wired) for transport, along with Bluetooth Low Energy for initial device setup purposes (and speaking of wireless smart home network protocols, by the way, a quick aside: check out Z-Wave’s just-announced long range enhancements). And granted, there has been at least progress with both Matter (in particular) and Thread over the past year.

Version 1.4 of the Matter specification, announced last November, promises (quoting from Ars Technica’s coverage) “more device types, improvements for working across ecosystems [editor note: a concept called “Enhanced Multi-Admin”], and tools for managing battery backups, solar panels, and heat pumps”, for example. And at CES, the Connectivity Standards Alliance (CSA), which runs Matter, announced that Apple, Google, and Samsung will accept its certification results for their various “Works With” programs, too. That said, Amazon is notably absent from the CSA’s fast-track certification list. And more generally, Ars Technica was spot-on with the title of its writeup, “Matter 1.4 has some solid ideas for the future home—now let’s see the support.” See you back here this same time next year?

The Rest of the Story

(no, I don’t know what ballet has to do with smart rings, either)

Speaking of “approaching truly ridiculous wordcount territory”, I passed through 2,000 words a couple of paragraphs back, so I’m going to strive to make the rest of this piece more concise. Looking again at the list of potential coverage technology and product topics I scribbled down a few days ago, partway through CES, and after subtracting out the “Matter and Thread” entry I just discussed, I find…16 candidates left. Let’s divide that in two, shall we? Without further ado, and in no particular order save for how they initially streamed out of my noggin:

  • Smart glasses: Ray-Ban and Meta’s jointly developed second-generation smart glasses were one of the breakout consumer electronics hits of 2024, with good (initial experience, at least) reason. Their constantly evolving AI-driven capabilities are truly remarkable, on top of the first generation’s foundational still and video image capture and audio playback support. Unsurprisingly, therefore, a diversity of smart glasses implementations in various function and price-point options, from numerous suppliers and in nonfunctional mockup, prototype, and already-in-production forms, populated 2025 CES public booths and private meeting rooms alike in abundance. I actually almost bought a pair of Ray-Ban Meta glasses during Amazon’s Black Friday…err…week-plus promotion to play around with for myself (and subsequently cover here at EDN, of course). But I decided to hold off for the inevitable barely-used (if at all) eBay-posting markdowns to come. Why? Well, the recent “publicity” stemming from the New Orleans tragedy didn’t help (and here I thought “glassholes” were bad). Even though Meta Ray-Ban offers product options with clear lenses, not just sunglasses, most folks don’t (and won’t) wear glasses all the time, not to mention that battery life limitations currently preclude doing so anyway (and don’t get me started on the embedded batteries’ inherent obsolescence by design). And when folks do wear them, they’re fashion statements. Multiple pairs for various outfits, moods, styles (invariably going in and out of fashion quickly) and the like are preferable, something that’s not fiscally feasible for the masses when the glasses cost several hundred dollars apiece.
  • Smart rings: This wearable health product category is admittedly intriguing because unlike glasses (or watches, for that matter), rings are less obvious to others, therefore it’s less critical (IMHO, at least) for the wearer to perfectly match them with the rest of the ensemble…plus you have 10 options of where to wear one (that said, does anyone put a ring on their thumb?). There were quite a few smart rings at CES this year, and next year there’ll probably be more. Do me a favor; before you go further, please go read (but come back afterwards!) The Verge’s coverage of Ultrahuman’s Rare ring family (promo videos at the beginning of this section). The snark is priceless; it was the funniest piece of 2025 CES coverage I saw!
  • HDMI: Version 2.2 is en route, with higher bandwidth (96 Gbps) now supportive of 16K resolution displays (along with 4K displays at head-splitting 480 fps), among other enhancements. And there’s a new associated “Ultra96” cable, too. At first, I was a bit bummed when I heard this, due to the additional infrastructure investment that consumers will need to shoulder. But then I thought back to all the times I’d grabbed a random legacy cable out of my box o’HDMI goodies only to discover that, for example, it only supported 1080p resolution, not 4K…even though the next one I pulled out of the box, which looked just like its predecessor down to the exact same length, did 4K without breaking a sweat. And I decided that maybe making a break from HDMI’s imperfect-implementation past history wasn’t such a bad idea, after all…
  • 3D spatial audio: Up to this point, Dolby’s pretty much had the 3D spatial audio (which expands—bad pun intended—beyond conventional surround sound to also encompass height) stage all to itself with Atmos, but on the eve of CES, Samsung unveiled the latest fruits of its partnership with Google to promulgate an open source alternative called IAMF, for Immersive Audio Model and Formats, now also known by its marketing moniker, “Eclipsa Audio”. In retrospect, this isn’t a terrible surprise; for high-end video, Samsung has already settled on HDR10+ versus Dolby Vision. But I have questions, specifically as to whether Google and Samsung are really going to be able to deliver something credible that doesn’t also collide with Dolby’s formidable patent portfolio. And I also gotta say that the fact that nobody at Samsung’s booth was able to answer one reporter’s questions doesn’t leave me with a great deal of early-days confidence.
  • TVs: Speaking of video, I mentioned more than a decade ago that Chinese display manufacturers were beginning to “make serious hay” at their South Korean competitors’ expense, much as those same South Korea-based companies had previously done to their Japanese competitors (that said, it sure was nice to see Panasonic’s displays back at CES!). To wit, TCL has become a particularly formidable presence in the TV market. While it and its competitors are increasingly using viewer-customized ads (logging and uniquely responding to the specific content you’re streaming at the time) and other smart TV “platform” revenue enhancements to counterbalance oft-unprofitable initial hardware prices, TCL takes it to the next level with remarkably bad AI-generated drivel shown on its own “free” (translation: advertising-rife) channel. No thanks, I’ll stick with reruns of The Office. That said, the on-the-fly auto-translation capabilities built into Samsung’s newest displays (along with several manufacturers’ earbuds and glasses) were impressive.
  • Qi: Good news/bad news on the wireless charging front. Bad news first: the Qi Consortium recently added the “Qi Ready” category to its Qi2 specification suite. What this means, simply stated, is that device manufacturers (notably, at least at the moment, of Android smartphones) no longer need to embed orientation-optimization magnets in the devices themselves. Instead, as I’m already doing with my Pixel phones, they can alternatively rely on magnets embedded in accompanying cases. On the one hand, as Apple’s MagSafe ecosystem already shows, if you put a case on a phone it needs to have magnets anyway, because the ones in the phone aren’t strong enough to work through the added intermediary case material. And—I dunno—maybe the magnets add notable bill-of-materials cost? Or they interfere with the phone’s speakers, microphones and the like? Or…more likely (cynically, at least), the phone manufacturers see branded cases-with-magnets as lucrative upside revenue streams? Thoughts, readers? Now for the good news: auto-movable coils to optimize device orientation! How cool is that?
  • Lithium battery-based storage systems: Leading suppliers are aggressively expanding beyond portable devices into full-blown home backup systems. EcoFlow’s monitoring and management software looks quite compelling, for example, although I think I’ll skip the solar cell-inclusive hat. And Jackery’s now also selling solar cell-augmented roof tiles.
  • Last but not least: (the) RadioShack (licensed brand name, to be precise) is back, baby!
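The HDMI 2.2 bandwidth figures in the bullet list above are easy to sanity-check with quick arithmetic; here's a minimal sketch, assuming uncompressed 8-bit RGB (24 bits per pixel) and ignoring blanking/encoding overhead, which real links add (extreme modes also typically rely on Display Stream Compression):

```python
# Raw (uncompressed) video bandwidth for 4K at 480 fps, assuming
# 24 bits per pixel (8-bit RGB). Blanking and line overhead are
# deliberately ignored here; real HDMI links carry more.

def raw_video_gbps(width, height, fps, bits_per_pixel):
    """Raw pixel bandwidth in Gbit/s, ignoring blanking and encoding overhead."""
    return width * height * fps * bits_per_pixel / 1e9

bw_4k480 = raw_video_gbps(3840, 2160, 480, 24)
print(f"4K @ 480 fps: {bw_4k480:.1f} Gbps raw")  # ~95.6 Gbps
```

That lands just under the 96 Gbps Ultra96 ceiling, which is presumably not a coincidence.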

And, now well past 3,000 words, I’m putting this one to bed, saving discussions on robots, Wi-Fi standards evolutions, full-body scanning mirrors with cameras (!!), the latest chips, inevitable “AI” crap and the like for another day. I’ll close with iFixit’s annual “worst of show” coverage:

And with that, I look forward to your thoughts on the things I discussed, saved for later and overlooked alike in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The 2025 CES: Safety, Longevity and Interoperability Remain a Mess appeared first on EDN.

Automotive insights from CES 2025

Sat, 01/11/2025 - 03:46

OEMs are shifting from installing black-box solutions that performed specialized functions in the conventional domain architecture to a zone architecture with a function-agnostic processing backbone, where each node handles location-specific data. Along with this trend, there is a push toward optimizing sensor functions, fusing multimodal input data with ML for contextual awareness. Sensors no longer serve one function; instead, they can be leveraged across a series of automotive systems, from driver monitoring systems (DMSs) to smart door access. As a result, camera/sensor count is minimized and power consumption reduced. A tour of several booths at CES 2025 showed some of the automotive-oriented solutions.

Automotive lighting

Microchip’s intelligent smart embedded LED (ISELED), ISELED light and sensor network (ILaS), and Macroblock lighting solutions can be seen in Figure 1. The ISELED protocol was developed to overcome the issue of requiring an external IC per LED to control the color/brightness of individual LEDs. Instead, Microchip has integrated an intelligent ASIC into each LED where the entire system can be controlled with a simple 16-bit MCU. The solution allows for more styling control for aesthetics with additional use cases such as broadcasting the status of a car via text that appears on display-based matrix lighting.

Figure 1: Microchip ISELED lighting solution in which all of these LEDs are individually addressable, allowing designers to change the color/brightness of each LED.
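To illustrate the per-LED addressing idea, here is a hypothetical sketch of how a controller might command a daisy-chained smart-LED bus like ISELED, one (address, R, G, B) command per device. The 4-byte framing is a toy format of my own; the real protocol's framing and CRC details are omitted:

```python
# Toy per-LED command stream for a daisy-chained addressable LED bus.
# The 4-byte packing below is illustrative only, not ISELED's actual
# wire format.

def led_command(address, r, g, b):
    """Pack one LED command into 4 bytes (toy framing)."""
    return bytes([address & 0xFF, r & 0xFF, g & 0xFF, b & 0xFF])

# Paint a 16-LED strip with a red-to-blue gradient in one burst.
burst = b"".join(led_command(addr, 255 - addr * 16, 0, addr * 16) for addr in range(16))
print(len(burst))  # 64 bytes for 16 LEDs
```

The point is that a single small MCU can drive the whole chain serially, rather than needing one external driver IC per LED.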

ADI’s 10BASE-T1S Ethernet to edge bus (E2B) technology has been used as a body control and automotive lighting connectivity solution. And, while this solution is not directly related to LED control, it can be used to update OEM automotive lighting systems that leverage the 10BASE-T1S automotive bus.

In-cabin sensing systems

One of the more pervasive themes was child presence detection (CPD) and occupancy monitoring system (OMS) products, with many companies showing off their ultra-wideband (UWB) detection and/or ranging tech and 60-GHz radar chips. The inspiration here comes from the incessant pressure on OEMs to meet stringent safety regulations. For instance, the Euro NCAP advanced program will only offer rewards to OEMs for direct-sensing CPD systems. For UWB sensing, the typical setup involved four UWB anchors placed outside the vehicle and two on the inside to detect a phone equipped with UWB. The NXP booth’s automotive UWB demo can be seen in Figure 2. As shown in the image, the UWB radar can identify the distance of the phone from the UWB anchor and unlock the car from the outside using the UWB ranging feature with time-of-flight (ToF) measurements. The very same principles can be applied to smart door locks and train stations, allowing passengers with pre-purchased train tickets to pass through the turnstile from outside the station.

Figure 2: The NXP automotive UWB radar smart car access solution.
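The ToF ranging underlying demos like this reduces to simple arithmetic: distance is half the measured round-trip time multiplied by the speed of light. A minimal sketch, where the 13.3 ns round trip is an illustrative value of mine, not a figure from the NXP demo:

```python
# Two-way (round-trip) time-of-flight to one-way distance.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance_m(round_trip_s):
    """One-way distance from a round-trip time-of-flight measurement."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

print(f"{tof_distance_m(13.3e-9):.2f} m")  # ~1.99 m from anchor to phone
```

With UWB's nanosecond-scale pulse timing, centimeter-level ranging resolution follows directly from this relationship.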

Qorvo also showed its UWB solution; Figure 3 shows one UWB anchor on a toy car for demonstration purposes. The image also highlights another ADAS application of radar (UWB or 60 GHz): respiration and heartbeat detection.

An engineer at NXP gave a basic explanation of the process: the technology measures signal reflections from occupants to detect, for instance, how often the chest is expanding/contracting to measure breathing. This allows for direct sensing of occupants with algorithms that can discern whether or not a child is present in the vehicle, offering CPD, OMS, intrusion and proximity alerts, and a host of other functions with the established sensor infrastructure. There is no clear answer yet on how many wireless chips a vehicle will need, but there is a clear requirement that sensors become more intelligent to minimize part count; a single radar chip could, for example, eliminate five in-seat weight sensors.

Figure 4: Qorvo’s UWB keyless entry and vitals monitoring solutions in partnership with other companies.
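The breathing-rate idea the NXP engineer describes can be sketched in a few lines. This is a deliberately minimal stand-in: the signal is synthetic (0.25 Hz, i.e., 15 breaths/min) and the rate is estimated by counting zero crossings, whereas a real radar pipeline works on phase/Doppler data with far more filtering:

```python
import math

# Estimate breathing rate from a sampled chest-displacement signal by
# counting zero crossings (two crossings per full breath cycle).

def breaths_per_minute(displacement, sample_rate_hz):
    crossings = sum(
        1 for a, b in zip(displacement, displacement[1:]) if (a < 0) != (b < 0)
    )
    duration_min = len(displacement) / sample_rate_hz / 60.0
    return crossings / 2.0 / duration_min

fs = 20.0  # Hz sample rate
# One minute of synthetic chest motion at 0.25 Hz (15 breaths/min),
# phase-offset so the signal starts mid-cycle.
chest = [math.sin(2 * math.pi * 0.25 * (i / fs) + 0.3) for i in range(int(fs * 60))]
print(breaths_per_minute(chest, fs))  # 15.0
```
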

TI’s CPD, OMS, and driver monitoring system (DMS) can be seen in Figure 5, combining the company’s 60-GHz radar chip with a camera. Naturally, the shorter-wavelength 60-GHz radar offers much finer range resolution, so this system would likely be more accurate in CPD applications, potentially producing fewer false positives. Possibly the most obvious benefit of 60-GHz radar, however, is that a single module replaces the six UWB modules for CPD, OMS, intrusion detection, gesture detection, and so on. This does not entirely sidestep UWB technology, though; UWB’s ranging capability enables accurate smart door access, something that may be impractical at 60 GHz, especially considering the atmospheric absorption at that frequency.

Figure 5: TI’s CPD, OMS, and driver monitoring system (DMS) CES demo.

AD and surround view systems

Automotive surround view cameras for AD and ADAS functions were also presented in a number of booths. Microchip’s can be seen in Figure 6, where its serializers are used in three cameras that can transmit up to 8 Gbps. The Microchip deserializers are configured to receive the video data and aggregate it via the Automotive SerDes Alliance Motion Link (ASA-ML) standard to the central compute, or high-performance computer (HPC), mimicking a zonal architecture.

Figure 6: Microchip’s ASA-ML standard 360° surround view solution.

ADI also used a serializer/deserializer (SerDes) solution with a gigabit multimedia serial link (GMSL) demo. GMSL’s claim to fame is its lightweight nature: the single-strand solution transports up to 12 Gbps over a single bidirectional cable, shaving harness weight.

Figure 7:  ADI GMSL demo aggregating feeds from six cameras into a deserializer board and going into a single MIPI port on the Jetson HPC-platform.
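A back-of-the-envelope link budget shows why a single 12-Gbps link can aggregate six camera feeds. The camera format here (1920x1080 at 30 fps, 12-bit raw) is an assumed example of mine, not the demo's actual configuration:

```python
# Raw per-camera bandwidth and the aggregate for six such cameras,
# ignoring protocol/framing overhead.

def camera_mbps(width, height, fps, bits_per_pixel):
    """Raw per-camera bandwidth in Mbit/s."""
    return width * height * fps * bits_per_pixel / 1e6

per_cam = camera_mbps(1920, 1080, 30, 12)
aggregate_gbps = 6 * per_cam / 1000.0
print(f"{per_cam:.0f} Mbps per camera, {aggregate_gbps:.2f} Gbps for six")
```

Roughly 4.5 Gbps aggregate under these assumptions, fitting comfortably within GMSL's 12 Gbps even before compression.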

Using VLMs for AD

Ambarella, a company that specializes in AI vision processors, showed a particularly interesting AD demo that integrated an LLM into the stack. The technology was originally developed by Vislab, an Italian startup that is now an R&D automotive center under Ambarella. The system consisted of six cameras, five radars, and Ambarella’s CV3 automotive domain controller for L2+ to L4 autonomy. The use of the vision language model (VLM) LLaVA-OneVision allowed for more context-aware decision making.

Vislab founder Alberto Broggi hosted the demo and explained the benefits of leveraging an LLM in this particular use case: “Suppose you have the best perception in the world, so you can perceive everything; you can understand the position of cars, locate pedestrians, and so on. You will still have problems, because there are situations that are ambiguous.” He continued by describing a few of these situations: “If you have a car in front of you in your lane, you don’t really know whether or not you can overtake because it depends on the situation. If it’s a broken-down vehicle, you can obviously overtake it. If it’s a vehicle that is waiting for a red light, you can’t. So you really need some higher-level description and context.”

Figure 8 and the video below show one such example of the contextual awareness that a VLM can offer.

Figure 8: Ambarella VLM AD demo with use case offering some contextual-awareness and suggestions.  
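Broggi's overtake example can be caricatured in code. This is only a toy stand-in for the judgment a real system delegates to the VLM (LLaVA-OneVision in Ambarella's stack); the keyword rules and scene strings below are my own placeholders, not anything from the demo:

```python
# Toy illustration of context-dependent overtaking: perception alone
# reports "car ahead, stationary"; a language-level scene description
# disambiguates whether overtaking is appropriate. A real system would
# query a VLM rather than match keywords.

def can_overtake(scene_description: str) -> bool:
    blocking_cues = ("red light", "traffic light", "stop sign", "queue")
    if any(cue in scene_description for cue in blocking_cues):
        return False
    return "broken down" in scene_description or "parked" in scene_description

print(can_overtake("car ahead, broken down on the shoulder"))  # True
print(can_overtake("car ahead, waiting at a red light"))       # False
```

Both scenes look identical to a geometry-only perception stack (a stationary vehicle in the ego lane); only the higher-level description separates them.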

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content


The post Automotive insights from CES 2025 appeared first on EDN.

CES 2025 coverage

Sat, 01/11/2025 - 00:00

Editors from EDN and our AspenCore sister publications are covering the Consumer Electronics Show (CES). Scroll down to see coverage of this year’s CES! 

CES 2025: Day 2 Wrap and Interview with EdgeCortix’s CEO

A constant theme at CES 2025 this week has been around the deployment of AI in all kinds of applications, how to drive as much intelligence as possible to the edge, sensor fusion and making everything smart. We saw many large and small companies developing technologies and products to optimize this process, aiming to get more “smarts” or performance with less effort and power.

CES 2025: Approaches towards hardware acceleration

It is clear that support for some kind of hardware acceleration has become paramount for success in breaking into the intelligent embedded edge. Company approaches to the problem run the full gamut from hardware accelerated MCUs with abundant software support and reference code, to an embedded NPU.

CES 2025: It’s All About Digital Coexistence, and AI is Real

CES 2025 commenced in Las Vegas, Nev., on Sunday at the Mandalay Bay Convention Center for the trade media with the Consumer Technology Association’s annual tech trends survey and forecast. Plus, there was a sneak preview provided to some of the exhibiting companies at the CES Unveiled event.

Integration of AI in sensors prominent at CES 2025

Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year’s CES in Las Vegas, Nevada.

Software-defined vehicle (SDV): A technology to watch in 2025

Software-defined vehicle (SDV) technology has been a prominent highlight in the quickly evolving automotive industry. But how much of it is hype, and where is the real and tangible value? CES 2025 in Las Vegas will be an important venue to gauge the actual progress this technology has made with a motto of bringing code on the road.

CES 2025: Wirelessly upgrading SDVs

SDVs rethink underlying vehicle architecture so that cars are broken into zones that will directly service the vehicle subsystems that surround it locally, cutting down wiring, latency, and weight. Another major benefit of this is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with its complexities.

CES 2025: Moving toward software-defined vehicles

TI’s automotive innovations are currently focused in powertrain systems; ADAS; in-vehicle infotainment (IVI); and body electronics and lighting. The recent announcements fall into the ADAS with the AWRL6844 radar sensor as well as IVI with the AM275 and AM62D processors and the class-D audio amplifier.

CES 2025: Day 1 Recap with Synaptics, Ceva

EE Times and AspenCore staff are on-site at CES 2025, providing expert coverage on the latest and greatest developments at one of the largest tech events in the world.

CES 2025: A Chat with Siemens EDA CEO Mike Ellow

Siemens showcased its latest PAVE360 digital twin solution this year at CES 2025, lowering the barrier between design efforts that are typically siloed. EE Times had an opportunity to chat with Siemens EDA CEO Mike Ellow about how this approach to design is relevant for the semiconductor industry—especially considering the recent uptick in using AI tools at every level of a system to dynamically assess the trickle up/down effects of design adjustments. 

CES 2025: An interview with Si Labs’ Daniel Cooley

At the forefront of many of the CES wireless solutions were Wi-Fi’s newest iteration (Wi-Fi 6), BLE, and BLE Audio, given their already-established place in consumer devices. A chat with Silicon Labs CTO Daniel Cooley illuminated the company’s presence and future in IoT and the intelligent edge.

 


The post CES 2025 coverage appeared first on EDN.

Integration of AI in sensors prominent at CES 2025

Fri, 01/10/2025 - 15:20

Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year’s CES in Las Vegas, Nevada.

See full story at EDN’s sister publication, Planet Analog.

Related Content


The post Integration of AI in sensors prominent at CES 2025 appeared first on EDN.

CES 2025: Approaches towards hardware acceleration

Fri, 01/10/2025 - 06:55

Edge computing has naturally been a hot topic at CES with companies highlighting a myriad of use cases where the pre-trained edge device runs inference locally to produce the desired output, never once interacting with the cloud. The complexity of these nodes has grown to not only include multimodal support with the fusion and collaboration between sensors for context-aware devices but also multiple cores to ratchet up the compute power.

Naturally, any hardware acceleration has become desirable, with embedded engineers craving solutions that ease the design and development burden. The solutions vary, with many veering toward developing applications with servers in the cloud that are then virtualized or containerized to run at the edge. Ultimately, there is no one-size-fits-all solution for any edge compute application.

It is clear that support for some kind of hardware acceleration has become paramount for breaking into the intelligent embedded edge. Company approaches to the problem run the full gamut, from hardware-accelerated MCUs with abundant software support and reference code to embedded NPUs.

Table 1 highlights this with a list of a few companies and their hardware acceleration support.

| Company | Hardware acceleration | Implemented in | Throughput | Software |
|---|---|---|---|---|
| NXP | eIQ Neutron NPU | Select MCX, i.MX RT crossover MCUs, and i.MX applications processors | 32 Ops/cycle to over 10,000 Ops/cycle | eIQ Toolkit, eIQ Time Series Studio |
| STMicroelectronics | Neural-ART Accelerator NPU | STM32N6 | Up to 600 GOPS | ST Edge AI Suite |
| Renesas | DRP-AI | RZ/V2MA, RZ/V2L, RZ/V2M | — | DRP-AI Translator, DRP-AI TVM |
| Silicon Labs | Matrix Vector Processor, AI/ML co-processor | BG24 and MG24 | — | MVP Math Library API, partnership with Edge Impulse |
| TI | NPU | TMS320F28P55x, F29H85x, C2000 and more | Up to 1200 MOPS (on 4bWx8bD); up to 600 MOPS (on 8bWx8bD) | Model Composer GUI or Tiny ML Modelmaker |
| Synaptics | NPU | Astra (SL1640, SL1680) | 1.6 to 7.9 TOPS | Open software with complete GitHub project |
| Infineon | Arm Ethos-U55 micro-NPU processor | PSOC Edge MCU series, E81, E83 and E84 | — | ModusToolbox |
| Microchip | AI-accelerated MCU, MPU, DSC, or FPGA | 8-, 16- and 32-bit MCUs, MPUs, dsPIC33 DSCs, and FPGAs | — | MPLAB Machine Learning Development Suite, VectorBlox Accelerator Software Development (for FPGAs) |
| Qualcomm | Hexagon NPU | Oryon CPU, Adreno GPU | 45 TOPS | Qualcomm Hexagon SDK |

Table 1: Various companies' approaches to hardware acceleration.

Synaptics, for instance, has its Astra platform, which is beginning to incorporate Google's multi-level intermediate representation (MLIR) framework. "The core itself is supposed to take in models and operate in a general-purpose sense. It's sort of like an open RISC-V core-based system, but we're adding an engine alongside it, so the compiler decides whether it goes to the engine or whether it works in a general-purpose sense," said Vikram Gupta, senior VP and general manager of IoT processors and chief product officer. "We made a conscious choice that we wanted to go with open frameworks. So, whether it's a PyTorch model or a TFLite model, it doesn't matter. You can compile it to the MLIR representation, and then from there go to the back end of the engine." One of their CES demos can be seen in Figure 1.

Figure 1: A smart camera solution showing the Grinn SoM that uses the Astra SL1680 and software from Arcturus to provide both identification and tracking. New faces are assigned an ID and an associated confidence interval that adjusts according to the distance from the camera itself.
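The "compiler decides" idea Gupta describes can be sketched as a simple operator-partitioning pass. This is a hypothetical illustration, not Synaptics' actual toolchain: the op names and the supported-op set are made-up assumptions.

```python
# Hypothetical sketch of a compiler back end partitioning a model's
# operator list between an NPU engine and a general-purpose core.
# The supported-op set below is an illustrative assumption.
NPU_SUPPORTED_OPS = {"conv2d", "depthwise_conv2d", "matmul", "relu"}

def partition(ops):
    """Assign each op to the NPU engine if supported, else the CPU."""
    return {op: ("npu" if op in NPU_SUPPORTED_OPS else "cpu") for op in ops}

model_ops = ["conv2d", "relu", "softmax", "matmul", "argmax"]
print(partition(model_ops))
# {'conv2d': 'npu', 'relu': 'npu', 'softmax': 'cpu', 'matmul': 'npu', 'argmax': 'cpu'}
```

In a real MLIR-based flow, this decision happens per operation on the lowered intermediate representation, with unsupported ops falling back to the host core.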

TI showcased its TMS320F28P55x C2000 real-time control MCU series, whose integrated NPU powers an arc-fault detection solution for solar inverter applications. The system performs power conversion while simultaneously running real-time, AI-based arc-fault detection. The solution follows the standard process of obtaining data, labeling it, and training the arc-fault models that are then deployed onto the C2000 device (Figure 2).

Figure 2: TI’s solar arc fault detection edge AI solution

One of Microchip's edge demos detected true touches in the presence of water using its mTouch algorithm in combination with its PIC16LF1559 MCU (Figure 3). Another solution, built in partnership with Edge Impulse, used the FOMO ML architecture to perform object detection in a truck loading bay. Other companies, such as Nordic Semiconductor, have also partnered with Edge Impulse to ease the process of labeling, training, and deploying AI to their hardware. Edge Impulse has also eased the process of leveraging NVIDIA TAO models, adapting well-established AI models to a specific end application on any Edge-Impulse-supported target hardware.

Figure 3: Some of Microchip’s edge AI solutions at CES 2025. Truck loading bay augmented by AI in partnership with Edge Impulse (left) and a custom-tailored Microchip solution using their mTouch algorithm to differentiate between touch and water (right).
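The touch-versus-water problem can be illustrated with a simple heuristic in the spirit of moisture-tolerant capacitive sensing (this is not Microchip's actual mTouch algorithm; the thresholds and electrode counts are invented): a finger produces a large, localized capacitance shift on one electrode, while a water film raises many electrodes by a similar amount.

```python
# Hypothetical moisture-rejection heuristic for a capacitive touch array.
# Thresholds are illustrative assumptions, not mTouch parameters.
def classify(deltas, touch_threshold=50, spread_limit=3):
    """deltas: per-electrode capacitance change, in raw counts."""
    peak = max(deltas)
    if peak < touch_threshold:
        return "no touch"
    # Count electrodes showing a substantial rise.
    active = sum(1 for d in deltas if d > touch_threshold * 0.4)
    # Many electrodes rising together suggests a conductive water film.
    return "water" if active > spread_limit else "touch"

print(classify([2, 3, 120, 8, 1]))     # one strong, localized peak -> touch
print(classify([60, 55, 58, 62, 57]))  # broad, uniform rise -> water
```

Production algorithms add drift compensation and, often, a second sensing mode (e.g., mutual vs. self capacitance) to separate the two cases robustly.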

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content

The post CES 2025: Approaches towards hardware acceleration appeared first on EDN.

Dev kit uses backscatter Wi-Fi for low-power connectivity

Thu, 01/09/2025 - 20:38

HaiLa Technologies has introduced the EVAL2000 development board, featuring its BSC2000 passive backscatter Wi-Fi chip and ST’s STM32U0 MCU. The platform empowers developers and researchers to create ultra-low-power connected sensor applications over Wi-Fi.

The BSC2000 is a monolithic chip that combines analog front-end and digital baseband components to implement HaiLa’s backscatter protocol for 802.11 1-Mbps Direct Sequence Spread Spectrum (DSSS) over Wi-Fi. By using backscattering, it enables low-power communication by reflecting existing Wi-Fi signals instead of generating its own. This allows devices to transmit data with minimal energy consumption. Leveraging readily available, standard Wi-Fi infrastructure, the BSC2000 backscatter Wi-Fi chip collects and transmits sensor data with power efficiency that extends the life of battery-operated sensors.
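The 1-Mbps DSSS mode referenced here spreads each data bit with the 11-chip Barker sequence defined in 802.11, and the receiver despreads by correlating against the same sequence; a backscatter device imposes the chip pattern by switching its reflection coefficient rather than driving a transmitter. A toy round-trip of that spreading scheme (the signal-processing idea only, not HaiLa's protocol implementation):

```python
# 11-chip Barker sequence used by the 802.11 1-Mbps DSSS PHY.
BARKER_11 = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]

def spread(bits):
    """Map each bit to +/-1 and multiply it by the Barker chips."""
    return [chip * (1 if b else -1) for b in bits for chip in BARKER_11]

def despread(chips):
    """Correlate each 11-chip block against the Barker sequence."""
    bits = []
    for i in range(0, len(chips), 11):
        corr = sum(c * r for c, r in zip(chips[i:i + 11], BARKER_11))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1, 1, 0]
assert despread(spread(data)) == data  # round-trips cleanly
```

The correlation gain of the Barker code is part of what makes such low reflected signal power workable at the access point.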

The EVAL2000 development board accelerates prototyping with GPIO, I2C, and SPI sensor interfaces. Sensor integration is handled through firmware on the MCU. The kit also includes an onboard temperature/humidity sensor.

The BSC2000 EVAL2000 development kit is available for preorder, with shipping anticipated for Q1 2025. For more information on the backscatter Wi-Fi chip and development kit, click here.

HaiLa Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Dev kit uses backscatter Wi-Fi for low-power connectivity appeared first on EDN.
