EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 40 minutes 22 seconds ago

Photo tachometer sensor accommodates ambient light

Wed, 04/16/2025 - 16:03

Tachometry, the measurement of the speed of spin of rotating objects, is a common application. Some of those objects, however, have quirky aspects that make them extra interesting, even scary. One such category includes outdoor noncontact sensing of large, fast, and potentially hazardous objects like windmills, waterwheels, and aircraft propellers. The tachometer peripheral illustrated in Figure 1 implements optical sensing using available ambient light that provides a logic-level signal to a microcontroller digital input and is easily adaptable to different light levels and mechanical contexts.

Figure 1 Logarithmic contrast detection accommodates several decades of variability in available illumination.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Safe sensing of large rotating objects is best done from a safe (large) distance, and passive available-light optical methods are the obvious solution. Unless elaborate lens systems are used in front of the detector, the optical signal is apt to have a relatively low amplitude due to the tendency of the rotating object (propeller blade, etc.) to fill only a small fraction of the field of view of simple detectors. This tachometer (Figure 1) makes do with an uncomplicated detector (phototransistor Q1 with a simple light shield) by following the detector with a high-gain, AC-coupled, logarithmic threshold detector.

Q1’s photocurrent produces a signal across Q2 and Q3 that varies by ~500 µV pp for every 1% change in incident light intensity and is roughly (neglecting various tempcos) given by:

V ~ 0.12 log10(Iq1/Io)
Io ~ 10 fA

This approximate log relationship works over a range of nanoamps to milliamps of photocurrent and is therefore able to provide reliable circuit operation despite several orders of magnitude variation in available light intensity. A1 and the surrounding discrete components comprise high gain (80 dB) amplification that presents a 5-Vpp square-wave to the attached microcontroller DIO pin.
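
To get a feel for the numbers, here is a minimal Python sketch that simply evaluates the log relationship above; the 0.12 V/decade slope and ~10-fA Io are taken from the text, while the specific photocurrent values are illustrative:

```python
import math

V_PER_DECADE = 0.12   # ~2 x 60 mV/decade across the Q2/Q3 pair, per the text
I_O = 10e-15          # ~10 fA, per the text

def detector_voltage(i_photo):
    """Approximate logarithmic detector voltage for a given photocurrent."""
    return V_PER_DECADE * math.log10(i_photo / I_O)

# Sensitivity to a 1% change in light (and hence photocurrent): ~500 uV
delta = detector_voltage(1.01e-6) - detector_voltage(1.00e-6)
print(f"Delta-V per 1% light change: {delta * 1e6:.0f} uV")

# The log law compresses many decades of photocurrent into a ~1-V span:
for i in (1e-9, 1e-6, 1e-3):
    print(f"I = {i:.0e} A -> V = {detector_voltage(i):.2f} V")
```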

Programming of the I/O pin internal logic for pulse counting allows a simple software routine to divide the accumulated count by the associated time interval and by the number of counted optical features of the rotating object (e.g., number of blades on the propeller) to produce an accurate RPM reading.
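
As a rough illustration of that software routine (the names and numbers here are hypothetical, not from the design), the arithmetic is just:

```python
def rpm_from_pulse_count(pulse_count, gate_time_s, features_per_rev):
    """Convert an accumulated edge count into RPM.

    pulse_count      -- pulses accumulated by the MCU counter during the gate time
    gate_time_s      -- measurement interval in seconds
    features_per_rev -- optical features per revolution (e.g., propeller blades)
    """
    revs_per_second = pulse_count / (gate_time_s * features_per_rev)
    return revs_per_second * 60.0

# Hypothetical example: 240 pulses counted in 0.5 s from a 3-blade propeller
print(rpm_from_pulse_count(240, 0.5, 3))   # -> 9600 RPM
```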

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


How NoC architecture solves MCU design challenges

Wed, 04/16/2025 - 10:33

Microcontrollers (MCUs) have undergone a remarkable transformation, evolving from basic controllers into specialized processing units capable of handling increasingly complex tasks. Once confined to simple command execution, they now support diverse functions that require rapid decision-making, heightened security, and low-power operation.

Their role has expanded across industries, from managing complex control systems in industrial automation to supporting safety-critical vehicle applications and power-efficient operations in connected devices.

As MCUs take on greater workloads, the conventional bus-based interconnects that once sufficed now limit performance and scalability. Adding artificial intelligence (AI) accelerators, machine learning technology, reconfigurable logic, and secure processing elements demands a more advanced on-chip communication infrastructure.

To meet these needs, designers are adopting network-on-chip (NoC) architectures, which provide a structured approach to data movement, alleviating congestion and optimizing power efficiency. Compared to traditional crossbar-based interconnects, NoCs reduce routing congestion through packetization and serialization, enabling more efficient data flow while reducing wire count.

This is how efficient packetization works in network-on-chip (NoC) communications. Source: Arteris

MCU vendors adopt NoC interconnect

Many MCU vendors relied on proprietary interconnect solutions for years, evolving from basic crossbars to custom in-house NoC implementations. However, increasing design complexity encompassing AI/ML integration, security requirements, and real-time processing has made these solutions costly and challenging to maintain.

Moreover, as advanced packaging techniques and die-to-die interconnects become more common, maintaining in-house interconnects has grown increasingly complex, requiring constant updates for new communication protocols and power management strategies.

To address these challenges, many vendors are transitioning to commercial NoC solutions that offer pre-validated scalability and significantly reduce development overhead. For an engineer designing an AI-driven MCU, an NoC’s ability to streamline communication between accelerators and memory can dramatically impact system efficiency.

Another major driver of this transition is power efficiency. Unlike general-purpose systems-on-chip (SoCs), many MCUs must function within strict power constraints. Advanced NoC architectures enable fine-grained power control through power domain partitioning, clock gating, and dynamic voltage and frequency scaling (DVFS), optimizing energy use while maintaining real-time processing capabilities.

Optimizing performance with NoC architectures

The growing number of heterogeneous processing elements has placed unprecedented demands on interconnect architectures. NoC technology addresses these challenges by offering a scalable, high-performance alternative that reduces routing congestion, optimizes power consumption, and enhances data flow management. NoC enables efficient packetized communication, minimizes wire count, and simplifies integration with diverse processing cores, making it well-suited for today’s MCU requirements.

By structuring data movement efficiently, NoCs eliminate interconnect bottlenecks, improving responsiveness and reducing die area. NoC-based designs can achieve up to 30% higher bandwidth efficiency than traditional bus-based architectures, improving overall performance in real-time systems. This lets MCU designers simplify integration and keep their architectures adaptable for advanced applications in automotive, industrial, and enterprise computing markets.

Beyond enhancing interconnect efficiency, NoC architectures support multiple topologies, such as mesh and tree configurations, to ensure low-latency communication across specialized processing cores. Their scalable design optimizes interconnect density while minimizing congestion, allowing MCUs to handle increasingly complex workloads. NoCs also improve power efficiency through modularity, dynamic bandwidth allocation, and serialization techniques that reduce wire count.

By implementing advanced serialization, NoC architectures can reduce the number of interconnect wires by nearly 50%, as shown in the above figure, lowering overall die area and reducing power consumption without sacrificing performance. These capabilities enable MCUs to sustain high performance while balancing power constraints and minimizing die area, making NoC solutions essential for next-generation designs requiring real-time processing and efficient data flow.

In addition to improving scalability, NoCs enhance safety with features that help toward achieving ISO 26262 and IEC 61508 compliance. They provide deterministic communication, automated bandwidth and latency adjustments, and built-in deadlock avoidance mechanisms. This reduces the need for extensive manual configuration while ensuring reliable data flow in safety-critical applications.

Interconnects for next-generation MCUs

As MCU workloads grow in complexity, NoC architectures have become essential for managing high-bandwidth, real-time automation, and AI inference-driven applications. Beyond improving data transfer efficiency, NoCs address power management, deterministic communication, and compliance with functional safety standards, making them a crucial component in next-generation MCUs.

To meet increasing integration demands, ranging from AI acceleration to stringent power and reliability constraints, MCU vendors are shifting toward commercial NoC solutions that streamline system design. Automated pipelining, congestion-aware routing, and configurable interconnect frameworks are now key to reducing design complexity while ensuring scalability and long-term adaptability.

Today’s NoC architectures optimize timing closure, minimize wire count, and reduce die area while supporting high-bandwidth, low-latency communication. These NoCs offer a flexible approach, ensuring that next-generation architectures can efficiently handle new workloads and comply with evolving industry standards.

Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.

 


Aftermarket drone remote ID: Let’s see what’s inside thee

Tue, 04/15/2025 - 16:01

The term “aftermarket” finds most frequent use, in my experience, in describing hardware bought by owners to upgrade vehicles after they initially leave the dealer lot: audio system enhancements, for example, or more powerful headlights. But does it apply equally to drone accessories? Sure (IMHO, of course). For what purposes? Here’s what I wrote last October:

Regardless of whether you fly recreationally or not, you also often (but not always) need to register your drone(s), at $5 per three-year timespan (per-drone for commercial operators, or as a lump sum for your entire drone fleet for recreational flyers). You’ll receive an ID number which you then need to print out and attach to the drone(s) in a visible location. And, as of mid-September 2023, each drone also needs to (again, often but not always) support broadcast of that ID for remote reception purposes…

DJI, for example, firmware-retrofitted many (but not all) of its existing drones with Remote ID broadcast capabilities, along with including Remote ID support in all (relevant; hold that thought for next time) new drones. Unfortunately, my first-generation Mavic Air wasn’t capable of a Remote ID retrofit, or maybe DJI just didn’t bother with it. Instead, I needed to add support myself via a distinct attached (often via an included Velcro strip) Remote ID broadcast module.

I’ll let you go back and read the original writeup to discern the details behind my multiple “often but not always” qualifiers in the previous two paragraphs, which factor into one of this month’s planned blog posts. But, as I also mentioned there, I ended up purchasing Remote ID broadcast modules from two popular device manufacturers (since “embedded batteries don’t last forever, don’cha know”), Holy Stone and Ruko. And…

I also got a second Holy Stone module (since this seems to be the more popular of the two options) for future-teardown purposes.

The future is now; here’s a “stock” photo of the device we’ll be dissecting today, with dimensions of 1.54” x 1.18” x 0.51”/3.9 x 3 x 1.3 cm and a weight of 13.9 grams (14.2 grams total, including Velcro mounting strips) and a model number variously reported as 230218 and HSRID01:

Some outer box shots to start (I’ve saved you from boring photos of the blank sides):

And opening the box, its contents, with our victim in the middle, within a cushioned envelope:

At bottom is the user manual; I can’t find a digital copy of it on the Holy Stone support site, but Manuals+ hosts it in both HTML and PDF formats. You can also find this documentation (among other interesting info) on the FCC website; the FCC ID, believe it or not, is 2AJ55HOLYSTONEBM. At top is the Velcro mounting pair, also initially cushion-packaged (for unknown reasons):

And now, fully freed from its prior captivity, is our patient, as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (once again, I’ve intentionally saved you from exposure to boring blank-side shots):

A note on this next one: the USB-C port shown is used to recharge the embedded battery:

Prior to disassembly, I plugged the device into my Google Pixel Buds Pro earbuds charging cable (which has USB-C connectors on both ends) to test charge functionality, but the left-side battery indicator LED on the front panel remained un-illuminated. That said, when I punched the device’s front panel power switch, it came to life. The result wasn’t definitive; the battery could have been precharged on the assembly line, with the charging circuitry inside still inoperable.

But, on a hunch, I then instead plugged it into the power cable for my Google Chromecast with Google TV, which has USB-A on the power-source end, and the charge-status LED lit up and began blinking, indicative of charging in progress. What’s with Chinese-sourced gear and its non-cognizance of USB Power Delivery negotiation protocols? The user manual shows and discusses an “original charging cable” with USB-A on one end which, had it actually been included as inferred, would have constrained the possible charging-source options. Just sayin’.

Speaking of “circuitry inside,” note the visible screw head at the bottom of this next shot:

That’s, I suspect, our pathway inside. Before we dive in, however, what should we expect to see there, circuitry-wise? Obviously there’s a battery, likely Li-ion in formulation, along with the aforementioned associated charging circuitry for it. There’s also bound to be some sort of system SoC, plus both volatile (RAM) and nonvolatile memory, the latter holding both the program code and user-programmable FAA-assigned Remote ID. Broadcast of that ID can occur over Bluetooth, Wi-Fi or both, via an accompanying antenna. And for geolocation purposes, there’ll need to be a GPS subsystem, comprising both another antenna and a receiver.

Now that the stage is set, let’s get inside, after both removing the previously shown screw and slicing through the serial number sticker on one side:

Voila:

The wire in the lower right corner is, I suspect, the wireless communications antenna. Given its elementary nature, along with the lack of mention of Wi-Fi in the product documentation, I’m guessing it’s Bluetooth-only. To its left is the square mostly-tan GPS antenna. In the middle is the multifunction switch (power cycling and user (re)configuration). Above it are the two LEDs, for power/charging status (left) and current operating mode (right).

And on both sides of it are Faraday cages, the lids of which we’ll need to rip off (hold that thought) before we can further investigate their contents.

The PCB subsequently lifts right out of the other (back) case half:

revealing the “pouch” battery adhesive-attached to the PCB’s other side:

Peel the battery away (revealing a near-blank PCB underneath).

Peel off the tape, and the battery specs (3.7V, 150mAh, 0.55Wh…why do battery manufacturers frequently feel the need to redundantly provide both of the latter two? Can’t folks multiply anymore?) come into view:

Back to the front of the PCB, post-removal of the two Faraday cages’ tops, as foreshadowed previously:

Now fully visible is the USB-C connector, alongside a rubberized ring that had been around it when fully assembled. As for what’s inside those now-mangled Faraday cages, let’s zoom in:

The landscape-dominant IC within the left-located Faraday cage, unsurprisingly given its GPS antenna proximity, is Beken’s BK1661, a “fully integrated single-chip L1 GNSS [author note: Global Navigation Satellite System] solution” that, as the acronym implies, supports not only GPS L1 but “Beidou B1, Galileo E1, QZSS L1, and GLONASS G1,” for worldwide usage.

The one to the right, on the other hand, was a mystery (although, given its antenna proximity, I suspected it handled Bluetooth transceiver functionality, among other things) until I came across an enlightening Reddit discussion. The company logo mark on the top of the chip is a combination of the letters J and L. And the part number underneath it is:

BP0E950-21A4

Here’s an excerpt of the initial post in the Reddit discussion thread, titled “How to identify JieLi (JL/π) bluetooth chips”:

If you like to open things, particularly bluetooth audio devices, you may have seen chips from manufacturers like Qualcomm, Bestechnic (BES), Airoha, Vimicro WX, Beken, etc.; but cheaper devices have those mysterious chips marked with A3 or AB (from Bluetrum), or those with the JL or “pi” logo (from JieLi).

Bluetrum and JieLi chips have a printed code (like most IC chips), but those codes don’t match any results on Google or the manufacturer’s websites. Why does this happen? Well, it looks like the label on those chips is specific to the firmware they’re running, and there’s no way to know which chip it is exactly (unless the manufacturer of your bluetooth device displays that information somewhere on the package).

I was recently looking at the datasheet for some JieLi chips I have lying around, and noticed something interesting: on each chip the label is formatted like “abxxxxxxx-YYY”, “acxxxxx-YYYY” or similar, and the characters after the “-” look like they indicate part of the model number of the IC.

 

In conclusion, if you find a JL chip inside your device and the label does not show any results, use the last characters (the ones after the “-“) and add ac69 or ac63 at the beginning (those are the series of the chip, like AC69xx or AC63xx. There are more series that I don’t remember, so if those codes don’t work for you, try searching for others).

 

Also, if you find a chip with only one number before the letter in the character group after the “-“, add a 0 before it and then add a series code at the beginning. (For example: 5A8 -> 05A8 -> AC6905A)

By doing so you will probably find the pinout and datasheet of your bluetooth IC.

 Based on the above, what I think we have here is the AC321A4 RISC-based microcontroller with Bluetooth support from Chinese company ZhuHai JieLi Technology. To give you an idea of how much (or, perhaps more accurately, little) it costs, consider the headline of an article I came across on a similar product from the same company, “JieLi Tech AC6329C4 is Another Low Cost MCU but with Bluetooth 5.0 Support.” Check out the price tag in the associated graphic:

That said, an AC6921A also exists from the company, although it seems to be primarily intended for stereo audio Bluetooth, so…🤷‍♂️
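
For what it’s worth, the Reddit heuristic is easy to capture in a few lines of Python. This is only a guessing aid under that post’s assumptions; the series prefixes listed are just the two mentioned there, and the label below is the one printed on this module’s chip:

```python
def jieli_candidates(label, series=("AC69", "AC63")):
    """Guess JieLi part numbers from a chip label such as 'BP0E950-21A4'.

    Per the Reddit post: take the characters after the '-', zero-pad a
    three-character suffix to four, and prepend a known series code.
    """
    suffix = label.split("-")[-1]
    if len(suffix) == 3:              # e.g. '5A8' -> '05A8'
        suffix = "0" + suffix
    return [s + suffix for s in series]

print(jieli_candidates("BP0E950-21A4"))   # ['AC6921A4', 'AC6321A4']
```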

That’s what I’ve got for today, folks. Sound off in the comments with your thoughts!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Building a low-cost, precision digital oscilloscope – Part 2

Mon, 04/14/2025 - 16:01

Editor’s Note:

In this DI, high school student Tommy Liu modifies a popular low-cost DIY oscilloscope, improving its input noise rejection and reducing its ADC noise with anti-aliasing and IIR filtering.

Part 1 introduces the oscilloscope design and simulation.

This part (Part 2) shows the experimental results of this oscilloscope.

Experimental Results

Three experiments were conducted to evaluate the performance of our precision-enhanced oscilloscope using both analog and digital signal processing techniques.

First, we test the effect of the new anti-aliasing filter described in Part 1. For this purpose, a 2-kHz sinusoidal signal is amplitude modulated (AM) with a 961-kHz sinusoidal waveform by a Rigol DG1022Z signal generator (Rigol Technologies, Inc., 2016) and is used as the analog input to the oscilloscope.

In this scenario, the low-frequency (2 kHz) sinusoidal waveform is our signal, while the high-frequency tones caused by modulation with 961 kHz sinusoidal represent high frequency noises at the signal source. In the experiment, a 10% modulation depth is used to make the high frequency noise easily identifiable by sight. The time division is set at 20 µs with the ADC sampling frequency of 500 KSPS.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Results of anti-aliasing filter

The original DSO138-mini lacks effective anti-aliasing capability because the -3-dB cut-off frequency of its analog front-end is too high (around 500 kHz to 800 kHz). As a result, the high-frequency noise tones caused by modulation pass through the analog front-end, without much attenuation, and are sampled by the ADC at 500 KSPS. This creates aliasing noise tones at the ADC output, which can be clearly seen in the displayed waveform on the DSO138-mini (Figure 1).

Figure 1 The aliasing noise tones at the ADC output on the DSO138-mini.
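
A quick back-of-the-envelope check (a sketch based on the experiment’s numbers, not on captured data) shows where those AM sidebands land once they alias through the 500-KSPS sampling:

```python
def alias_frequency(f_tone_hz, fs_hz):
    """Apparent frequency of an undersampled tone, folded into the first Nyquist zone."""
    f = f_tone_hz % fs_hz
    return min(f, fs_hz - f)

FS = 500e3                        # 500 KSPS, as in the experiment
for f in (959e3, 963e3):          # AM sidebands: 961 kHz +/- 2 kHz
    print(f"{f / 1e3:.0f} kHz tone aliases to {alias_frequency(f, FS) / 1e3:.0f} kHz")
# -> 41 kHz and 37 kHz: well inside the displayed band, hence the visible noise tones
```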

Our new anti-aliasing filter provides a significantly lower -3-dB cut-off frequency of around 100 kHz and effectively filters away most of the out-of-band high-frequency noise, in this case the noise tones caused by the signal modulation with the 961-kHz sinusoid. Figure 2 is a screenshot with the new anti-aliasing filter, indicating a significant reduction in the aliasing noise.

Figure 2 Reduction of the aliasing noise with the new anti-aliasing filter.

Detailed analysis of the captured data with the new anti-aliasing filter indicates a 10-dB to 15-dB (3.2x to 5.6x) improvement over the original DSO138-mini in noise rejection at frequencies above the oscilloscope’s signal bandwidth.

In practical applications, high-frequency noise with a magnitude of a few millivolts RMS is not uncommon. A 5-mV RMS noise at near 900 kHz is attenuated to 0.73 mV (RMS) with our new anti-aliasing filter versus 2.48 mV (RMS) with the original DSO138-mini. With an ADC full-scale input range of 3.3 V, 0.73 mV RMS corresponds to an effective resolution well above 10 bits (ENOB). With the original DSO138-mini, the ENOB would be at only an 8-bit level.
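
Those effective-resolution figures can be reproduced with the usual assumption that the measured RMS noise plays the role of the quantization noise (LSB/√12) of an ideal ADC; this sketch uses the 3.3-V full scale and noise values quoted above:

```python
import math

def effective_bits(full_scale_v, noise_rms_v):
    """Effective resolution implied by an RMS noise floor (noise treated as LSB/sqrt(12))."""
    equivalent_lsb = noise_rms_v * math.sqrt(12)
    return math.log2(full_scale_v / equivalent_lsb)

FULL_SCALE = 3.3                                            # ADC input range, per the text
print(f"{effective_bits(FULL_SCALE, 0.73e-3):.1f} bits")    # ~10.3: "well above 10 bits"
print(f"{effective_bits(FULL_SCALE, 2.48e-3):.1f} bits")    # ~8.6: roughly the 8-bit level
```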

Results of digital post-processing filter

The second test evaluates the performance of the digital post-processing filter. As explained in Part 1, besides the noise at the analog input, other noise sources in oscilloscopes, such as noise from the ADC inside the MCU, degrade measurement precision. This is evident in Figure 3, which is a screenshot of the DSO138-mini with its Self-Test mode turned on. In Self-Test mode, an internally generated pulse signal—less susceptible to noise from the external signal source—is used to test and fine-tune the oscilloscope. We can see that there are still ripple noises on the pulse waveform.

Figure 3 Ripples on internally generated pulse signal during self-test mode on the DSO138-mini.

It is not easy to identify the magnitude of these ripples due to the limited pixel resolution of the DSO138-mini’s LCD display (320 x 240). We transferred the captured data to a PC via DSO138-mini’s UART-USB link for precise data analysis. Figure 4 shows the waveform of the captured self-test pulses on a PC. The ripple noises are calculated and shown in Figure 5.

Figure 4 Captured self-test pulse signal waveform on a PC for more precise data analysis.

Figure 5 Magnitude of noises on self-test pulse with no digital post-processing.

Considering the voltage division setting (1 V, -20 dB on Input) and attenuation setting (x1), the ripple on the self-test pulse has a peak-peak magnitude of 8 mV. This error is about 10 LSB and the calculated RMS value is about 3 mV, yielding an effective resolution of 8.3 bits. Digital post-processing can be used to suppress some of these noises. 

Figure 6 is the waveform after first-order infinite impulse response (IIR) digital filtering (α = 0.25) is performed on the PC, and Figure 7 shows the noises on the self-test pulse.

After IIR filtering, the noise RMS value reduces to about 0.75 mV, or by a factor of 4. This brings back the effective resolution from 8.3 bits to 10.4 bits. We notice that the rise and fall transition edges of the pulse look a bit less sharp than the signal before post-processing.

This is due to the low-pass nature of the IIR filter. With α=0.25, the passband (-3 dB) is at around 23 kHz, covering an input bandwidth up to audio frequencies (20 kHz). For tracking faster signals, such as fast transition edges of a pulse signal, we can relax α to a higher value allowing for more input bandwidth. 

Figure 6 Self-test pulse with first-order IIR digital filter where α = 0.25.

Figure 7 Noises on self-test pulse with first-order IIR filter where RMS noise reduces to ~0.75 mV.
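
For reference, the post-processing filter itself is only a few lines. This is a generic first-order IIR (exponential smoothing) sketch, not the author’s PC code, with the -3-dB point computed for the 500-KSPS sample rate used in these tests:

```python
import math

def iir_lowpass(samples, alpha):
    """First-order IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1]."""
    y, out = 0.0, []
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

def cutoff_hz(alpha, fs_hz):
    """-3-dB frequency of the filter above (unity DC gain, single pole at 1 - alpha)."""
    a = 1.0 - alpha
    cos_w = 1.0 - alpha ** 2 / (2.0 * a)   # solve |H(e^jw)|^2 = 1/2
    return fs_hz * math.acos(cos_w) / (2.0 * math.pi)

print(f"{cutoff_hz(0.25, 500e3) / 1e3:.0f} kHz")   # ~23 kHz, matching the text
```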

The effects of both filters

Finally, we test the overall effect of both the new anti-aliasing filter and the digital post processing by inputting a sinusoidal input of 2 kHz from a signal generator to our new oscilloscope. We can see from Figure 8 that even with the new anti-aliasing filter, there are still some noises on the waveform, due to the ADC noises inside the MCU. The RMS value of the noises is about 2.8 mV and the effective resolution is limited to below 9 bits.

Figure 8 Noises on a 2 kHz sinusoidal input waveform despite having the new anti-aliasing filter.

As shown in Figure 9, with the first-order IIR filter in effect, the waveform cleans up. The RMS noise reduces to 0.7 mV and, again, this brings up the effective resolution from below 9 bits to above 10 bits. Other input frequencies, up to 20 kHz (audio), have also been tested and an overall effective resolution of 10 bits or more was observed with the new anti-aliasing filter and the digital post-processing algorithm.

Figure 9 A 2 kHz sinusoidal input waveform after digital post-processing where the RMS noise reduces to 0.7 mV.

Low-cost oscilloscope

Many traditional low-cost DIY type digital oscilloscopes have two major technical drawbacks, namely inadequate anti-aliasing capability and large ADC noises. As a result, these oscilloscopes can only reach an effective resolution of 8 bits or less, even though most of them are based on an MCU, equipped with built-in 12-bit ADCs.

These problems keep DIY oscilloscopes out of more demanding, professional-level high school projects. To address them, a well-designed first-order analog low-pass filter at the analog front-end of the oscilloscope, plus a programmable first-order IIR digital post-processing filter, are implemented on a popular low-cost DIY platform (DSO138-mini).

Experimental results verified that the new oscilloscope could maintain an overall effective resolution of 10 bits or above with the presence of high frequency noises at its analog input, up to an input bandwidth of 20 kHz and real-time sampling of 1 MSPS. The implementations are inexpensive—the BOM cost of the new anti-aliasing filter is just the cost of a ceramic capacitor (far less than a dollar), and the digital post-processing program is completely implemented in the PC software.

Costing less than fifty dollars, this precision digital oscilloscope can be used in many high schools, including those without the funds for pricey commercial models, enabling students to perform a wide range of tasks: from first-time electrical signal capture and observation to the more demanding precision measurement and signal analysis needed for complex electrical and electronic projects.

Tommy Liu is currently a junior at Monta Vista High School (MVHS) with a passion for electronics. A dedicated hobbyist since middle school, Tommy has designed and built various projects ranging from FM radios to simple oscilloscopes and signal generators for school use. He aims to pursue Electrical Engineering in college and aspires to become a professional engineer, continuing his exploration in the field of electronics.


The advent of AI-empowered fab-in-a-box

Mon, 04/14/2025 - 08:20

What’s a fab-in-a-box, and why is it far more efficient in terms of cost, space, and chip-manufacturing operations? Alan Patterson speaks to the CEOs of Nanotronics and Pragmatic to dig deeper into how these $30 million fabs work while using AI to boost yields and make these mini-fabs more cost-competitive. These “cubefabs” are also worth attention because many markets, including the United States, aim to bolster local chip manufacturing.

Read the full story at EDN’s sister publication, EE Times.


Single sideband generation

Fri, 04/11/2025 - 17:19
The phasing method

In radio communications, one way to generate single sideband (SSB) signals is to feed a carrier and a modulating signal into a balanced modulator to create a double sideband (DSB) signal and then filter out one of the two resulting sidebands.

If you filter out the lower sideband, you’re left with the upper sideband and if you filter out the upper sideband, you’re left with the lower sideband. However, another way to generate SSB without that filtering has been called “the phasing method.”

Let’s look at that in the following sketch in Figure 1.

Figure 1 Phasing method of generating an SSB signal where the outputs of Fc and Fm are 90° apart with respect to each other

The outputs of the carrier (Fc) quadrature phase shifter and the modulating signal (Fm) quadrature phase shifter need only be 90° apart with respect to each other. The phase relationships to their respective inputs are irrelevant.

Four cases of SSB generation

In the following equations, those two unimportant phase shifts are called “phi” and “chi” for no particular reason other than their pronunciations happen to rhyme. Mathematically, we examine four cases of SSB generation.

Case 1, where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions (Figure 2). Case 2, where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions (Figure 3).

Figure 2 Mathematically solving for upper and lower side bands where “Fc at 90°” and “Fm at 90°” are both +90°, or in the same directions.

Figure 3 Mathematically solving for upper and lower side bands where “Fc at 90°” and “Fm at 90°” are both -90°, or in the same directions.

Case 3, where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions (Figure 4). Case 4, where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions (Figure 5).

Figure 4 Mathematically solving for upper and lower side bands where “Fc at 90°” is -90° and “Fm at 90°” is +90°, or in the opposite directions.

Figure 5 Mathematically solving for upper and lower side bands where “Fc at 90°” is +90° and “Fm at 90°” is -90°, or in the opposite directions.

The quadrature phase shifter for the carrier signal only needs to operate at one frequency, which is that of the carrier itself and which we have called “Fc”. The quadrature phase shifter for the modulating signal however has to operate over a range of frequencies. That device has to develop 90° phase shifts for all the frequency components of that modulating signal and therein lies a challenge.

90° phase shifts for all frequency components

There is a mathematical operator called the Hilbert transform which is described here. There, we find an illustration of the Hilbert transformation of a square wave. From that page, we present the sketch in Figure 6.

Figure 6 A square wave and its Hilbert transform, bringing about a 90° phase shift of each frequency component of the input signal in its own time base.

The underlying mathematics of the Hilbert transform is described in terms of a convolution integral but in another sense, you can look at the result as bringing about a 90° phase shift of each frequency component of the input signal in its own time base, in the above case, of a square wave. This phase shift property is the very thing we want for our modulating signal in SSB generation.

In the case of Figure 7, I took each frequency component of a square wave—by which I mean the fundamental frequency plus a large number of properly scaled odd harmonics—and phase shifted each of them by 90° in their respective time frames. I then added up those phase-shifted terms.

Figure 7 A square wave and the result of 90° phase shifts of each harmonic component in that square wave.

Please compare Figure 6 to the result in Figure 7. They look very much the same. The finite number of 90° phase-shift and summing steps very nicely approximates the Hilbert transform.
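
Figure 7 is easy to reproduce numerically. The sketch below (illustrative only; the harmonic count and time points are arbitrary) builds a square wave from its odd harmonics and, in parallel, sums the same harmonics each delayed by 90°; plotting the second sum gives the spiky Hilbert-like waveform:

```python
import math

def square_and_shifted_sum(t, n_harmonics=50):
    """Fourier square wave and the sum of its 90-degree-shifted odd harmonics."""
    square = shifted = 0.0
    for k in range(1, 2 * n_harmonics, 2):             # odd harmonics only
        a = 4.0 / (math.pi * k)                        # square-wave Fourier coefficient
        square += a * math.sin(k * t)
        shifted += a * math.sin(k * t - math.pi / 2)   # each component shifted 90 degrees
    return square, shifted

for i in range(8):                                     # coarse sample of one cycle
    t = i * 2.0 * math.pi / 8.0
    sq, sh = square_and_shifted_sum(t)
    print(f"t = {t:5.2f} rad   square = {sq:6.2f}   shifted sum = {sh:6.2f}")
```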

The ideal case for SSB generation can be expressed as follows: starting with a carrier signal, you create a second carrier signal at the same frequency as the first, but phase shifted by 90°. Put another way, the first carrier signal and the second carrier signal are in quadrature with respect to one another.

You then take your modulating signal and generate its Hilbert transform. You now have two modulating signals in which each frequency component of the one is in quadrature with the corresponding frequency component of the other.

Using two balanced modulators, you apply one carrier and one modulating signal to one balanced modulator and apply the other carrier and the other modulating signal to the other balanced modulator. The outputs of the two balanced modulators are then either added to each other or subtracted from each other. Based on the four mathematical examples above, you end up with either an upper sideband SSB signal or a lower sideband SSB signal.
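
The algebra behind those four cases is just the product-to-sum identities, and it is easy to sanity-check numerically. In this sketch (arbitrary example frequencies, single-tone modulation), the difference of the two modulator outputs reproduces cos(2π(Fc+Fm)t), the upper sideband alone, while the sum reproduces cos(2π(Fc−Fm)t):

```python
import math

FC, FM = 10_000.0, 1_000.0      # example carrier and modulating frequencies, Hz

def phasing_ssb(t, sign):
    """Sum (+1) or difference (-1) of the two balanced-modulator outputs."""
    i_path = math.cos(2 * math.pi * FC * t) * math.cos(2 * math.pi * FM * t)
    q_path = math.sin(2 * math.pi * FC * t) * math.sin(2 * math.pi * FM * t)
    return i_path + sign * q_path

for t in (0.0, 1e-5, 2e-5, 3e-5):
    usb = phasing_ssb(t, -1)    # cos(a)cos(b) - sin(a)sin(b) = cos(a+b)
    lsb = phasing_ssb(t, +1)    # cos(a)cos(b) + sin(a)sin(b) = cos(a-b)
    print(f"t = {t:.0e} s   USB = {usb:+.4f}  "
          f"cos(2*pi*11kHz*t) = {math.cos(2 * math.pi * (FC + FM) * t):+.4f}   "
          f"LSB = {lsb:+.4f}  cos(2*pi*9kHz*t) = {math.cos(2 * math.pi * (FC - FM) * t):+.4f}")
```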

This offers high performance and thus the costly filters described in the first paragraph above are not needed.

Practically applying a Hilbert transform

As a practical matter however, instead of actually making a true Hilbert transformer (I have no idea how or even if that could be done.), we can make a variety of different circuits which will give us the 90° phase shifts we need for our modulating signals over some range of operating frequencies with each frequency component 90° shifted in its own time frame.

One of the earliest purchasable devices for doing this over the range of speech frequencies was a resistor-capacitor network called the 2Q4 which was made by a company called Barker and Williamson. The 2Q4 came in a metal can with a vacuum-tube-like octal base. Its dimensions were very close to that of a 6J5 vacuum tube, but the can of the 2Q4 was painted grey instead of black. (Yes, I know that I’m getting old.)

Another approach to obtaining the needed 90° phase relationships of the modulating signals is by using cascaded sets of all-pass filters. That technique is described in “All-pass filter phase shifters.”

One thing to note is that the Hilbert transformation itself and our approximation of it can lead to some really spiky signals. The spikiness we see for the square wave arises for speech waveforms too. This fact has an important practical implication.

SSB transmitters tend to have high peak output powers versus their average output power levels. This is why in amateur radio, while there is an FCC-imposed operating power limit of 1000 watts, the limit for SSB transmission is 2000 watts peak power.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


EEPROMs with unique ID improve traceability

Thu, 04/10/2025 - 21:06

Serial EEPROMs from ST contain a unique 128-bit read-only ID for product recognition and tracking without requiring an extra component. Preprogrammed and permanently locked at the factory, the unique ID (UID) enables basic product identification and clone detection as an alternative to an entry-level secure element.

Initially available in 64-kbit and 128-kbit versions, the M24xxx-U series spans storage densities from 32 kbits to 2 Mbits. Each device retains its UID throughout the end-product lifecycle—from sourcing and manufacturing to deployment, maintenance, and disposal. The UID ensures seamless traceability, aiding reliability analysis and simplifying equipment repair.

These CMOS EEPROMs endure 4 million write cycles and retain data for 200 years. They operate from 1.7 V to 5.5 V and support 100-kHz, 400-kHz, and 1-MHz I2C bus speeds. The devices offer random and sequential read access, along with a write-protect feature for the entire memory array.

The 64-kbit M24C64-UFMN6TP is available now, priced from $0.13, while the 128-kbit M24128-UFMN6TP starts at $0.15 for orders of 10,000 units. Additional densities will be released during the second quarter of 2025.

STMicroelectronics


3D Hall sensor meets automotive requirements

Thu, 04/10/2025 - 21:05

Diodes’ AH4930Q sensor detects magnetic fields along the X, Y, and Z axes for contactless rotary motion and proximity sensing. As the company’s first automotive-compliant 3D linear Hall effect sensor, the AH4930Q is well-suited for rotary and push selectors in infotainment systems, stalk gear shifters, door handles and locks, and power seat adjusters.

Qualified to AEC-Q100 Grade 1, the AH4930Q operates over a temperature range of -40°C to +125°C and integrates a 12-bit temperature sensor for accurate on-chip compensation. It also features a 12-bit ADC, delivering high resolution in each measurement direction, down to 1 Gauss per bit (0.1 mT) for precise positional accuracy. An I2C interface supports data reading and runtime programming with host systems up to 1 Mbps, enabling real-time adjustments.

The sensor features three operating modes and a power-down mode with a consumption of just 9 nA. Its modes balance power and data acquisition, ranging from a low-power mode at 13 µA (10 Hz) to a fast-sampling mode at 3.8 mA (3.3 kHz) for continuous measurement. Operating with supply voltages from 2.8 V to 5.5 V, the AH4930Q offers a 10-µs wake-up time, 4-µs response time, and wide bandwidth for fast data acquisition in demanding applications.

Supplied in a 6-pin SOT26 package, the AH4930Q costs $0.50 each in lots of 1000 units.

AH4930Q product page

Diodes


Software optimizes AI infrastructure performance

Thu, 04/10/2025 - 21:05

Keysight AI (KAI) Data Center Builder emulates AI workloads without requiring large GPU clusters, enabling evaluation of how new algorithms, components, and protocols affect AI training. The software suite integrates large language model (LLM) and other AI model workloads into the design and validation of AI infrastructure components, including networks, hosts, and accelerators.

KAI Data Center Builder simulates real-world AI training network patterns to speed experimentation, reduce the learning curve, and identify performance degradation causes that real jobs may not reveal. Keysight customers can access LLM workloads like GPT and Llama, along with popular model partitioning schemas, such as Data Parallel (DP), Fully Sharded Data Parallel (FSDP), and 3D parallelism.

The KAI Data Center Builder workload emulation application allows AI operators to:

  • Experiment with parallelism parameters, including partition sizes and distribution across AI infrastructure (scheduling)
  • Assess the impact of communications within and between partitions on overall job completion time (JCT)
  • Identify low-performing collective operations and pinpoint bottlenecks
  • Analyze network utilization, tail latency, and congestion to understand their effect on JCT

For more information on the KAI Data Center Builder, or to request a demo or price quote, click the product page link below.

KAI Data Center Builder product page

Keysight Technologies 


High-power switch operates up to 26 GHz

Thu, 04/10/2025 - 21:05

Leveraging Menlo’s Ideal Switch technology, the MM5230 RF switch minimizes insertion loss and provides high power handling in a chip-scale package. The device is a SP4T switch that operates from DC to 18 GHz, which extends to 26 GHz in SPST Super-Port mode. Designed for high-power applications, it supports up to 25 W continuous and 150 W pulsed power.

The MM5230 is well-suited for defense and aerospace, medical equipment, test and measurement, and wireless infrastructure applications. With an on-state insertion loss of just 0.3 dB at 6 GHz, it minimizes signal degradation, ensuring high performance in sensitive systems, low-loss switch matrices, switched filter banks, and tunable filters. Additionally, the MM5230 provides high linearity with a typical IIP3 of 95 dBm, preserving signal integrity for smooth communication or data transfer.

The switch’s 2.5×2.5-mm chip-scale package eases integration into a wide range of systems and conserves valuable board space. Additionally, the Ideal Switch fabrication process enhances reliability and endurance. 

The MM5230 RF switch is available for purchase through Menlo Microsystems’ distributor network.

MM5230 product page 

Menlo Microsystems 


Partners build broadband optical SSD

Чтв, 04/10/2025 - 21:05

Kioxia, AIO Core, and Kyocera have prototyped a PCIe 5.0-compatible broadband SSD with an optical interface. The trio is developing broadband optical SSD technology for advanced applications requiring high-speed, large-volume data transfer, such as generative AI. They will also conduct proof-of-concept testing to support real-world adoption and integration.

Combining AIO Core’s IOCore optical transceiver and Kyocera’s OPTINITY optoelectronic integration module, Kioxia’s prototype delivers twice the bandwidth of the PCIe 4.0 optical SSD demonstrated in August 2024. Replacing electrical wiring with an optical interface increases the allowable distance between compute and storage devices in next-generation green data centers while preserving energy efficiency and signal integrity. 

The prototype was developed under Japan’s “Next Generation Green Data Center Technology Development” project (JPNP21029), part of NEDO’s Green Innovation Fund initiative. The project aims to reduce data center energy consumption by over 40% through next-generation technologies. Kioxia is developing optical SSDs, AIO Core is working on optoelectronic fusion devices, and Kyocera is creating optoelectronic packaging.

No timeline for commercialization has been announced.


A negative current source with PWM input and LM337 output

Thu, 04/10/2025 - 17:13

Figure 1’s negative constant current source has been a textbook application for the LM337 regulator forever (or thereabouts). It precisely maintains a constant output current (Iout) by forcing the OUTPUT pin to sit at -Vadj relative to the ADJ pin. Thus, Iout = Vadj/Rs.

Figure 1 Classic LM337 constant negative current source where Iout ≃ Vadj/Rs = 1.25/Rs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It has worked well for half a century despite its inflexibility. I say it’s inflexible because the way you program Iout is by changing Rs. It may be hard to believe that a part as mature (okay, old) as the 337 might have any new tricks left to learn, but Figure 2 teaches it one anyway. It’s a novel topology with better agility. It leaves the resistors constant and instead programs Iout with the (much smaller) control current (Ic).

 

Figure 2 Rc typically >100Rs, therefore Ic < Iout/100 and Iout ≃ -(1.25 – (IcRc))/Rs.

Rc > 100Rs allows control of Iout with only milliamps of Ic. Figure 3 shows the idea fleshed out into a complete PWM-controlled 18-V, 1-A grounded-load negative current source.

Figure 3 An 18 V, 1 A, PWM-programmed grounded load negative current source with a novel LM337 topology. With this topology, accuracy is insensitive to supply rail tolerance. The asterisked resistors are 1% or better and Rs = 1.25 Ω.

The PWM frequency, Fpwm, is assumed to be 10 kHz or thereabouts; if it isn’t, scale C1 and C3 appropriately with:

C1 = 0.5µF*10kHz/Fpwm and,

C3 = 2µF*10kHz/Fpwm.

The resulting 5-Vpp PWM switching by Q1 creates a variable resistance averaged by C1 to R4/Df, where Df = the 0 to 1 PWM duty factor. Thus, at Z1’s Adj point:

Ic = 0 to 1.24V/R4 = 3.1 mA,

The second-order PWM ripple filtering gives a respectable 8-bit settling time of 6 ms with Fpwm = 10 kHz.

Z1 servos the V1 gate drive of Q3 to hold the FET’s source at its precision 1.24-V reference and level-shifts the resulting Ic to track U1’s ADJ pin. Also summed with Ic is Iadj bias compensation (1.24V/20k = 62 µA) provided by R2.

This term zeros out U1’s typical Iadj and cuts its maximum 100-µA error by 60%. Meanwhile, D1 ensures that Iout is forced to zero when the 5-V rail drops, by saturating Q2 and making Ic large enough to turn U1 completely off, thus protecting the load.

About the 1N4001 daisy chain: There’s a possibility of Iout > 0 at Ic = max and a resulting reverse bias of the load; some loads might not tolerate this. The 1N4001s block that, and also provide bias for the power-down cutoff of Iout when +5-V rail shuts down.

Note that the accuracy of IcRc = Vadj is assured by the match of the Rc resistors and precision of the Z1 and U1 internal references. It’s therefore independent of the tolerance of the +5-V rail, although it should be accurate to ± 5% for best PWM ripple suppression. Iout is linear with PWM duty factor Df = 0 to 1:

Iout = -1.25 Df/Rs

If Rs = 1.25 Ω, then Iout(max) = 1 A. 

Note that U1 may have to dissipate as much as 23 W if Iout(max) = 1 A and the load voltage is low. Moral of the story: be generous with the heatsink area! Also, Rs should be rated for a wattage of 1.25²/Rs.
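
Pulling the design equations together, here is a small sketch of the programming arithmetic; the -24-V rail assumed for the dissipation estimate is illustrative, not stated in the text:

```python
def scaled_caps(f_pwm_hz):
    """Scale C1 and C3 when Fpwm differs from the nominal 10 kHz."""
    return 0.5e-6 * 10e3 / f_pwm_hz, 2.0e-6 * 10e3 / f_pwm_hz

def i_out(duty_factor, rs_ohms=1.25):
    """Programmed output current: Iout = -1.25 * Df / Rs."""
    return -1.25 * duty_factor / rs_ohms

print(scaled_caps(25e3))        # e.g., 25-kHz PWM -> 0.2 uF and 0.8 uF
print(i_out(0.5), i_out(1.0))   # -0.5 A and -1.0 A with Rs = 1.25 ohm

# Worst-case dissipation sanity check; the -24-V rail here is an assumption:
V_RAIL, V_LOAD, I_MAX = 24.0, 0.0, 1.0
p_u1 = (V_RAIL - V_LOAD - 1.25) * I_MAX     # ~23 W in U1 -> be generous with the heatsink
p_rs = 1.25 ** 2 / 1.25                     # 1.25 W in Rs
print(p_u1, p_rs)
```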

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


A high-performance current source

Wed, 04/09/2025 - 18:22

The ever innovative and prolific Mr. Woodward has offered “PWM-programmed LM317 constant current source,” an intriguing programmable constant current source which elicited a lively conversation in its comments section. A Zen paradox arose: if the addition of a capacitor between ground and the LM317 ADJ pin reduces the power supply-induced ripple current delivered to the load while also reducing the impedance seen by the load (making it a less “Ideal” current source), is it a better or worse “constant current source”? To answer the question, it must also be considered that the capacitor also slows the response to load current changes which result from alteration of the PWM duty cycle. In the end, the answer depends on the application. But I’m sure a Zen master would have a better answer to the question than “it depends.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

Even without the capacitor, the time constant and non-linear nature of the design idea’s (DI’s) PWM-driven circuit has limitations if used as a source of AC signals. Of course, the title of the DI makes it clear that supplying AC current to the load was not a performance goal. But one of the commenters was interested in delivering both AC and DC currents.

Basic LM317 current source

I wondered if the subcircuit consisting of the LM317 and resistors Rs and Rc could form the basis for such a circuit if it were driven from a suitable control current. In Figure 1, the first step to investigating this was to simulate a basic LM317 current source [1] made of U1 and Rs1 to drive load RL1. The load current is 10 mA.

Figure 1 A series of LM317 circuits investigated in simulation and on the test bench for suitability as a current source.

The circuit’s broadband PSRR was simulated, measured in ohms, and defined as the ratio of the AC voltage of the V1 supply to the AC current through RL1. From DC to almost 1 kHz, the result was a little over 100 kΩ, falling to a bit below 10 kΩ at 10 kHz. So far, so good. Next, the candidate subcircuit containing U2 was tested. Ideal infinite-impedance DC current source I1 (chosen to ensure no degradation of subcircuit performance) arranged for RL2 to also receive 10 mA DC. I expected pretty much the same PSRR here. But to my surprise, the DC-to-1-kHz impedance had fallen to a little under 2 kΩ and to a bit more than 100 Ω at 10 kHz!

Looking closer, there was no current at all flowing through the LM317 ADJ pin, not even the datasheet’s nominal 50 µA DC. As a result, nor was there any AC current flowing through either Rc2 or the ADJ to explain the PSRR drop. Clearly, the LM317 file [2] I was using for simulation was not suitable for testing PSRR. There are other files from the site listed in the footnote which I’ll investigate at a later date, but for now I decided instead to do some good old lab bench tests.

Bench Tests

The circuit I’ve bench-tested for PSRR is the one which has U3 as its central element. The result was much closer to but also better than the simulated U1: 500 kΩ from DC to 1 kHz, falling to 360 kΩ at 10 kHz and 80 kΩ at 50 kHz. But while I was on the bench, I started to look closely at some other things.

The U3 circuit works by subtracting a voltage drop, Vdrop, across Rc3 from the LM317’s Vref (the voltage difference between OUT and ADJ) and applying Vref – Vdrop across Rs3. Careful attention must be paid to the accuracy of Vdrop, which is challenge enough. But then there is Vref; what are its limits?

I decided to make some DC measurements. I have eight Texas Instruments LM317KCS IC’s (TO-220 package), all with same date code marking. Using the U4 circuit, I measured the Vo (OUT) of each of them with V4 set to 12 VDC. Vo ranged from 1.243 V to 1.263 V, a 20-mV difference. For one of them, I set V4 to 15 V for 5 minutes, and then to 25 V for the same time period.

After these time intervals elapsed, the measurements revealed a drop of 27 mV in Vo. This is more than the 5 mV at 25°C that comes from the spec’s line regulation of .04% per 1-V line voltage change. So, I rechecked my measurements but got the same result. From all these measurements, it’s impossible to determine the limits of Vref over different IC’s, load currents, DC input voltages, and junction temperatures of an arbitrary circuit. Then of course, there’s Noah’s revenge: every 40 days the long-term stability parameter could gift us with a 1% of Vref shift: 12.5 mV. Looking at all of this, I settled on the spec’s reference voltage limit of 1.25 V ± 50 mV for Vref. So what is the impact of this ambiguity?

Implications

We do want a programmable supply, so let’s stick with the U2 configuration and defer considerations for a practical programmable current source in place of I1. Regardless of the resistor values, the current that that circuit delivers to the load is:

ILoad = Vref / Rs + (Iadj – I1) * Rc / Rs

The maximum value of ILoad, Imax, occurs when I1 is zero. When the circuit is asked to deliver Imax/10, (Iadj – I1) * Rc will ideally be set to a magnitude of about 0.9 * Vref, leaving only about 125 mV of the nominal Vref across Rs. But Vref is specified as 1.25 V ± 50 mV, so ILoad is now proportional to 125 mV ± 50 mV: a ±40% variation! Things get worse if less than Imax/10 is required. I welcome suggestions as to how to deal with the limited accuracy and operational range seen here. But for now, let’s consider the Figure 2 circuit.
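
As a footnote to the accuracy problem above, the sketch below runs the Imax/10 numbers. The 12.5-Ω sense resistor is a hypothetical value chosen so that Imax = 100 mA; only the ±50-mV Vref spread comes from the datasheet limits discussed above:

```python
def i_load(v_ref, rs_ohms, control_v):
    """ILoad = (Vref - control_v) / Rs, where control_v = (I1 - Iadj) * Rc."""
    return (v_ref - control_v) / rs_ohms

RS = 12.5                     # hypothetical: Imax = 1.25 V / 12.5 ohm = 100 mA
CONTROL = 0.9 * 1.25          # programming term chosen for Imax/10 at nominal Vref

for v_ref in (1.20, 1.25, 1.30):        # Vref limits: 1.25 V +/- 50 mV
    print(f"Vref = {v_ref:.2f} V -> ILoad = {i_load(v_ref, RS, CONTROL) * 1e3:.1f} mA")
# -> 6.0, 10.0, and 14.0 mA: a +/-40% spread around the intended 10 mA
```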

Figure 2 Darlington Q1/Q2 feeds 0-Ω load, V_LOAD, with a current. R2, R5, R4 and C2 establish stability and should be checked in assembled circuits. R1 establishes the minimum Q1 DC bias.

In Figure 2, V_Supply provides power at 12 VDC. V_IN takes on DC voltages of 10, 100 and 1000 mV to produce DC currents of 10, 100 and 1000 mA through V_LOAD. Each voltage source in the schematic produces a sine wave at a frequency of 1, 10, 100, 1000 or 10000 Hz to test PSRR (Figure 3), output impedance (Figure 4), and signal transfer (Figure 5); but only one sinusoidal source is active at a time.

Sine amplitudes for V_LOAD and V_Supply when active are 1-V peak, whereas that for V_IN is 1 mV, so that when summed with the three different DC voltages that V_IN takes on, the net voltage and current will remain positive. All simulation measurements are of currents through V_LOAD. Table 1 lists the simulated versus desired DC currents flowing through that load.

Figure 3 The PSRR impedances in ohms versus frequency as seen by V_LOAD for three different DC currents. Higher impedances are associated with more nearly ideal current sources. (The dots on the curves represent simulation measurements.)

Figure 4 The Impedance in ohms seen by V_LOAD at three different DC currents. Higher impedances are associated with more nearly ideal current sources. The dots on the curves represent simulation measurements.

Figure 5 Transfer impedances in ohms versus frequency from V_IN to V_LOAD. The design goal is a value of 1.0000. The dots on the curves represent simulation measurements.

DESIRED CURRENT, mA    V_FET_OUT CURRENT, mA
10                     10.011
100                    100.009
1000                   999.987

Table 1 Desired and simulated DC currents for the circuit. Op-amp input offset voltages and other aspects of the circuit will contribute errors not accounted for here.

The AD4084-2 op-amp has a worst-case input offset voltage of 300 µV. The two of them could together contribute up to ± 600 µA in error to the load. There are also the tolerances of resistors Rc1, Rc2 and RsM to consider. The limited beta of the 2N3906 could “steal” up to 10 µA from the load; replacing it with the BC857C could significantly reduce that number. And I have conveniently avoided discussing how to generate the signals produced by the voltage source V_IN, which undoubtedly will contribute their own accuracy errors. But the goal of this DI was to investigate potential power current sources capable of handling both AC and DC currents, and I believe that what was presented here is a candidate that is worth consideration for that.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.


References

  1. https://www.ti.com/product/LM317#tech-docs See Figure 18 of the LM317 datasheet accessible from this site.
  2. https://groups.io/g/LTspice/filessearch?p=created%2C%2C%2C20%2C2%2C0%2C0&q=lm317, LM317A_002.zip

 


FPGA learning made easy with a virtual university program

Wed, 04/09/2025 - 17:24

Altera University aims to affordably and easily introduce students to the world of FPGAs and digital logic programming tools by unveiling the curriculum, tutorials, and lab exercises that bridge the gap between academia and real-world design work. The program comprises four courses focused on digital logic, digital systems, computer organization, and embedded systems.

This university program will offer professors, researchers, and students access to a broad collection of pre-developed curricula, software tools, and programmable hardware to help accelerate the FPGA learning experience. Moreover, professors can restructure the lab work with pre-developed curricula and hands-on lab kits.

The program aims to accelerate the FPGA learning experience by making available a broad collection of pre-developed curricula, software tools, and programmable hardware. Source: Altera

“We established Altera University with the singular goal of training the next generation of FPGA developers with both the AI and logic development skills needed to thrive in today’s modern workforce,” said Deepali Trehan, head of product management and marketing at Altera. “Through Altera University, we’re enabling professors to bring real-world experiences to their students using cutting-edge programmable solutions.”

Altera is also offering discounts on select FPGAs for developing custom hardware solutions, including a 20% discount on select Agilex 7 FPGA-based development kits. The company also offers a 50% discount on LabsLand, a remote laboratory with access to Altera FPGAs.

Altera University also offers higher-level FPGA courses that include an AI curriculum to ensure that students can stay aligned with the latest industry trends and develop an understanding of usage models for FPGAs in the AI workflow.

Altera University’s academic program website provides more information on curricula, software tools, and programmable hardware.

Related Content

The post FPGA learning made easy with a virtual university program appeared first on EDN.

The transformative force of ultra-wideband (UWB) radar

Wed, 04/09/2025 - 08:40

UWB radar is an augmentation of current ultra-wideband (UWB) ranging techniques. To understand the technical side and potential applications of UWB radar, let’s start at the beginning with the platform it builds on. UWB is a communication protocol that uses radio waves over a wide frequency bandwidth, using multiple channels anywhere within the 3.1 to 10.6 GHz spectrum. The most common frequency ranges for UWB are generally between 6 and 8 GHz.

While we’ve only recently seen its use in automotive and other industries, UWB has been around for a very long time, originally used back in the 1880s when the first radio-signal devices relied on spark-gap transmitters to generate radio waves.

Due to certain restrictions, UWB was mainly used for government and military applications in the intervening years. In 2002, however, the modulation technique was opened for public use at certain frequencies in the GHz range and has since proliferated into various applications across multiple industries.

The wide bandwidth delivers a host of benefits in the automotive world, not least that UWB is less susceptible to interference than narrowband technologies. What makes UWB truly transformative is its ability to measure distances precisely and accurately to perform real-time localization. When two devices directly connect and communicate using UWB, we can measure how long it takes for the radio wave pulses to travel between them, which is commonly referred to as Time-of-Flight (ToF).

Figure 1 For automotive applications, UWB radar provides greater precision for real-time localization with a single device. Source: NXP

This enables UWB to achieve hyper-accurate distance measurements in real-time. This accuracy, along with security features incorporated within the IEEE 802.15.4z standard, makes UWB particularly useful where security is paramount—such as keyless entry solutions.

Digging into the details

Where typical UWB applications require two sensors to communicate and operate, UWB radar only requires a single device. It uses an impulse radio technique similar to UWB’s ranging concept, where a sequence of short UWB pulses is sent, but in place of a second device actively returning the signal, a UWB radar sensor measures the time it takes for the initial series of pulses to be reflected by objects. The radar technology benefits from the underlying accuracy of UWB and provides extremely accurate readings, with the ability to detect movements measured in millimeters.

For a single UWB radar sensor to receive and interpret the reflected signal, it first must be picked up by the UWB antenna and then amplified by a low-noise amplifier (LNA). To process the frequencies, the signal is fed into an I/Q mixer driven by a local oscillator. The resulting baseband signal is digitized by an analog-to-digital converter (ADC), which in turn feeds a symbol accumulator, and the results are correlated with the known preamble sequence.

This generates a so-called channel impulse response (CIR), which represents the channel’s behavior as a function of time. This can be used to predict how the signal will distort as it travels. The sequence of CIR measurements over time forms the raw data of a UWB radar device.

Additionally, the Doppler effect can be exploited: the shift in the reflected wave’s frequency as the object moves is used to calculate velocity and generate a range-Doppler plot.

Figure 2 Doppler effect turns UWB technology into a highly effective radar tool. Source: NXP

This process makes it possible to use UWB as a highly effective radar device which can detect not only that an object is present, but how it’s moving in relation to the sensor itself, opening a new world of applications over other wireless standards.
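
To make the processing chain described above concrete, here is a minimal, self-contained Python/NumPy sketch that uses simulated data. The sample rate, pulse repetition frequency, carrier frequency, and target parameters are illustrative assumptions rather than values from any particular UWB device; each row of the CIR matrix stands in for the correlation of one received pulse against the known preamble, range follows from the round-trip time-of-flight (d = c*t/2), and an FFT across slow time yields the Doppler (velocity) axis of a range-Doppler map.

import numpy as np

C = 3e8       # speed of light, m/s
FS = 2e9      # CIR fast-time sample rate (assumed), Hz
PRF = 1e3     # pulse repetition frequency (assumed), Hz
FC = 7e9      # UWB carrier frequency (assumed), Hz

# Simulated CIR matrix: one row per transmitted pulse (slow time), one column
# per delay bin (fast time). A real device obtains each row by correlating the
# received pulse with the known preamble sequence.
n_pulses, n_bins = 128, 256
target_bin, target_velocity = 40, 0.8            # delay bin and radial speed (m/s)
doppler_hz = 2 * target_velocity * FC / C        # Doppler shift of the reflection
slow_time = np.arange(n_pulses) / PRF
cir = 0.01 * (np.random.randn(n_pulses, n_bins) + 1j * np.random.randn(n_pulses, n_bins))
cir[:, target_bin] += np.exp(1j * 2 * np.pi * doppler_hz * slow_time)

# Range from round-trip delay (d = c*t/2); velocity from an FFT over slow time.
ranges = C * np.arange(n_bins) / FS / 2.0
rd_map = np.fft.fftshift(np.fft.fft(cir, axis=0), axes=0)
doppler_axis = np.fft.fftshift(np.fft.fftfreq(n_pulses, d=1.0 / PRF))
p, b = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print(f"strongest return: ~{ranges[b]:.2f} m away, moving at ~{doppler_axis[p] * C / (2 * FC):.2f} m/s")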

How the automotive industry is unlocking new applications

UWB radar has huge potential, with specific attributes delivering plenty of benefits. It operates at comparatively low frequencies, typically in the 6 to 8 GHz range, and these comparatively long wavelengths make it highly effective at passing through solid materials such as clothing, plastics, and even car seats.

What’s more, the combination of pinpoint accuracy, coupled with UWB radar’s ability to detect velocity, low latency, and clear signal is very powerful. This delivers a whole range of potential applications around presence and gesture detection, intrusion alert, and integration with wider systems for reactive automation.

The automotive sector is one industry that stands to gain a lot from UWB ranging and radar. OEMs have previously struggled with weaker security standards when it comes to applications such as keyless entry, with consumers facing vehicle thefts and rising insurance premiums as a result.

Today’s key fob technologies are often the subject of relay station attacks, where the car access signals are intercepted and replicated to emulate a valid access permission signal. With UWB sensors, their ability to protect the integrity of distance estimation prevents the imitation of signals.

UWB is already found in many smartphones, providing another possibility that OEMs can use to increase connectivity, turning phones into secure state-of-the-art key fobs. This enables a driver to open and even start a car while leaving their phone in their pocket or bag, and the same secure functionality can be applied to UWB-enabled key fobs.

UWB radar goes one step further with applications such as gesture control, helping drivers to open the trunk or bonnet of a car without using their hands. Of course, such features are already available using kick sensors at the front or rear of the vehicle, but this requires additional hardware, which means additional costs.

UWB anchor points can either be used in Ranging Mode for features such as smart access and keyless entry, or in Radar Mode for features like kick sensing, helping to increase functionality without adding costs or weight.

UWB radar’s greater fidelity and ability to detect signs of life is arguably where the most pressing use case lies, however. Instances of infants and children accidentally left in vehicles and suffering heatstroke, and even death, from heat exposure have led the European New Car Assessment Program (Euro NCAP) to introduce rating points for child presence detection systems, instructing that they become mandatory features from 2025 onward.

Figure 3 UWB radar facilitates child presence detection without additional hardware. Source: NXP

A UWB radar system can accurately scan the car’s interior using the same UWB anchor points as the vehicle’s digital key without needing additional sensors. This helps OEMs to implement child presence detection systems without having to invest in, or package, additional hardware. By detecting the chest movements of the child, a UWB radar system can alert the driver; its penetration capabilities help the pulses pass easily through obstructions such as blankets, clothing, and even car seats.

The art of mastering UWB radar

UWB radar has proven its effectiveness in detecting the presence of objects of interest with an emphasis on signs of life. The focus of UWB in the automotive sector is currently on short-range applications typically measured within meters, which makes it ideal for use within the cabin or trunk of a vehicle.

There are some interesting challenges when it comes to interpreting data with UWB radar. With automotive applications, the software and algorithms need to detect the required information from the provided signals, such as differentiating between a child and an adult, or even an animal.

Using UWB radar as a child presence detection solution is also more energy-hungry than other UWB applications because the radio for radar is on for longer periods. It’s still more energy-efficient than other technologies, however, and this doesn’t necessarily pose a problem in the automotive sphere.

Research is currently being done to optimize the on-time of the UWB chip, along with enabling different power modes at the IC level that allow the development of smarter and more effective core applications, particularly regarding how they use the energy budget. These updates can be carried out remotely over-the-air (OTA).

Interference is another area that needs to be considered when using UWB radar. If multiple applications in the vehicle are designed to use UWB, it’s important that they are coordinated to avoid interference. The goal is that all UWB applications can happily coexist without interference.

UWB radar outside automotive

Through child presence detection, UWB radar will save lives in the automotive sector, but its potential reaches far and wide, not least because of its ability to calculate velocity and accurately detect very small movements. Such abilities make UWB radar perfectly suited to the healthcare industry.

There is already literature available on how UWB radar can potentially be used in social and healthcare situations. It can recognize presence, movement, postures, and vital signs, including respiration rates and heartbeat detection.

These same attributes also make UWB radar an appealing proposition when it comes to search and rescue. The ability to detect the faintest of life signs through different materials can make a huge difference following earthquakes, where time is of utmost importance when it comes to locating victims buried under rubble.

UWB radar’s precise movement detection also enables highly effective gesture recognition capabilities, offering a whole host of potential applications outside of the automotive sector. When combined with computer vision and AI technologies, for example, UWB radar could provide improved accessibility and user experiences, along with more consumer-led applications in gaming devices.

One of the most readily accessible applications for UWB radar is the augmentation of smart home and Internet of Things (IoT) deployments. Once again, presence detection capabilities can provide a cost-effective alternative to vision or thermal cameras while affording the same levels of reliability.

Figure 4 UWB radar can be employed in smart home and IoT environments. Source: NXP

When combined with power management systems such as heating, lighting and displays, buildings can achieve far greater levels of power efficiency. UWB radar also has exciting potential when it comes to making smart homes even smarter. For example, with the ability to recognize where people are located within rooms, it can control spatial audio, delivering a more immersive audio experience as a result.

Such spatial awareness could also lead to additional applications within social care, offering the ability to monitor the movement of elderly people with cognitive impairments. This could potentially negate the need for wearables for monitoring purposes, which can easily be forgotten or lost.

Looking to the future

The sheer breadth of possibilities that UWB radar enables is what makes the technology such a compelling proposition. Being able to detect precise micro movements while penetrating solid materials opens the door to near endless applications.

UWB radar could provide more effective and accurate information for seatbelt reminder systems, for example, with the ability to detect where passengers are sitting. Combined with information about whether the seatbelt is plugged in or not, this can help to avoid setting off alarms by accident, such as when a bag is placed on a seat. The seat belt reminder is a natural extension to child presence detection, but where the position of the occupant also needs to be determined.

UWB radar could also be used for more accurate security and movement detection, not only outside the vehicle, but inside as well. It’s especially effective as an intrusion alert, detecting when somebody has smashed a window or entered the vehicle.

This extra accuracy can help to avoid falsely setting off alarms during bad weather, only alerting the owner to possible thefts when signs of life are detected alongside movement. It even opens the door to greater gesture recognition within the vehicle itself, enabling drivers or passengers to carry out additional functions without having to touch physical buttons.

The ability to integrate these features without requiring additional sensors, while using existing hardware, will make a huge difference for OEMs and eventually the end consumer. Through a combination of UWB ranging and UWB radar, there’s potential to embrace multiple uses for every sensor, from integrating smarter digital keys and child presence detection to kick sensing, seatbelt reminders, and intrusion alert. This will save costs, weight, and reduce packaging challenges.

Such integration can also impact the implementation of features. Manufacturers will be able to utilize OTA updates to deliver additional functionality, or increased efficiency, without any additional sensors or changes to hardware. In the spirit of software-defined vehicles (SDV), this also means that OEMs don’t need to decide during production which feature or technology needs to be implemented, with UWB radar helping to deliver maximum flexibility and reduced complexity.

We’re at the beginning of an exciting journey when it comes to UWB radar, with the first vehicles set to hit the road in 2025, and a whole lot more to come from the technology in the future. With the ability to dramatically cut down on sensors and hardware, it’s one of the most exciting and transformative wireless technologies we’ve seen yet, and as industry standards, integrations, and guides are put in place, adoption will rise and applications proliferate, helping UWB radar to meet its incredible potential.

Bernhard Großwindhager, Marc Manninger and Christoph Zorn are responsible for product marketing and business development at NXP Semiconductors.

Related Content

The post The transformative force of ultra-wideband (UWB) radar appeared first on EDN.

Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products

Tue, 04/08/2025 - 16:47

Electronics are everywhere. As daily life becomes more digital and more devices become software defined and interconnected, the prevalence of electronics will inevitably rise. Semiconductors are what makes this all possible. So, it is no surprise that the entire semiconductor industry is on a path to being a $1 trillion market by 2030.

While accelerating demand will help semiconductors reach impressive gains, many chip makers may be held back by the costs of semiconductor design and manufacturing. Already, building a cutting-edge fab costs about $19 billion and the design of each chip is around a $500 million investment on average. With AI integration on the rise in consumer devices also fueling growth, companies will need to push the boundaries of their electronic design and manufacturing processes to cost effectively supply chips at optimal performance and environmental efficiency.

Ensuring the semiconductor industry continues its aggressive growth will require organizations to approach both fab commissioning and operation as well as chip design with a more unique, collaborative strategy. The three pillars of this strategy are:

  1. Collaborative semiconductor business platform
  2. Software-defined semiconductor enabled for software-defined products
  3. The comprehensive digital twin
First pillar: Collaborative semiconductor business platform

Creating next-generation semiconductors is expensive yet necessary as more products begin to rely heavily on software. Ensuring maximum efficiency within a business will be imperative. Consequently, many chip makers are striving to create metrics-driven environments for semiconductor lifecycle optimization. Typically, companies use antiquated methods to track roles and responsibilities, causing them to rely on information that can be weeks old. As a result, problem solving can become inefficient, negatively impacting the product lifecycle.

Chip makers must upgrade to a truly metrics-driven business platform that enables real-time analysis and facilitates the management of the entire process, from new product introduction through design and verification to final product delivery. By using semiconductor lifecycle management as the foundation and accessing the wealth of data generated during design and manufacturing, companies can take control of their new product introduction processes and have integrated traceability throughout the product lifecycle.

Figure 1 Semiconductor lifecycle optimization is driven by real-time metrics analysis, enabling seamless collaboration from design to final product delivery. Source: Siemens

With this collaborative business platform in place, businesses can know the status of their teams at any point during a project. For example, the design team can take advantage of real-time data to have an accurate status of the project at any time, without relying on manually generated status reports with weeks-old data. Meanwhile, manufacturing can focus on both the front and back ends of IC manufacturing planning with predictability based on actual data. Once all of this is in place, companies can feasibly build AI metric analysis and a business intelligence platform on top of it.

Second pillar: Software-defined semiconductor for the software-defined product (SDP)

Software is increasingly being used to define the customer experience with a product (Figure 2). Because of this, SDPs will become increasingly central to the evolution of the semiconductor industry. And as AI and ML workloads continue to drive requirements, the traditional boundaries between hardware and software will blur.

Figure 2 Software-defined products are driving the evolution of semiconductors, as AI and ML blur the lines between hardware and software for enhanced innovation and efficiency. Source: Vertigo3d

The convergence of software and hardware will force the semiconductor industry to rethink everything from design methodologies to verification processes. Success in this new landscape will require semiconductor companies to position themselves as enablers of software innovation through holistic co-optimization approaches. No longer will hardware and software teams work in siloed environments; they will become a holistic engineering team that works together to optimize products.

Improved product optimization from integrated teams works in tandem with the industry’s trend toward purpose-built compute platforms to handle the software workload. Consumers are already seeking out customizable chips and they will continue to do so in even greater numbers as general-purpose processors lag expectations. Simultaneously, companies are already creating specialized parts for their products. Apple has several different processors for its host of products; this will become even more important as software becomes more crucial to the functionality of a product.

Supporting software-defined products impacts not only the semiconductors that run the software but everything from semiconductor design through ECAD, E/E, and MCAD design. Chip makers need to create environments where they can handle these types of products, get the requirements right, and then drive those requirements into every design domain so the product is developed correctly moving forward.

Third pillar: The comprehensive digital twin

Part of creating improved environments to better fabricate next generation semiconductors is making sure that the process remains affordable. To combat production costs that are likely to rise, semiconductor companies should lean into digitalization and leverage the comprehensive digital twin for both the semiconductor design and fabrication.

The comprehensive and physics-based Digital Twin (cDT) addresses the challenge of how to weave together the disparate engineering and process groups needed to design and create tomorrow’s SW-defined semiconductor. To enable all these players to interact early and often, the cDT incorporates mechanical, electronic, electrical, semiconductor, software, and manufacturing to fully capture today’s smart products and processes. 

Specifically, the cDT merges the real and digital worlds by creating a set of consistent digital models representing different facets of the design that can be used throughout the entire product and production lifecycle and the supply chain (Figure 3). Now it is possible to do more virtually before committing to expensive prototypes or physically commissioning a fab. The result is higher-quality products that meet aggressive cost, timeline, and sustainability goals.

Figure 3 The comprehensive digital twin merges real and digital worlds, enabling faster product introductions, higher yields, and improved sustainability by simulating and optimizing semiconductor design and production processes. Source: Siemens

In design, this “shift-left” provides a physics-based virtual environment for all the engineering teams to interact and create, simulate, and improve product designs. Design and manufacturing iterations in the virtual world happen quickly and consume few resources outside of the engineer’s brain power, enabling them to explore a broader design space. Then in production, it empowers companies to virtually evaluate and optimize production lines, commission machines, and examine entire factories or networks of factories to improve production speed, efficiency, and sustainability. It can analyze and act on real data from the fab and then use that wealth of data for AI metrics analysis.

Businesses can also leverage the cDT to virtualize the entire product process design for the SW-defined product. This digital twin enables manufacturers to simulate and optimize everything from initial design concepts to manufacturing processes and final product integration, which dramatically reduces development cycles and improves outcomes. Companies can verify and test changes earlier in the design process while keeping teams across disciplines in sync and on track, leading to enhanced design exploration and optimization. And since sustainability starts at design, the digital twin can help chip makers meet sustainability metrics by enabling them to choose components that have lower carbon footprints, more thermal tolerance, and reduced power requirements.

The comprehensive digital twin for the semiconductor ecosystem helps businesses manage the complexities of the SDP as well as mechanical and production requirements while bolstering efficiency. Benefits of the digital twin include:

  • Faster new product introductions: Virtualizing the entire semiconductor ecosystem allows faster time to yield. Along with the quest to pursue “More than Moore,” creating a virtual environment for heterogenous packaging allows for early verification and optimization of advanced packaging techniques.
  • Faster path to higher yields: Simulating the production process makes enhancing IC quality easier, enabling workers to enact changes dynamically on the shop floor to quickly achieve higher yields for greater profitability
  • Traceability and zero defects: It is now possible to update the digital twin of both the product and production in tandem with their real-world counterparts, enabling manufacturers to diagnose issues and detect anomalies before they happen in the pursuit of zero defects
  • Dynamic planning and scheduling: Since the digital twin provides an adaptive comparison between the physical and digital counterparts, it can detect disturbances within systems and trigger rescheduling in a timely manner
Connectivity is the future

Creating next-generation semiconductors is expensive. Yet, chip manufacturers must continue to develop and fabricate new designs that require ever-more advanced fabrication technology to efficiently create semiconductors for tomorrow’s software-defined products. To handle the changing landscape, businesses within the semiconductor industry will need to rely on the comprehensive digital twin and adopt a collaborative semiconductor business platform that enables them to partner both inside and outside of the industry.

The emergence of collaborative alliances within the semiconductor industry as well as across related industries will break down traditional organizational boundaries, enabling unprecedented levels of cooperation across and beyond the semiconductor industry. The result will be extraordinary innovation that leverages collective expertise and capabilities. Already, well-established semiconductor companies have begun partnering to move forward in this rapidly evolving ecosystem. When Tata Group wanted to build fabs in India, Analog Devices, Tata Electronics, and Tata Motors signed an agreement that would allow Tata to use Analog Devices’ chips in its applications like electric vehicles and network infrastructure. At the same time, Analog Devices will be able to take advantage of Tata’s plants to fab its next-generation chips.

And this is just one example of the many innovative collaborations starting to emerge. The marketplace is now moving toward cooperation and partnerships that have never existed before across different industries to develop the technology and capabilities needed to move forward. To ease this transition, the semiconductor industry needs a cross-industry collaboration environment that will facilitate these strategic partnerships.

Michael Munsey is the Vice President of Electronics & Semiconductors for Siemens Digital Industries Software. In this role, Munsey is responsible for setting the strategic direction for the company with a focus on helping customers drive unprecedented growth and innovation in the semiconductor and electronics industries through digital transformation.

Munsey began his career as a designer at IBM more than 35 years ago and has the distinction of contributing to products that are currently in use on two planets: Earth and Mars, the latter courtesy of his work on the Mars Rover.  

Before joining Siemens in 2021, Munsey spent his career working in positions of increasing responsibility across the semiconductor and electronics industries where he did everything from leading cross-functional teams to driving product creation and executing business development in new regions to setting the vision for corporate strategy. Munsey holds a BSEE in Electrical and Electronics Engineering from Tufts University. 

Related Content

The post Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products appeared first on EDN.

Optimize power and wakeup latency in swift response vision systems – Part 2

Tue, 04/08/2025 - 12:42

Part 1 of this article series provided a detailed overview of a trigger-based vision system for embedded applications. It also delved into latency measurements of this swift response vision system while explaining latency-related design strategy and measurement methods. Now, Part 2 provides a detailed treatment of optimizing power consumption and wakeup latency of this embedded vision system.

In Linux, power management is a key feature that allows the system to enter various sleep states to conserve energy when the system is idle or in a low-power state. These sleep states are typically categorized into “suspend” (low-power modes) and “hibernate” (suspend to disk) modes that are part of the Advanced Configuration and Power Interface (ACPI) specification. Below are the main Linux sleep states.

Figure 1 An overview of Linux sleep states. Source: eInfochips

  • Wakeup (Idle): System fully active; CPU and components fully powered, used when the device is actively in use; high power consumption, no resume time needed.
  • Deep sleep (Suspend-to-RAM): CPU and motherboard components mostly disabled, RAM refreshed, used for deeper low-power states to save energy; low power consumption varying by C-state, fast resume time (milliseconds).
  • System sleep (Suspend-to-Idle): CPU frozen, RAM in self-refresh mode, shallow sleep state for low-latency, responsive applications (for example, network requests); low power consumption, higher than hibernate, fast resume time (milliseconds).
  • Hibernate (Suspend-to-Disk): Memory saved to disk, system powered off, used for deep power savings over long periods (for instance, laptops); almost zero power consumption, slow resume time (requires reading from disk).

Suspend-to-RAM (STR) offers a good balance, as it powers down most of the system but keeps RAM active (in self-refresh mode) for a quick resume, making it suitable for devices needing quick wakeups and energy savings. Hibernate, on the other hand, saves more power by writing the system’s state to disk and powering down completely, but results in slower wakeup times.

Qualcomm’s chips, especially those found in Linux embedded devices, support two power-saving modes to help optimize battery life and improve efficiency. These power-saving modes are typically controlled through the system’s firmware, the operating system, and specific hardware components. Here are the main power-saving modes supported by Qualcomm-based chipsets:

  • Suspend to RAM (STR)
  • Suspend to Idle (S2Idle)

Suspend mode is triggered by writing “mem” or “freeze” to /sys/power/state.
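
As a minimal sketch of this interface (Python, run as root; the paths are the standard Linux sysfs entries), the snippet below first reads the sleep states the kernel advertises and then writes “mem” to /sys/power/state. The write does not return until the platform resumes, and whether “mem” maps to S2Idle or deep Suspend-to-RAM is governed by /sys/power/mem_sleep.

# Minimal suspend-entry sketch (requires root). The write to /sys/power/state
# does not return until the system wakes back up.

def read_sysfs(path):
    with open(path) as f:
        return f.read().strip()

print("supported states:", read_sysfs("/sys/power/state"))      # e.g. "freeze mem"
print("mem maps to:", read_sysfs("/sys/power/mem_sleep"))        # e.g. "s2idle [deep]"

with open("/sys/power/state", "w") as f:
    f.write("mem")   # enter suspend; execution continues here after wakeup

print("system resumed")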

Figure 2 The source flow when the device enters sleep and wakes up. Source: eInfochips

As the device goes into suspend modes, it performs the following tasks:

  • Check whether the requested suspend type is valid
  • Notify user-space applications that the device is going into the sleep state
  • Freeze the console logs
  • Freeze kernel threads, suspend buses, and disable interrupts that cannot wake the system
  • Disable the non-boot CPUs (CPU 1-7) and put RAM into self-refresh mode
  • Keep the device in the sleep state until a wakeup signal is received

Once the device receives the wakeup interrupt or trigger, it resumes in the reverse order of the suspend sequence.

While the system is suspended, the current consumption of the Aikri QRB4210 system on module (SoM) comes to around ~7 mA at a 3.7-V supply voltage. Below is the waveform of the current drawn by the system on module.

Figure 3 Current consumption while the Aikri QRB4210 is in suspend mode. Source: eInfochips

Camera sensor power modes

Camera sensors are designed to support multiple power modes such as:

  • Streaming mode
  • Suspend mode
  • Standby mode

Each mode has distinct power consumption and latency. Latency varies by power-saving level and sensor state. Based on use case, ensure the camera uses the most efficient mode for its function, especially while the system is in power saving mode like deep sleep or standby. This ensures balanced performance and power efficiency while maintaining quick reactivation.

In GStreamer, the pipeline manages data flow through various processing stages. These stages align with the GStreamer state machine, marking points in the pipeline’s lifecycle. The four main states are NULL, READY, PAUSED and PLAYING, each indicating the pipeline’s status and controlling data and event flow. Here’s a breakdown of each of the stages (or states) in a GStreamer pipeline:

Figure 4 The above image outlines GStreamer’s pipeline stages. Source: eInfochips

  1. Null
  • This is the initial state of the pipeline, and it represents an inactive or uninitialized state. The pipeline is not doing any work in this state. All elements in the pipeline are in their NULL state as well.
  • In this state, the master clock (MCLK) from the processor to the camera sensor is not active; the camera sensor is in reset state and the current consumption by the camera is almost zero.
  2. Ready
  • In this state, the pipeline is ready to be configured but has not yet started processing any media. It’s like a preparation phase before actual playback or processing starts.
  • GStreamer performs sanity checks and verifies plugin compatibility for the given pipeline.
  • Resources can be allocated (for example, memory buffers and device initialization).
  • GStreamer entering this state does not affect MCLK or the reset signal. If the pipeline enters the READY state from the NULL state, the MCLK remains inactive; if it enters the READY state from the PLAYING state, the MCLK remains active.
  • The current consumption in the READY state depends on the previous state; this behavior can be further optimized.
  3. Paused
  • This state indicates that the pipeline is set up and ready to process media but is not actively playing yet. It’s often used when preparing for playback or streaming while maintaining control over when processing starts.
  • All elements in the pipeline are initialized and ready to start processing media.
  • Like the READY state, the current consumption in the PAUSED state depends on the previous state, so some optimization in the camera stack can help reduce the power consumption during this state.
  4. Playing
  • The PLAYING state represents the pipeline’s fully active state, where data is being processed and media is either being rendered to the screen, played back through speakers, or streamed to a remote system.
  • MCLK is active and the camera sensor is out of reset. The current consumption is highest in this state as all camera sensor data is being captured and passed through the pipeline.
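
To show how these state transitions are driven in practice, below is a minimal sketch using GStreamer’s Python bindings; the pipeline string is a generic placeholder rather than the exact camera pipeline used on the QRB4210 platform.

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Placeholder pipeline; on the target this would be the camera capture pipeline.
pipeline = Gst.parse_launch("videotestsrc ! fakesink")

# Walk the pipeline up through the GStreamer state machine.
for state in (Gst.State.READY, Gst.State.PAUSED, Gst.State.PLAYING):
    pipeline.set_state(state)
    pipeline.get_state(Gst.CLOCK_TIME_NONE)   # block until the transition completes
    print("pipeline now in", state.value_nick)
    time.sleep(1)

# Dropping back to NULL releases the sensor (MCLK stopped, sensor back in reset),
# which is the preferred parked state before the system is suspended.
pipeline.set_state(Gst.State.NULL)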

To minimize the camera sensor’s wakeup latency while maintaining the lowest sleep current, the GStreamer pipeline should be put in the NULL state when the system is suspended. To understand the power consumption due to MCLK and RESET signal assertion, below is a comparison of current consumption between the NULL and READY states of the GStreamer pipeline while the QRB4210 is in the suspended state.

Figure 5 Current consumption (~7 mA) while the GStreamer pipeline is in the NULL state and the QRB4210 is in suspend mode. Source: eInfochips

Figure 6 Current consumption (~30 mA) while the GStreamer pipeline is in the READY state and the QRB4210 is in suspend mode. Source: eInfochips

While the camera is in the NULL state, the QRB4210 system on module draws ~7 mA, which is equivalent to the current drawn by the system on module in the suspended state with no camera connected. When the camera is in the READY state, the QRB4210 system on module draws around ~30 mA. The oscilloscope snapshots above show the waveforms of the consumed current. All currents are measured at a 3.7-V supply voltage for the QRB4210 system on module.

Latency measurement results

Latency was measured between two trigger events: the first occurs when the device wakes up and receives the interrupt at the application processor, and the second occurs when the first frame becomes available in the DDR after image signal processor (ISP) runs.

As mentioned earlier in Part 1, the scenario is simulated using a bash script that puts the device into suspend mode and wakes the QRB4210 platform from sleep using the RTC wake alarm.

We collected the camera wakeup latency by changing the camera state from PLAYING to READY and from PLAYING to NULL. In each scenario, three use cases are exercised: recording the camera stream to eMMC, recording the camera stream to an SD card, and previewing the camera stream on the display. The resulting latency is as follows:

  • Camera state in READY

Table 1 Latency measurements are shown in READY state. Source: eInfochips

  • Camera state in NULL

Table 2 Latency measurements are shown in NULL state. Source: eInfochips

The minimum, maximum, and average values presented in the above tables have been derived by running each scenario for 100 iterations.

Apart from measuring the latency numbers programmatically, below are the results measured using the GPIO toggle operation between two reference events while switching the camera state from READY to PLAYING.

Table 3 Latency measurements are conducted using GPIO. Source: eInfochips

Now refer to the following oscilloscope images for different scenarios used in the GPIO toggle measurement method.

Figure 7 GPIO toggle measurements are conducted while recording into eMMC at 410.641 ms. Source: eInfochips

Figure 8 GPIO toggle measurements are conducted while recording into SD card at 382.037 ms. Source: eInfochips

Figure 9 GPIO toggle measurements are conducted during preview on display at 359.153 ms. Source: eInfochips

Trade-off between current consumption and wakeup latency

Based on the simulated results, we see that current consumption and wakeup latency trade off against each other.

The consolidated readings show that a camera pipeline in the READY state consumes more current while it takes less time to wake up. On the other hand, if the camera pipeline is in the NULL state, it consumes less current but takes more time to wake up. Refer to the table below for average data readings.

Table 4 The above data shows trade-off between current consumption and wakeup latency. Source: eInfochips

All latency data is measured between the reception of the wakeup IRQ at the application processor and the availability of the frame in DDR after the wakeup. It does not include the time taken by a motion detection sensor to sense and generate an interrupt for the application processor. Generally, the time taken by a motion detection sensor is negligible compared to the numbers mentioned above.

Future scope

To reduce the current consumption of a device in the sleep state, you can follow the steps below:

  • Disable redundant peripherals and I/O ports.
  • Prevent avoidable wakeups by ensuring that peripherals don’t resume from sleep unnecessarily.
  • Disable or mask unwanted wakeup triggers or subsystem that can wake the device from a sleep state.
  • Use camera standby (register retaining) mode so that MCLK can be stopped, or its frequency can be reduced.
  • Enable LCD display only when preview use case is running.

To optimize wakeup latency, follow the guidelines below:

  • Make use of the camera standby mode to further optimize latency to generate the first frame.
  • Reduce camera sensor frame size to optimize frame scan time and ISP processing time.
  • Disable redundant system services.
  • Trigger camera captures from a lower-level interface rather than using GStreamer.

Trigger-based cameras offer an efficient solution for capturing targeted events, reducing unnecessary operation, and managing resources effectively. They are a powerful tool in applications where specific, event-driven image or video capture is needed.

By conducting experiments on the Aikri QRB4210 platform and making minimal optimizations to the Linux operating system, it’s possible to replicate or create a robust trigger-based camera system, achieving ~400-500 ms latency with minimal current consumption.

Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.

Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.

Related content

The post Optimize power and wakeup latency in swift response vision systems – Part 2 appeared first on EDN.

The (more) modern drone: Which one(s) do I now own?

Mon, 04/07/2025 - 18:10

Last September, I detailed why I’d decided to hold onto the first-gen DJI Mavic Air drone that I’d bought back in mid-2021 (and DJI had introduced in January 2018), a decision which then prompted me to both resurrect its long-drained batteries and acquire a Remote ID module to get it copacetic with current FAA usage regulations, as subsequently mentioned in October:

Within both blog posts, however, I intentionally alluded to (but didn’t delve into detail on) the newer drone that I’d also purchased to accompany it, aside from dropping hints that it offered (sneak peek: as-needed enabled) integrated Remote ID support and weighed (sneak peek: sometimes) less than 250 grams. That teasing wasn’t (just) to drive you nuts: to do the topic justice would necessitate a blog post all its own. That time is now, and that blog post is this one.

Behold DJI’s Mini 3 Pro, originally introduced in May 2022 and shown here with its baseline RC-N1 controller:

I bought mine (two of them, actually, as it turned out) roughly two years post-intro, in late June (from eBay) and early July (from Lensrentals) of last year. By that time, the Mini 4 Pro successor, unveiled in September 2023, had already been out for nearly a year. So, why did I pick its predecessor? The two drone generations look identical; they take the same batteries, propellers and other parts, and fit into the same cases. And as far as image capture goes, the sensors are identical as well: 48 Mpixel (effective) 1/1.3″ CMOS.

What’s connected to the image sensors, however, leads to one of several key differences between the two generations. The Mini 3 Pro captures video at up to 4K resolution at a 60-fps peak frame rate. The improved ISP (image signal processor) in the Mini 4 Pro, conversely, also captures video at 4K resolution, but this time up to a 100-fps frame rate. Dim-light image quality is also improved, along with the available capture-format options, now also encompassing both pre-processed HDR and post-processed D-LOG. And the camera now rotates a full 90° vertical for TikTok- and more general smartphone viewing-friendly portrait orientation video frames.

Speaking of cameras, what about the two drones’ collision avoidance systems? The DJI Mini 3 Pro has cameras both front and rear for collision avoidance purposes, along with another pointing downward to (for example) aid in landing. The Mini 4 Pro replaces them with four fisheye-lens cameras (at front, rear and both sides) for collision avoidance all around the drone as well as above it, further augmented by two downward facing cameras for stereo distance and a LiDAR sensor, the latter enhancing after-dark sensing and discerning distance-to-ground when the terrain is featureless. By the way, the rumored upcoming DJI Mini 5 Pro further bolsters the drone’s LiDAR facilities, if the leaked images are true and not just Photoshop-created fakes.

The final notable difference involves the contrasting wireless protocols used by both drones to communicate with and stream live video to the user’s controller and, if used, goggles. The Mini 3 Pro leverages DJI’s O3 transmission system, with an estimated range of 12 km while streaming live 1080p 30 fps video. With the Mini 4 Pro and its more advanced O4 system, conversely, the wirelessly connected range increases to an estimated 20 km. Two important notes here:

  • The controllers for the Mini 3 Pro also support the longer-range (15 km) and higher frame rate (1080p 60 fps) O3+ protocol used by larger DJI drones such as the Mavic 3
  • Unfortunately, however, the DJI Mini 4 is not backwards compatible with the O3 and O3+ protocols, so although I’ll be able to reuse my batteries and the like if I do a drone-generation upgrade in the future, I’ll need to purchase new controllers and goggles for it.

That all said, why did I still go with the Mini 3 Pro? The core reason was cost. In assessing the available inventory of used drone equipment, the bulk of the options I found were at both ends of the spectrum: either in like-new condition, or egregiously damaged by past accidents. But given that the Mini 3 Pro had been in the market nearly 1.5 years longer, its available used inventory was much more sizeable. I was able to find two pristine Mini 3 Pro examples for a combined price tag less than that of a single like-new (far from brand new) Mini 4 Pro. And the money saved also afforded me the ability to purchase two used upgraded integrated-display controllers, the mainstream RC and high-end RC Pro, the latter running full-blown Android.

Although enhancements such as higher quality video, more advanced object detection and longer range are nice, they’re not essential in my currently elementary use case, particularly counterbalanced against the fiscal savings I obtained by going prior-gen. The DJI Mini 4’s expanded-scope collision avoidance might be useful when flying the drone side-to-side for panning purposes, for example, or through a grove of trees, neither of which I see myself doing much if any of, at least for a while. And considering that after 12 km the drone will probably already be out of sight, combined with the alternative ability to record even higher quality video to local drone microSD storage, O4 transmission system support also isn’t a necessity for me.

Speaking of batteries (plenty of spares which I now also own, along with associated chargers, and refresh-charge them every two months to keep them viable) and range, let’s get to the drone’s earlier-alluded Remote ID facilities. The Mini 3 Pro (therefore also Mini 4 Pro) has two battery options: a standard 2453 mAh model that, as conveniently stamped right on it to answer enforcement agency inquiries, keeps the drone just below the 250-gram threshold:

and a “Plus” 3850 mAh model that weighs ~50% more (121 grams vs 80.5 grams). The DJI Mini 3 Pro has built-in Remote ID support, negating the need for an add-on module (which, if installed, would push total weight above 249 grams, even using a standard battery). But here’s the slick bit; when the drone detects that a standard battery is in use, it disables Remote ID transmission, both because the FAA doesn’t require it and to address user privacy concerns, given that scanning facilities are available to the masses, not just to regulatory and enforcement entities.

I’ve admittedly been too busy post-purchase to use the drone gear much yet, but I’m looking forward to harassing the neighbors 😉 (kidding!) with it in the future. I’ve also acquired a Goggles Integra set and a RC Motion 2 Controller, both gently used from Lensrentals:

to test out FPV (first-person view) flying, and even a LTE cellular dongle for remote-locale Internet access to the RC Pro controller (unfortunately, such dongles reportedly can’t also be used on the drone itself, at least in the US, for alternative long-range controller connectivity):

And finally, I’ve acquired used examples of the Googles Racing Edition Set (Adorama) and OcuSync Air System (eBay) for the Mavic Air, again for FPV testing purposes:

Stay tuned for more on all of this if (hopefully more accurately, when) I get time to actualize my drone gear testing aspirations. Until then, let me know your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post The (more) modern drone: Which one(s) do I now own? appeared first on EDN.

A design platform for swift vision response system – Part 1

Mon, 04/07/2025 - 11:27

Trigger-based vision systems in embedded applications are used in various domains to automate responses based on visual input, typically in real-time. These systems detect specific conditions or events—for example, motion and object recognition or pattern detection—and trigger actions accordingly.

Key applications include:

  • Surveillance and security: Detecting motion or unauthorized individuals to trigger alarms or recording.
  • Robotics: Identifying and manipulating objects, triggering robotic actions like picking, or sorting based on visual cues.
  • Traffic Monitoring: Triggering traffic light changes or fines when specific conditions like running a red light are detected.
  • Forest monitoring: Trigger-based vision systems can be highly effective in forest environments for a range of applications, including wildlife monitoring, forest fire detection, illegal logging prevention, animal detection, trail camera, and more.
  • Military and defense: Vision systems used in drones, surveillance systems, and military robots for threat detection and target identification.

These systems leverage camera technologies combined with environmental sensors and AI-based image processing to automate monitoring tasks, detect anomalies, and trigger timely responses. For instance, in wildlife monitoring, vision systems can identify animals in remote areas, while in forest fire detection, thermal and optical cameras can spot early signs of fire or smoke.

Low wakeup latency in trigger-based systems is crucial for ensuring fast and efficient responses to external events such as sensor activations, button presses, and equivalent events. These systems rely on triggers to initiate specific actions, and minimizing latency ensures that the system can respond instantly to these stimuli. This ability of a device to quickly wake up when triggered allows the device to remain in a low-power state for a longer time. The longer a device stays in a low-power state, the more efficiently it conserves energy.

In summary, low wakeup latency improves a system’s responsiveness, reliability, scalability and energy efficiency, making it indispensable in applications that depend on timely event handling and quick reactions to triggers.

Aikri platform developed by eInfochips validates this concept. The platform is based on Qualcomm’s QRB4210 chipset and runs OpenEmbedded-based Linux distribution software.

To simulate the real-life trigger scenario, the Aikri platform is put into a low-power state using a shell script and is woken up by a real-time clock (RTC) alarm. The latency between the wakeup interrupt and the frame-reception interrupt at the double data rate (DDR) memory has been measured at around ~400 ms to ~500 ms. Subsequent sections discuss the measurement setup and approach at length.

Aikri platform: Setup details

  1. Hardware setup

The Aikri platform is used to simulate the use case. The platform is based on Qualcomm’s QRB4210 chipset and demonstrates diverse interfaces for this chipset.

The current scope uses only a subset of interfaces available on the platform; refer to the following block diagram.

Figure 1 The block diagram shows hardware peripherals used in the module. Source: eInfochips

The QRB4210 system-on-module (SoM) contains Qualcomm’s QRB4210 application processor, which connects to DDR RAM, embedded multimedia card (eMMC) as storage, Wi-Fi, and power management integrated circuit (PMIC). The display serial interface (DSI)-based display panel is connected to the DSI connector available on the Aikri platform.

Similarly, the camera daughter board is connected to CSI0 port of the platform. The camera daughter card contains an IMX334 camera module. The camera sensor outputs 3864×2180 at 30 frames per second on four lanes of camera serial interface (CSI) port.

DSI panel is built around the OTM1901 LCD. This LCD panel supports 1920×1080 output resolution. Four lanes of the DSI port are used to transfer video data from the application processor to the LCD panel. PMIC available on QRB4210 SoM contains RTC hardware. While the application processor goes to the low-power mode, the RTC hardware inside the PMIC remains active with the help of a sleep clock.

  2. Software setup

The QRB4210 application processor runs an OpenEmbedded-based Linux distribution using the 5.4.210 Linux kernel version. The default distribution is trimmed down to reduce wakeup latency while retaining necessary features. A bash script is used to simulate the low-power mode entry and wakeup scenario.

The Weston server generates display graphics and GStreamer captures frames from camera sensors. Wakeup latency is measured by taking timer readings from Linux kernel when relevant interrupt service routines are called.

Latency measurement: Procedure overview

To simulate the minimal latency wakeup use case, a shell-based script is run on the Aikri platform. The script automates the simulation of trigger-based low latency vision system on Aikri QRB4210 module.

Below is the flow for the script performed on QRB4210 platform, starting from device bootup to measuring latency.

Figure 2 Test script flow spans from device bootup to latency measurement. Source: eInfochips

The above diagram showcases the operational flow of the script, beginning with the device bootup, where the system initializes its hardware and software. After booting, the device enters the active state, signifying that it’s fully operational and ready for further tasks, such as keeping Wi-Fi configured in an inactive state and probing the camera to check its connection and readiness.

Additionally, it configures the GStreamer pipeline for 1280×960@30 FPS stream resolution. The camera sensor registers are also configured at this stage based on the best-match resolution mode. During this exercise, 3840×2160@30 FPS is the selected resolution for IMX334 camera sensor. Once the camera is confirmed as configured and functional, the device moves to the camera reconfigure step, where it adjusts the camera stream settings like stop/start.

The next step is to set the RTC wake alarm, followed by triggering a device to suspend mode. In this state, the device waits for the RTC alarm to wake it up. Once the alarm triggers, the device transitions to the wakeup state and starts the camera stream.

The device then waits for the first frame to arrive in DDR and measures the latency between capturing the frame and device wakeup Interrupt Request (IRQ). After measuring latency, the device returns to the active state, where it remains ready for further actions.

The process then loops back to the camera reconfigure step, repeating the sequence of actions until the script stops externally. This loop allows the device to continuously monitor the camera, measure latency, and conserve power during inactive periods, ensuring efficient operation.

Latency measurement strategy

While the device is in a suspended state and the RTC alarm triggers, the time between two key events is measured: the wakeup interruption and the reception of the first frame from the camera sensor into the DDR buffer. The latency data is measured in three different scenarios, as outlined below:

  • When the camera is in the preview mode
  • When recording the camera stream to eMMC
  • When recording the camera stream to the SD card

Figure 3 Camera pipeline is shown in the preview mode. Source: eInfochips

Figure 4 Camera pipeline is shown in the recording mode. Source: eInfochips

As shown in the above figures, after the DDR receives the frame, it moves to the offline processing engine (OPE) before returning to the DDR. From there, the display subsystem previews the camera sensor data. In the recording use case, the data is transferred from DDR to the encoder and then stored in the storage. Once the frame is available in DDR, it ensures that it’s either stored in the storage or previewed on the display.

Depending on the processor CPU occupancy, it may take a few milliseconds to process the frame, based on the GStreamer pipeline and the selected use case. Therefore, while measuring latency, we consider the second polling point to be when the frame is available in the DDR, not when it’s stored or previewed.

Since capturing the trigger event is crucial, minimizing latency when capturing the first frame from the camera sensor is essential. The frame is considered available in the DDR when the thin front-end (TFE) completes processing the first frame from the camera.

Latency measurement methods

In the Linux kernel, there are several APIs available for pinpointing an event and time measurement, each offering varying levels of precision and specific use cases. These APIs enable tracking of time intervals, measuring elapsed time, and managing system events. Below is a detailed overview of the commonly used time measurement APIs in the Linux kernel:

  • ktime_get_boottime: Provides the current “time since boot” in a ktime_t value, expressed in nanoseconds.
  • get_jiffies: Returns the current jiffy count that represents the number of ticks since the system booted. Time must be calculated based on the system clock.

Jiffies don't update during the suspend state, while the boot-time clock behind ktime_get_boottime continues to advance even while the system is suspended. Additionally, ktime_t offers nanosecond resolution, making it far more precise than jiffies.
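A minimal kernel-space sketch of this timestamping approach is shown below. The hook points wakeup_irq_handler() and frame_done_handler() are hypothetical stand-ins for the platform's actual wakeup and TFE frame-done interrupt handlers, which the article does not name.

/* Sketch: timestamp the wakeup IRQ and the frame-done event with
 * ktime_get_boottime() and report the difference in microseconds.
 * wakeup_irq_handler() and frame_done_handler() are hypothetical
 * stand-ins for the platform's real handlers. */
#include <linux/ktime.h>
#include <linux/printk.h>

static ktime_t wakeup_ts;

static void wakeup_irq_handler(void)
{
    wakeup_ts = ktime_get_boottime();    /* boot-time clock runs across suspend */
}

static void frame_done_handler(void)
{
    ktime_t now = ktime_get_boottime();
    s64 latency_us = ktime_to_us(ktime_sub(now, wakeup_ts));

    pr_info("wakeup-to-first-frame latency: %lld us\n", latency_us);
}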

  1. Usage of GPIO toggle method for latency measurement

To provide a second level of assurance, a GPIO toggle-based method is also employed in the measurement. The GPIO is toggled between the two reference events, creating a positive or negative pulse whose width can be measured on an oscilloscope and represents the latency between the two events.

When the device wakes up, the GPIO is driven low; once the camera driver receives the frame in DDR, the GPIO is driven high. The GPIO signal therefore forms a negative pulse, and measuring its width on an oscilloscope gives the latency between the wakeup interrupt and the frame-available event.
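The event-marking side of this method could look like the sketch below. The descriptor latency_gpio and the two marker functions are hypothetical; a real driver would request the GPIO at probe time (for example with devm_gpiod_get) and call the markers from its wakeup and frame-done paths.

/* Sketch: bracket the two reference events with a GPIO level change so the
 * interval shows up as a negative pulse on an oscilloscope. latency_gpio is
 * a hypothetical descriptor assumed to be requested during driver probe. */
#include <linux/gpio/consumer.h>

static struct gpio_desc *latency_gpio;

static void wakeup_event_mark(void)
{
    gpiod_set_value(latency_gpio, 0);    /* pulse start: drive low at wakeup */
}

static void frame_available_mark(void)
{
    gpiod_set_value(latency_gpio, 1);    /* pulse end: drive high on frame arrival */
}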

  2. Usage of RTC alarm as wakeup source

The RTC keeps ticking from a sleep clock even when the processor enters low-power mode; it continuously maintains time and triggers a wake alarm when the set time is reached. This wakes the system or initiates a scheduled task, and the alarm time can be specified in seconds since the Unix epoch or relative to the current time.

On Linux, tools like rtcwake and the /sys/class/rtc/rtc0/wakealarm file are used for configuration; a minimal configuration sketch follows the list below. The system can wake from power-saving modes like suspend-to-RAM or hibernation for tasks like backups or updates. This feature is useful for automation but may require time-zone adjustments because the RTC stores time in UTC.

  • The RTC wake alarm is set by specifying a time in seconds in sysfs or using tools like rtcwake.
  • It works even when the system is in a low-power state like suspension or hibernation.
  • To clear the alarm, write a value of zero to the wake alarm file.
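The userspace sketch below illustrates the arm-and-suspend sequence, assuming the default /sys/class/rtc/rtc0/wakealarm node, suspend-to-RAM entry via /sys/power/state, root privileges, and an arbitrary 30-second wake offset.

/* Sketch: clear any pending alarm, arm the RTC wake alarm 30 s from now,
 * then suspend to RAM. Paths and the 30 s offset are example values. */
#include <stdio.h>
#include <time.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");

    if (!f)
        return -1;
    fprintf(f, "%s", val);
    return fclose(f);
}

int main(void)
{
    char buf[32];

    write_str("/sys/class/rtc/rtc0/wakealarm", "0");       /* clear old alarm */
    snprintf(buf, sizeof(buf), "%lld", (long long)time(NULL) + 30);
    write_str("/sys/class/rtc/rtc0/wakealarm", buf);        /* absolute epoch time */
    write_str("/sys/power/state", "mem");                   /* suspend-to-RAM */
    return 0;
}

The command rtcwake -m mem -s 30 performs the same arm-and-suspend step in a single call.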

A typical trigger-based system receives triggers from external sources, such as an external co-processor or the external environment. When simulating with the script, the RTC wakeup alarm serves as that external event, acting as the trigger for the QRB4210 application processor.

Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.

Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.

Editor’s Note: The second part of this article series will further expand into wakeup latency and power consumption of this trigger-based vision system.

Related content

The post A design platform for swift vision response system – Part 1 appeared first on EDN.

Flip ON flop OFF without a flip/flop

Fri, 04/04/2025 - 17:22

There’s been a lot of interesting conversation and DI teamwork lately devising circuits for ON/OFF power control using inexpensive momentary-contact switches (See “Related Content” below). 

Wow the engineering world with your unique design: Design Ideas Submission Guide

Most of these designs have incorporated edge triggered flip/flops (e.g. the CD4013) but of course other possibilities exist. Figure 1 shows one of them.

Figure 1 Flip/flop-free debounced push ON push OFF toggling with power-on reset and low parts count.

Okay, I can (almost) hear your objection. It isn’t (technically) accurate to describe Figure 1 as flip/flop free because the two inverters, U1a and U1b, are connected as a bistable latch. That is to say, a flip/flop. It’s really how its state gets toggled by S1 that’s different. Here’s how that works.

While sitting in either ON or OFF with S1 un-pushed, U1a, being an inverter, charges C2 to the opposite state through R1. So, when S1 gets mashed, C2 yanks U1a’s input, thereby toggling the latch. The R1C2 time-constant of 100 ms is long enough to guarantee that if S1 bounces on make, as it most assuredly will, C2’s complementary charge will ride out the turbulence. 
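For reference, the debounce interval is simply the R1C2 product; with illustrative values (assumptions, not taken from the figure) of R1 = 1 MΩ and C2 = 0.1 µF:

τ = R1 × C2 = 1 MΩ × 0.1 µF = 100 ms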

Then, because R2 < R1, the positive feedback through R2 will overpower R1 and keep the same polarity charge on C2 for as long as S1 is held closed. This ensures that later, when S1 is released, if it bounces on break (as some switches are rumored to be evil enough to do), the new latch state won’t be lost. PFET Q1 now transfers power to the load (or doesn’t). Thus, can we confidently expect reliable flipping and flopping and ONing and OFFing. 

So, what’s the purpose of C1? Figure 2 explains.

Figure 2 Power-up turn-off: the rising edge of V+ at the PFET's source, with its gate held low by the RC networks, turns the PFET on.

If V+ has been at zero for a while (because the battery was taken out or the wall wart unplugged), C1 and C2 will have discharged likewise to zero (or thereabouts). So, when V+ is restored, they will hold the inverter's FET gates at ground. This will make the PFET's gate negative relative to its (rising) source, turning it on, pulling its output high, and resetting the latch to OFF.

So why R3?

When the latch sits for a while with S1 unpushed, whether ON or OFF, C1 will charge to V+. Then, when S1 is depressed (note this doesn’t necessarily mean it’s unhappy), C1 will be “quickly” discharged. Without R3, “quickly” might be too much of a good thing and involve a high enough instantaneous current through S1, and hence enough energy deposited on its contacts, to shorten its service life.

Thus, making us both unhappy!

Here's a final thought about parts count. The 4069 is a hextuple part, which makes Figure 1's use of only two of its six inverters look wasteful. We can hope the designer can find a place for the unused elements elsewhere in their application, but what if she can't?

Then it might turn out that Figure 3 will work.

Figure 3 Do something useful with the other 2/3rds of U1, eliminate Q1 for loads of less than 10 mA, and gain short-circuit protection for free.

Ron for the 4069 is V+ dependent but can range as low as 200 Ω (typical) at V+ > 10 V. Therefore, if we connect all five of the available inverters in parallel as shown in Figure 3, we’d get a net Ron of 200/5 = 40 Ω from V+ to Vout. This might be adequate for a low power application, making Q1 redundant. As an added benefit, an accidental short to ground will promptly and automatically turn the latch and the shorted load OFF. U1 will therefore be that much less likely to catch fire, and us to be unhappy! Note it also works if the latch is OFF and the output gets shorted to V+.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Flip ON flop OFF without a flip/flop appeared first on EDN.
