FPGA learning made easy with a virtual university program

Altera University aims to introduce students to the world of FPGAs and digital logic programming tools affordably and easily by providing curricula, tutorials, and lab exercises that bridge the gap between academia and real-world design work. The program comprises four courses focused on digital logic, digital systems, computer organization, and embedded systems.
This university program will offer professors, researchers, and students access to a broad collection of pre-developed curricula, software tools, and programmable hardware to help accelerate the FPGA learning experience. Moreover, professors can restructure the lab work with pre-developed curricula and hands-on lab kits.
The program aims to accelerate the FPGA learning experience by making available a broad collection of pre-developed curricula, software tools, and programmable hardware. Source: Altera
“We established Altera University with the singular goal of training the next generation of FPGA developers with both the AI and logic development skills needed to thrive in today’s modern workforce,” said Deepali Trehan, head of product management and marketing at Altera. “Through Altera University, we’re enabling professors to bring real-world experiences to their students using cutting-edge programmable solutions.”
Altera is also offering discounts on select FPGAs for developing custom hardware solutions, including a 20% discount on select Agilex 7 FPGA-based development kits. The company also offers a 50% discount on LabsLand, a remote laboratory that provides access to Altera FPGAs.
Altera University also offers higher-level FPGA courses that include an AI curriculum to ensure that students can stay aligned with the latest industry trends and develop an understanding of usage models for FPGAs in the AI workflow.
Altera University’s academic program website provides more information on curricula, software tools, and programmable hardware.
Related Content
- All About FPGAs
- FPGAs for beginners
- Power-aware FPGA design
- FPGA programming step by step
- Embedded design with FPGAs: Development process
The post FPGA learning made easy with a virtual university program appeared first on EDN.
The transformative force of ultra-wideband (UWB) radar

UWB radar is an augmentation of current ultra-wideband (UWB) ranging techniques. To understand the technical side and potential applications of UWB radar, let’s start at the beginning with the platform it builds on. UWB is a communication protocol that uses radio waves across a wide bandwidth, with multiple channels anywhere within the 3.1 to 10.6 GHz spectrum. The most common frequency ranges for UWB are between 6 and 8 GHz.
While we’ve only recently seen its use in automotive and other industries, UWB has been around for a very long time, originally used back in the 1880s when the first radio-signal devices relied on spark-gap transmitters to generate radio waves.
Due to certain restrictions, UWB was mainly used for government and military applications in the intervening years. In 2002, however, the modulation technique was opened for public use at certain frequencies in the GHz range and has since proliferated into various applications across multiple industries.
The wide bandwidth delivers a host of benefits in the automotive world, not least that UWB is less susceptible to interference than narrowband technologies. What makes UWB truly transformative is its ability to measure distances precisely and accurately to perform real-time localization. When two devices directly connect and communicate using UWB, we can measure how long it takes for the radio wave pulses to travel between them, which is commonly referred to as Time-of-Flight (ToF).
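As a rough sketch of the arithmetic involved (the symbols below are illustrative rather than taken from the standard), a simple single-sided two-way exchange lets a device estimate ToF from its measured round-trip and reply times, and distance follows directly from the speed of light:

$$ t_{\mathrm{ToF}} = \frac{t_{\mathrm{round}} - t_{\mathrm{reply}}}{2}, \qquad d = c \cdot t_{\mathrm{ToF}} $$

Since radio waves travel roughly 30 cm per nanosecond, nanosecond-scale timing resolution translates into distance resolution on the order of tens of centimeters, which is why the sharp, wideband pulses of UWB matter so much for ranging accuracy. Production schemes, such as double-sided two-way ranging in IEEE 802.15.4z, add corrections for clock offset and drift.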
Figure 1 For automotive applications, UWB radar provides greater precision for real-time localization with a single device. Source: NXP
This enables UWB to achieve hyper-accurate distance measurements in real-time. This accuracy, along with security features incorporated within the IEEE 802.15.4z standard, makes UWB particularly useful where security is paramount—such as keyless entry solutions.
Digging into the details
Where typical UWB applications require two sensors to communicate and operate, UWB radar only requires a single device. It uses an impulse radio technique similar to UWB’s ranging concept, where a sequence of short UWB pulses is sent, but in place of a second device actively returning the signal, a UWB radar sensor measures the time it takes for the initial series of pulses to be reflected by objects. The radar technology benefits from the underlying accuracy of UWB and provides extremely accurate readings, with the ability to detect movements measured in millimeters.
For a single UWB radar sensor to receive and interpret the reflected signal, it first must be picked up by the UWB antenna and then amplified by a low noise amplifier (LNA). To process the frequencies, the signal is fed into an I/Q mixer driven by a local oscillator. The resulting baseband signal is digitized by an analog-to-digital converter (ADC), fed into a symbol accumulator, and the results are correlated with a known preamble sequence.
This generates a so-called channel impulse response (CIR), which represents the channel’s behavior as a function of time and can be used to predict how the signal will distort as it travels. The sequence of CIR measurements over time forms the raw data of a UWB radar device.
Additionally, the Doppler effect can be exploited: by measuring the shift in a wave’s frequency as the reflecting object moves, the radar can calculate velocity and generate a range-Doppler plot.
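As a sketch of the relationship being exploited (symbols are illustrative), for a reflector moving with radial velocity v relative to the sensor, the round-trip Doppler shift of a carrier at frequency f_c is approximately:

$$ f_d \approx \frac{2 v f_c}{c} \quad\Longrightarrow\quad v \approx \frac{f_d \, c}{2 f_c} $$

Estimating f_d across a sequence of CIR snapshots, while the pulse delay gives range, is what populates the range-Doppler plot.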
Figure 2 Doppler effect turns UWB technology into a highly effective radar tool. Source: NXP
This process makes it possible to use UWB as a highly effective radar device which can detect not only that an object is present, but how it’s moving in relation to the sensor itself, opening a new world of applications over other wireless standards.
How the automotive industry is unlocking new applications
UWB radar has huge potential, with specific attributes that deliver plenty of benefits. It operates at comparatively low frequencies, typically in the 6 to 8 GHz range, and the resulting longer wavelengths make it highly effective at passing through solid materials such as clothing, plastics, and even car seats.
What’s more, the combination of pinpoint accuracy, velocity detection, low latency, and a clean signal is very powerful. It opens up a whole range of potential applications around presence and gesture detection, intrusion alerts, and integration with wider systems for reactive automation.
The automotive sector is one industry that stands to gain a lot from UWB ranging and radar. OEMs have previously struggled with weaker security standards when it comes to applications such as keyless entry, with consumers facing vehicle thefts and rising insurance premiums as a result.
Today’s key fob technologies are often the subject of relay station attacks, where car access signals are intercepted and replicated to emulate a valid access permission signal. UWB sensors protect the integrity of the distance estimation, preventing this kind of signal imitation.
UWB is already found in many smartphones, providing another possibility that OEMs can use to increase connectivity, turning phones into secure state-of-the-art key fobs. This enables a driver to open and even start a car while leaving their phone in their pocket or bag, and the same secure functionality can be applied to UWB-enabled key fobs.
UWB radar goes one step further with applications such as gesture control, helping drivers to open the trunk or bonnet of a car without using their hands. Of course, such features are already available using kick sensors at the front or rear of the vehicle, but this requires additional hardware, which means additional costs.
UWB anchor points can either be used in Ranging Mode for features such as smart access and keyless entry, or in Radar Mode for features like kick sensing, helping to increase functionality without adding costs or weight.
UWB radar’s greater fidelity and ability to detect signs of life is arguably where the most pressing use case lies, however. Instances of infants and children accidentally left in vehicles and suffering heatstroke, and even death, from heat exposure have led the European New Car Assessment Programme (Euro NCAP) to introduce rating points for child presence detection systems and to require them from 2025 onward.
Figure 3 UWB radar facilitates child presence detection without additional hardware. Source: NXP
A UWB radar system can accurately scan the car’s interior using the same UWB anchor points as the vehicle’s digital key, without needing additional sensors. This helps OEMs implement child presence detection systems without having to invest in, or package, additional hardware. By detecting the chest movements of the child, a UWB radar system can alert the driver; its penetration capabilities help the pulses pass easily through obstructions such as blankets, clothing, and even car seats.
The art of mastering UWB radar
UWB radar has proven its effectiveness in detecting the presence of objects of interest with an emphasis on signs of life. The focus of UWB in the automotive sector is currently on short-range applications typically measured within meters, which makes it ideal for use within the cabin or trunk of a vehicle.
There are some interesting challenges when it comes to interpreting data with UWB radar. With automotive applications, the software and algorithms need to detect the required information from the provided signals, such as differentiating between a child and an adult, or even an animal.
Using UWB radar as a child presence detection solution is also more energy-hungry than other UWB applications because the radio is on for a longer period. It’s still more energy efficient than other technologies, however, and the difference doesn’t necessarily pose a problem in the automotive sphere.
Research is currently being done to optimize the on-time of the UWB chip, along with enabling different power modes at the IC level, allowing the development of smarter and more effective core applications, particularly regarding how they use the energy budget. These updates can be carried out remotely over-the-air (OTA).
Interference is another area that needs to be considered when using UWB radar. If multiple applications in the vehicle are designed to use UWB, it’s important that they are coordinated to avoid interference. The goal is that all UWB applications can happily coexist without interference.
UWB radar outside automotive
Through child presence detection, UWB radar will save lives in the automotive sector, but its potential reaches far and wide, not least because of its ability to calculate velocity and accurately detect very small movements. Such abilities make UWB radar perfectly suited to the healthcare industry.
There is already literature available on how UWB radar can potentially be used in social and healthcare situations. It can recognize presence, movement, postures, and vital signs, including respiration rates and heartbeat detection.
These same attributes also make UWB radar an appealing proposition when it comes to search and rescue. The ability to detect the faintest of life signs through different materials can make a huge difference following earthquakes, where time is of the utmost importance when it comes to locating victims buried under rubble.
UWB radar’s precise movement detection also enables highly effective gesture recognition capabilities, offering a whole host of potential applications outside of the automotive sector. When combined with computer vision and AI technologies, for example, UWB radar could provide improved accessibility and user experiences, along with more consumer-led applications in gaming devices.
One of the most readily accessible applications for UWB radar is the augmentation of smart home and Internet of Things (IoT) deployments. Once again, presence detection capabilities can provide a cost-effective alternative to vision or thermal cameras while affording the same levels of reliability.
Figure 4 UWB radar can be employed in smart home and IoT environments. Source: NXP
When combined with power management systems such as heating, lighting and displays, buildings can achieve far greater levels of power efficiency. UWB radar also has exciting potential when it comes to making smart homes even smarter. For example, with the ability to recognize where people are located within rooms, it can control spatial audio, delivering a more immersive audio experience as a result.
Such spatial awareness could also lead to additional applications within social care, offering the ability to monitor the movement of elderly people with cognitive impairments. This could potentially negate the need for wearables for monitoring purposes, which can easily be forgotten or lost.
Looking to the future
The sheer breadth of possibilities that UWB radar enables is what makes the technology such a compelling proposition. Being able to detect precise micro movements while penetrating solid materials opens the door to near endless applications.
UWB radar could provide more effective and accurate information for seatbelt reminder systems, for example, with the ability to detect where passengers are sitting. Combined with information about whether the seatbelt is plugged in or not, this can help to avoid setting off alarms by accident, such as when a bag is placed on a seat. The seat belt reminder is a natural extension to child presence detection, but where the position of the occupant also needs to be determined.
UWB radar could also be used for more accurate security and movement detection, not only outside the vehicle, but inside as well. It’s especially effective as an intrusion alert, detecting when somebody has smashed a window or entered the vehicle.
This extra accuracy can help to avoid falsely setting off alarms during bad weather, only alerting the owner to possible thefts when signs of life are detected alongside movement. It even opens the door to greater gesture recognition within the vehicle itself, enabling drivers or passengers to carry out additional functions without having to touch physical buttons.
The ability to integrate these features without requiring additional sensors, while using existing hardware, will make a huge difference for OEMs and eventually the end consumer. Through a combination of UWB ranging and UWB radar, there’s potential to embrace multiple uses for every sensor, from integrating smarter digital keys and child presence detection to kick sensing, seatbelt reminders, and intrusion alert. This will save costs, weight, and reduce packaging challenges.
Such integration can also impact the implementation of features. Manufacturers will be able to utilize OTA updates to deliver additional functionality, or increased efficiency, without any additional sensors or changes to hardware. In the spirit of software-defined vehicles (SDV), this also means that OEMs don’t need to decide during production which feature or technology needs to be implemented, with UWB radar helping to deliver maximum flexibility and reduced complexity.
We’re at the beginning of an exciting journey when it comes to UWB radar, with the first vehicles set to hit the road in 2025, and a whole lot more to come from the technology in the future. With the ability to dramatically cut down on sensors and hardware, it’s one of the most exciting and transformative wireless technologies we’ve seen yet, and as industry standards, integrations, and guides are put in place, adoption will rise and applications proliferate, helping UWB radar to meet its incredible potential.
Bernhard Großwindhager, Marc Manninger and Christoph Zorn are responsible for product marketing and business development at NXP Semiconductors.
Related Content
- UWB to target ticket-less passengers
- Ultra-wideband tech gets a boost in capabilities
- NXP’s Trimension SR250 Combines UWB Radar and Secure Ranging
- Advances in AI-Enabled Automotive Radar Sensors and Audio Processors
- UWB radar’s potential to drive digital key for safety, security and beyond
The post The transformative force of ultra-wideband (UWB) radar appeared first on EDN.
Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products

Electronics are everywhere. As daily life becomes more digital and more devices become software defined and interconnected, the prevalence of electronics will inevitably rise. Semiconductors are what makes this all possible. So, it is no surprise that the entire semiconductor industry is on a path to being a $1 trillion market by 2030.
While accelerating demand will help semiconductors reach impressive gains, many chip makers may be held back by the costs of semiconductor design and manufacturing. Already, building a cutting-edge fab costs about $19 billion and the design of each chip is around a $500 million investment on average. With AI integration on the rise in consumer devices also fueling growth, companies will need to push the boundaries of their electronic design and manufacturing processes to cost effectively supply chips at optimal performance and environmental efficiency.
Ensuring the semiconductor industry continues its aggressive growth will require organizations to approach both fab commissioning and operation as well as chip design with a more unique, collaborative strategy. The three pillars of this strategy are:
- Collaborative semiconductor business platform
- Software-defined semiconductor enabled for software-defined products
- The comprehensive digital twin
First pillar: Collaborative semiconductor business platform
Creating next-generation semiconductors is expensive yet necessary as more products begin to rely heavily on software. Ensuring maximum efficiency within a business will be imperative. Consequently, many chip makers are striving to create metrics-driven environments for semiconductor lifecycle optimization. Typically, companies use antiquated methods to track roles and responsibilities, causing them to rely on information that can be weeks old. As a result, problem solving can become inefficient, negatively impacting the product lifecycle.
Chip makers must upgrade to a truly metrics-driven business platform that enables real-time analysis and facilitates the management of the entire process, from new product introduction through design and verification to final product delivery. By using semiconductor lifecycle management as the foundation and accessing the wealth of data generated during design and manufacturing, companies can take control of their new product introduction processes and have integrated traceability throughout the product lifecycle.
Figure 1 Semiconductor lifecycle optimization is driven by real-time metrics analysis, enabling seamless collaboration from design to final product delivery. Source: Siemens
With this collaborative business platform in place, businesses can know the status of their teams at any point during a project. For example, the design team can take advantage of real-time data to get an accurate status of the project at any time, without relying on manually generated status reports with weeks-old data. Meanwhile, manufacturing can focus on both the front and back ends of IC manufacturing planning with predictability based on actual data. Once all of this is in place, companies can feasibly build AI metric analysis and a business intelligence platform on top of it.
Second pillar: Software-defined semiconductor for the software-defined product (SDP)
Software is increasingly being used to define the customer experience with a product (Figure 2). Because of this, SDPs will become increasingly central to the evolution of the semiconductor industry. And as AI and ML workloads continue to drive requirements, the traditional boundaries between hardware and software will blur.
Figure 2 Software-defined products are driving the evolution of semiconductors, as AI and ML blur the lines between hardware and software for enhanced innovation and efficiency. Source: Vertigo3d
The convergence of software and hardware will force the semiconductor industry to rethink everything from design methodologies to verification processes. Success in this new landscape will require semiconductor companies to position themselves as enablers of software innovation through holistic co-optimization approaches. No longer will hardware and software teams work in siloed environments; they will become a holistic engineering team that works together to optimize products.
Improved product optimization from integrated teams works in tandem with the industry’s trend toward purpose-built compute platforms to handle the software workload. Consumers are already seeking out customizable chips and they will continue to do so in even greater numbers as general-purpose processors lag expectations. Simultaneously, companies are already creating specialized parts for their products. Apple has several different processors for its host of products; this will become even more important as software becomes more crucial to the functionality of a product.
Supporting the software defined products not only impacts the semiconductors that drive the software but impacts everything from the semiconductor design through ECAD, E/E, and MCAD design. Chip makers need to create environments where they can handle these types of products while getting the requirements right and then drive all requirements to all design domains to develop the product correctly moving forward.
Third pillar: The comprehensive digital twin
Part of creating improved environments to better fabricate next-generation semiconductors is making sure that the process remains affordable. To combat production costs that are likely to rise, semiconductor companies should lean into digitalization and leverage the comprehensive digital twin for both semiconductor design and fabrication.
The comprehensive and physics-based Digital Twin (cDT) addresses the challenge of how to weave together the disparate engineering and process groups needed to design and create tomorrow’s SW-defined semiconductor. To enable all these players to interact early and often, the cDT incorporates mechanical, electronic, electrical, semiconductor, software, and manufacturing to fully capture today’s smart products and processes.
Specifically, the cDT merges the real and digital worlds by creating a set of consistent digital models representing different facets of the design that can be used throughout the entire product and production lifecycle and the supply chain (Figure 3). Now it is possible to do more virtually before committing to expensive prototypes or physically commissioning a fab. The result is higher quality products that meet aggressive cost, timeline, and sustainability goals.
Figure 3 The comprehensive digital twin merges real and digital worlds, enabling faster product introductions, higher yields, and improved sustainability by simulating and optimizing semiconductor design and production processes. Source: Siemens
In design, this “shift-left” provides a physics-based virtual environment for all the engineering teams to interact and create, simulate, and improve product designs. Design and manufacturing iterations in the virtual world happen quickly and consume few resources outside of the engineer’s brain power, enabling them to explore a broader design space. Then in production, it empowers companies to virtually evaluate and optimize production lines, commission machines, and examine entire factories or networks of factories to improve production speed, efficiency, and sustainability. It can analyze and act on real data from the fab and then use that wealth of data for AI metrics analysis.
Businesses can also leverage the cDT to virtualize the entire product process design for the SW-defined product. This digital twin enables manufacturers to simulate and optimize everything from initial design concepts to manufacturing processes and final product integration, which dramatically reduces development cycles and improves outcomes. Companies can verify and test changes earlier in the design process while keeping teams across disciplines in sync and on track, leading to enhanced design exploration and optimization. And since sustainability starts at design, the digital twin can help chip makers meet sustainability metrics by enabling them to choose components that have lower carbon footprints, more thermal tolerance, and reduced power requirements.
The comprehensive digital twin for the semiconductor ecosystem helps businesses manage the complexities of the SDP as well as mechanical and production requirements while bolstering efficiency. Benefits of the digital twin include:
- Faster new product introductions: Virtualizing the entire semiconductor ecosystem allows faster time to yield. Along with the quest to pursue “More than Moore,” creating a virtual environment for heterogenous packaging allows for early verification and optimization of advanced packaging techniques.
- Faster path to higher yields: Simulating the production process makes enhancing IC quality easier, enabling workers to enact changes dynamically on the shop floor to quickly achieve higher yields for greater profitability
- Traceability and zero defects: It is now possible to update the digital twin of both the product and production in tandem with their real-world counterparts, enabling manufacturers to diagnose issues and detect anomalies before they happen in the pursuit of zero defects
- Dynamic planning and scheduling: Since the digital twin provides an adaptive comparison between the physical and digital counterparts, it can detect disturbances within systems and trigger rescheduling in a timely manner
Creating next-generation semiconductors is expensive. Yet, chip manufacturers must continue to develop and fabricate new designs that require ever-more advanced fabrication technology to efficiently create semiconductors for tomorrow’s software-defined products. To handle the changing landscape, businesses within the semiconductor industry will need to rely on the comprehensive digital twin and adopt a collaborative semiconductor business platform that enables them to partner both inside and outside of the industry.
The emergence of collaborative alliances within the semiconductor industry, as well as across related industries, will break down traditional organizational boundaries, enabling unprecedented levels of cooperation across and beyond the semiconductor industry. The result will be extraordinary innovation that leverages collective expertise and capabilities. Already, well-established semiconductor companies have begun partnering to move forward in this rapidly evolving ecosystem. When Tata Group wanted to build fabs in India, Analog Devices, Tata Electronics, and Tata Motors signed an agreement that allows Tata to use Analog Devices’ chips in applications like electric vehicles and network infrastructure. At the same time, Analog Devices will be able to take advantage of Tata’s plants to fab its next-generation chips.
And this is just one example of the many innovative collaborations starting to emerge. The marketplace is now moving toward cooperation and partnerships that have never existed before across different industries to develop the technology and capabilities needed to move forward. To ease this transition, the semiconductor industry is building a cross-industry collaboration environment that will facilitate these strategic partnerships.
Michael Munsey is the Vice President of Electronics & Semiconductors for Siemens Digital Industries Software. In this role, Munsey is responsible for setting the strategic direction for the company with a focus on helping customers drive unprecedented growth and innovation in the semiconductor and electronics industries through digital transformation.
Munsey began his career as a designer at IBM more than 35 years ago and has the distinction of contributing to products that are currently in use on two planets: Earth and Mars, the latter courtesy of his work on the Mars Rover.
Before joining Siemens in 2021, Munsey spent his career working in positions of increasing responsibility across the semiconductor and electronics industries where he did everything from leading cross-functional teams to driving product creation and executing business development in new regions to setting the vision for corporate strategy. Munsey holds a BSEE in Electrical and Electronics Engineering from Tufts University.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- Shift in electronic systems design reshaping EDA tools integration
- EDA toolset parade at TSMC’s U.S. design symposium
- Overcoming challenges in electronics design landscape
The post Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products appeared first on EDN.
Optimize power and wakeup latency in swift response vision systems – Part 2

Part 1 of this article series provided a detailed overview of a trigger-based vision system for embedded applications. It also delved into latency measurements of this swift response vision system while explaining latency-related design strategy and measurement methods. Now, Part 2 provides a detailed treatment of optimizing power consumption and wakeup latency of this embedded vision system.
In Linux, power management is a key feature that allows the system to enter various sleep states to conserve energy when the system is idle or in a low-power state. These sleep states are typically categorized into “suspend” (low-power modes) and “hibernate” (suspend to disk) modes that are part of the Advanced Configuration and Power Interface (ACPI) specification. Below are the main Linux sleep states.
Figure 1 Here is a highlight of Linux sleep states. Source: eInfochips
- Wakeup (Idle): System fully active; CPU and components fully powered, used when the device is actively in use; high power consumption, no resume time needed.
- Deep sleep (Suspend-to-RAM): CPU and motherboard components mostly disabled, RAM refreshed, used for deeper low-power states to save energy; low power consumption varying by C-state, fast resume time (milliseconds).
- System sleep (Suspend-to-Idle): CPU frozen, RAM in self-refresh mode, shallow sleep state for low-latency, responsive applications (for example, network requests); low power consumption, higher than hibernate, fast resume time (milliseconds).
- Hibernate (Suspend-to-Disk): Memory saved to disk, system powered off, used for deep power savings over long periods (for instance, laptops); almost zero power consumption, slow resume time (requires reading from disk).
Suspend-to-RAM (STR) offers a good balance, as it powers down most of the system but keeps RAM active (in self-refresh mode) for a quick resume, making it suitable for devices needing quick wakeups and energy savings. Hibernate, on the other hand, saves more power by writing the system’s state to disk and powering down completely, but results in slower wakeup times.
Qualcomm’s chips, especially those found in Linux embedded devices, support two power-saving modes to help optimize battery life and improve efficiency. These power-saving modes are typically controlled through the system’s firmware, the operating system, and specific hardware components. Here are the main power-saving modes supported by Qualcomm-based chipsets:
- Suspend to RAM (STR)
- Suspend to Idle (S2Idle)
Suspend mode is triggered by writing “mem” or “freeze” to /sys/power/state.
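For reference, here is a minimal sketch of how these modes are typically entered from a shell on an embedded Linux target (standard sysfs interface; configuring wakeup sources beforehand is omitted):

# List the sleep states this kernel build advertises
cat /sys/power/state

# Enter Suspend-to-RAM (STR); the system stays suspended until a wakeup source fires
echo mem > /sys/power/state

# Alternatively, enter Suspend-to-Idle (S2Idle)
echo freeze > /sys/power/state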
Figure 2 Here is how the source flow looks when the device enters sleep and wakes up. Source: eInfochips
As the device goes into suspend modes, it performs the following tasks:
- Check whether the suspend type is valid or not
- Notify user space applications that the device is going into a sleep state
- Freeze the console logs
- Freeze kernel threads, suspend buses, and disable interrupts that cannot wake the system
- Disable non-boot CPUs (CPU 1-7) and put RAM into self-refresh mode
- Keep the device in the sleep state until a wakeup signal is received
Once the device receives the wakeup interrupt or trigger, it starts resuming, performing the suspend steps in reverse order.
While the system is suspended, the current consumption of the Aikri QRB4210 system on module (SoM) comes to around ~7 mA at a 3.7-V supply voltage. Below is the waveform of the current drawn by the system on module.
Figure 3 Here is how the current consumption looks while the Aikri QRB4210 is in suspend mode. Source: eInfochips
Camera sensor power modes
Camera sensors are designed to support multiple power modes such as:
- Streaming mode
- Suspend mode
- Standby mode
Each mode has distinct power consumption and latency, which vary by power-saving level and sensor state. Based on the use case, ensure the camera uses the most efficient mode for its function, especially while the system is in a power-saving mode like deep sleep or standby. This ensures balanced performance and power efficiency while maintaining quick reactivation.
In GStreamer, the pipeline manages data flow through various processing stages. These stages align with the GStreamer state machine, marking points in the pipeline’s lifecycle. The four main states are NULL, READY, PAUSED and PLAYING, each indicating the pipeline’s status and controlling data and event flow. Here’s a breakdown of each of the stages (or states) in a GStreamer pipeline:
Figure 4 The above image outlines GStreamer’s pipeline stages. Source: eInfochips
- Null
- This is the initial state of the pipeline, and it represents an inactive or uninitialized state. The pipeline is not doing any work in this state. All elements in the pipeline are in their NULL state as well.
- In this state, the master clock (MCLK) from the processor to the camera sensor is not active; the camera sensor is in reset state and the current consumption by the camera is almost zero.
- Ready
- In this state, the pipeline is ready to be configured but has not yet started processing any media. It’s like a preparation phase before actual playback or processing starts.
- GStreamer performs a sanity check and verifies plugin compatibility for the given pipeline.
- Resources can be allocated (for example, memory buffers and device initialization).
- GStreamer entering this state does not impact MCLK’s state or reset signal. If GStreamer enters from the NULL state to the READY state, the MCLK remains inactive. On the other hand, if it enters the READY state from the PLAYING state, the MCLK remains active.
- The current consumption in the READY state depends on the previous state; this behavior can be further optimized.
- Paused
- This state indicates that the pipeline is set up and ready to process media but is not actively playing yet. It’s often used when preparing for playback or streaming while maintaining control over when processing starts.
- All elements in the pipeline are initialized and ready to start processing media.
- Like the READY state, the current consumption in the PAUSED state depends on the previous state, so some optimization in the camera stack can help reduce the power consumption during this state.
- Playing
- The PLAYING state represents the pipeline’s fully active state, where data is being processed and media is either being rendered to the screen, played back through speakers, or streamed to a remote system.
- MCLK is active and the camera sensor is out of reset. The current consumption is highest in this state as all camera sensor data is being captured and passed through the pipeline.
To achieve the lowest sleep current, the GStreamer pipeline should be put in the NULL state when the system is suspended; keeping it in the READY state reduces camera wakeup latency but draws more current. To understand the power consumed by keeping MCLK and the RESET signal asserted, below is a comparison of current consumption between the NULL state and the READY state of the GStreamer pipeline while the QRB4210 is in the suspended state.
Figure 5 Current consumption shown while GStreamer is in NULL state and QRB4210 is in suspend mode at ~7 mA. Source: eInfochips
Figure 6 Current consumption shown while GStreamer is in READY state and QRB4210 is in suspend mode at ~30 mA. Source: eInfochips
While the camera pipeline is in the NULL state, the QRB4210 system on module draws a current of ~7 mA, which is equivalent to the current drawn by the system on module in the suspended state when no camera is connected. When the camera pipeline is in the READY state, the QRB4210 system on module draws around ~30 mA. The above oscilloscope snapshots show the waveforms of the consumed current. All currents are measured at a 3.7-V supply voltage for the QRB4210 system on module.
Latency measurement results
Latency was measured between two trigger events: the first occurs when the device wakes up and receives the interrupt at the application processor, and the second occurs when the first frame becomes available in the DDR after the image signal processor (ISP) runs.
As mentioned earlier in Part 1, the scenario is simulated using a bash script that puts the device into suspend mode and wakes the QRB4210 platform using the RTC wake alarm.
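The script itself isn’t reproduced here; a minimal sketch of the idea, assuming the standard rtcwake utility from util-linux is present on the target, could look like this:

# Suspend to RAM 100 times, letting the RTC wake the board 30 s after each suspend,
# which stands in for an external trigger during latency characterization
for i in $(seq 1 100); do
    rtcwake -m mem -s 30    # set the RTC alarm and suspend in one step
    # ...on resume, restart the camera stream and log the wakeup-to-first-frame latency
done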
We collected the camera wakeup latency by changing the camera state from PLAYING to READY and from PLAYING to NULL. In each scenario, three different use cases are exercised: recording the camera stream to eMMC, recording the camera stream to an SD card, and previewing the camera stream on the display. The resulting latency is as follows:
- Camera state in READY
Table 1 Latency measurements are shown in READY state. Source: eInfochips
- Camera state in NULL
Table 2 Latency measurements are shown in NULL state. Source: eInfochips
The minimum, maximum, and average values presented in the above tables have been derived by running each scenario for 100 iterations.
Apart from measuring the latency numbers programmatically, below are the results measured using the GPIO toggle operation between two reference events while switching the camera state from READY to PLAYING.
Table 3 Latency measurements are conducted using GPIO. Source: eInfochips
Now refer to the following oscilloscope images for different scenarios used in the GPIO toggle measurement method.
Figure 7 GPIO toggle measurements are conducted while recording into eMMC at 410.641 ms. Source: eInfochips
Figure 8 GPIO toggle measurements are conducted while recording into SD card at 382.037 ms. Source: eInfochips
Figure 9 GPIO toggle measurements are conducted during preview on display at 359.153 ms. Source: eInfochips
Trade-off between current consumption and wakeup latency
Based on the simulated results, we see that current consumption and wakeup latency trade off against each other.
The consolidated readings show that a camera pipeline in the READY state consumes more current while it takes less time to wake up. On the other hand, if the camera pipeline is in the NULL state, it consumes less current but takes more time to wake up. Refer to the table below for average data readings.
Table 4 The above data shows trade-off between current consumption and wakeup latency. Source: eInfochips
All latency data is measured between the reception of the wakeup IRQ at the application processor and the availability of the frame in DDR after the wakeup. It does not include the time taken by a motion detection sensor to sense and generate an interrupt for the application processor. Generally, the time taken by a motion detection sensor is negligible compared to the numbers mentioned above.
Future scope
To further reduce the current consumption of a device in the sleep state, you can follow the steps below:
- Disable redundant peripherals and I/O ports.
- Prevent avoidable wakeups by ensuring that peripherals don’t resume from sleep unnecessarily.
- Disable or mask unwanted wakeup triggers or subsystems that can wake the device from a sleep state.
- Use camera standby (register retaining) mode so that MCLK can be stopped, or its frequency can be reduced.
- Enable LCD display only when preview use case is running.
To optimize wakeup latency, follow the guidelines below:
- Make use of the camera standby mode to further optimize latency to generate the first frame.
- Reduce camera sensor frame size to optimize frame scan time and ISP processing time.
- Disable redundant system services.
- Trigger camera captures from a lower-level interface rather than using GStreamer.
Trigger-based cameras offer an efficient solution for capturing targeted events, reducing unnecessary operation, and managing resources effectively. They are a powerful tool in applications where specific, event-driven image or video capture is needed.
By conducting experiments on the Aikri QRB4210 platform and making minimal optimizations to the Linux operating system, it’s possible to replicate or create a robust trigger-based camera system, achieving ~400-500 ms latency with minimal current consumption.
Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.
Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.
Related content
- The State of Machine Vision
- What Is Machine Vision All About?
- Processors, Sensors Drive Embedded Vision
- Shaping the Scene for Vision Standardization
- Embedded Vision: Giving Machines the Power of Sight
The post Optimize power and wakeup latency in swift response vision systems – Part 2 appeared first on EDN.
The (more) modern drone: Which one(s) do I now own?

Last September, I detailed why I’d decided to hold onto the first-gen DJI Mavic Air drone that I’d bought back in mid-2021 (and DJI had introduced in January 2018), a decision which then prompted me to both resurrect its long-drained batteries and acquire a Remote ID module to get it copacetic with current FAA usage regulations, as subsequently mentioned in October:
Within both blog posts, however, I intentionally alluded to (but didn’t delve into detail on) the newer drone that I’d also purchased to accompany it, aside from dropping hints that it offered (sneak peek: as-needed enabled) integrated Remote ID support and weighed (sneak peek: sometimes) less than 250 grams. That teasing wasn’t (just) to drive you nuts: to do the topic justice would necessitate a blog post all its own. That time is now, and that blog post is this one.
Behold DJI’s Mini 3 Pro, originally introduced in May 2022 and shown here with its baseline RC-N1 controller:
I bought mine (two of them, actually, as it turned out) roughly two years post-intro, in late June (from eBay) and early July (from Lensrentals) of last year. By that time, the Mini 4 Pro successor, unveiled in September 2023, had already been out for nearly a year. So, why did I pick its predecessor? The two drone generations look identical; they take the same batteries, propellers and other parts, and fit into the same cases. And as far as image capture goes, the sensors are identical as well: 48 Mpixel (effective) 1/1.3″ CMOS.
What’s connected to the image sensors, however, leads to one of several key differences between the two generations. The Mini 3 Pro captures video at up to 4K resolution at a 60-fps peak frame rate. The improved ISP (image signal processor) in the Mini 4 Pro, conversely, also captures video at 4K resolution, but this time up to a 100-fps frame rate. Dim-light image quality is also improved, along with the available capture-format options, now also encompassing both pre-processed HDR and post-processed D-LOG. And the camera now rotates a full 90° vertical for TikTok- and more general smartphone viewing-friendly portrait orientation video frames.
Speaking of cameras, what about the two drones’ collision avoidance systems? The DJI Mini 3 Pro has cameras both front and rear for collision avoidance purposes, along with another pointing downward to (for example) aid in landing. The Mini 4 Pro replaces them with four fisheye-lens cameras (at front, rear and both sides) for collision avoidance all around the drone as well as above it, further augmented by two downward facing cameras for stereo distance and a LiDAR sensor, the latter enhancing after-dark sensing and discerning distance-to-ground when the terrain is featureless. By the way, the rumored upcoming DJI Mini 5 Pro further bolsters the drone’s LiDAR facilities, if the leaked images are true and not just Photoshop-created fakes.
The final notable difference involves the contrasting wireless protocols used by both drones to communicate with and stream live video to the user’s controller and, if used, goggles. The Mini 3 Pro leverages DJI’s O3 transmission system, with an estimated range of 12 km while streaming live 1080p 30 fps video. With the Mini 4 Pro and its more advanced O4 system, conversely, the wirelessly connected range increases to an estimated 20 km. Two important notes here:
- The controllers for the Mini 3 Pro also support the longer-range (15 km) and higher frame rate (1080p 60 fps) O3+ protocol used by larger DJI drones such as the Mavic 3
- Unfortunately, however, the DJI Mini 4 Pro is not backwards compatible with the O3 and O3+ protocols, so although I’ll be able to reuse my batteries and the like if I do a drone-generation upgrade in the future, I’ll need to purchase new controllers and goggles for it.
That all said, why did I still go with the Mini 3 Pro? The core reason was cost. In assessing the available inventory of used drone equipment, the bulk of the options I found were at both ends of the spectrum: either in like-new condition, or egregiously damaged by past accidents. But given that the Mini 3 Pro had been in the market nearly 1.5 years longer, its available used inventory was much more sizeable. I was able to find two pristine Mini 3 Pro examples for a combined price tag less than that of a single like-new (far from brand new) Mini 4 Pro. And the money saved also afforded me the ability to purchase two used upgraded integrated-display controllers, the mainstream RC and high-end RC Pro, the latter running full-blown Android.
Although enhancements such as higher quality video, more advanced object detection and longer range are nice, they’re not essential in my currently elementary use case, particularly counterbalanced against the fiscal savings I obtained by going prior-gen. The DJI Mini 4 Pro’s expanded-scope collision avoidance might be useful when flying the drone side-to-side for panning purposes, for example, or through a grove of trees, neither of which I see myself doing much if any of, at least for a while. And considering that after 12 km the drone will probably already be out of sight, combined with the alternative ability to record even higher quality video to local drone microSD storage, O4 transmission system support also isn’t a necessity for me.
Speaking of batteries (plenty of spares which I now also own, along with associated chargers, and refresh-charge them every two months to keep them viable) and range, let’s get to the drone’s earlier-alluded Remote ID facilities. The Mini 3 Pro (therefore also Mini 4 Pro) has two battery options: a standard 2453 mAh model that, as conveniently stamped right on it to answer enforcement agency inquiries, keeps the drone just below the 250-gram threshold:
and a “Plus” 3850 mAh model that weighs ~50% more (121 grams vs 80.5 grams). The DJI Mini 3 Pro has built-in Remote ID support, negating the need for an add-on module (which, if installed, would push total weight above 249 grams, even using a standard battery). But here’s the slick bit: when the drone detects that a standard battery is in use, it disables Remote ID transmission, both because the FAA doesn’t require it and to address user privacy concerns, given that scanning facilities are available to the masses, not just to regulatory and enforcement entities.
I’ve admittedly been too busy post-purchase to use the drone gear much yet, but I’m looking forward to harassing the neighbors (kidding!) with it in the future. I’ve also acquired a Goggles Integra set and a RC Motion 2 Controller, both gently used from Lensrentals:
to test out FPV (first-person view) flying, and even a LTE cellular dongle for remote-locale Internet access to the RC Pro controller (unfortunately, such dongles reportedly can’t also be used on the drone itself, at least in the US, for alternative long-range controller connectivity):
And finally, I’ve acquired used examples of the Goggles Racing Edition Set (Adorama) and OcuSync Air System (eBay) for the Mavic Air, again for FPV testing purposes:
Stay tuned for more on all of this if (hopefully more accurately, when) I get time to actualize my drone gear testing aspirations. Until then, let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Diagnosing and resuscitating a set of DJI drone batteries
- Teardown: DJI Spark drone
- Oh little drone, how you have grown…
- Drone regulation and electronic augmentation
- Keep your drone flying high with the right circuit protection design
The post The (more) modern drone: Which one(s) do I now own? appeared first on EDN.
A design platform for swift vision response system – Part 1

Trigger-based vision systems in embedded applications are used in various domains to automate responses based on visual input, typically in real-time. These systems detect specific conditions or events—for example, motion and object recognition or pattern detection—and trigger actions accordingly.
Key applications include:
- Surveillance and security: Detecting motion or unauthorized individuals to trigger alarms or recording.
- Robotics: Identifying and manipulating objects, triggering robotic actions like picking, or sorting based on visual cues.
- Traffic monitoring: Triggering traffic light changes or fines when specific conditions like running a red light are detected.
- Forest monitoring: Trigger-based vision systems can be highly effective in forest environments for a range of applications, including wildlife monitoring, forest fire detection, illegal logging prevention, animal detection, trail camera, and more.
- Military and defense: Vision systems used in drones, surveillance systems, and military robots for threat detection and target identification.
These systems leverage camera technologies combined with environmental sensors and AI-based image processing to automate monitoring tasks, detect anomalies, and trigger timely responses. For instance, in wildlife monitoring, vision systems can identify animals in remote areas, while in forest fire detection, thermal and optical cameras can spot early signs of fire or smoke.
Low wakeup latency in trigger-based systems is crucial for ensuring fast and efficient responses to external events such as sensor activations, button presses, and equivalent events. These systems rely on triggers to initiate specific actions, and minimizing latency ensures that the system can respond instantly to these stimuli. This ability of a device to quickly wake up when triggered allows the device to remain in a low-power state for a longer time. The longer a device stays in a low-power state, the more efficiently it conserves energy.
In summary, low wakeup latency improves a system’s responsiveness, reliability, scalability and energy efficiency, making it indispensable in applications that depend on timely event handling and quick reactions to triggers.
The Aikri platform developed by eInfochips validates this concept. The platform is based on Qualcomm’s QRB4210 chipset and runs an OpenEmbedded-based Linux distribution.
To simulate the real-life trigger scenario, the Aikri platform is put into a low-power state using a shell script and is woken up by a real-time clock (RTC) alarm. The latency between the wakeup interrupt and the frame-reception interrupt at the dual data rate (DDR) memory has been measured at around ~400 ms to ~500 ms. Subsequent sections discuss the measurement setup and approach at length.
Aikri platform: Setup details
- Hardware setup
The Aikri platform is used to simulate the use case. The platform is based on Qualcomm’s QRB4210 chipset and demonstrates diverse interfaces for this chipset.
The current scope uses only a subset of interfaces available on the platform; refer to the following block diagram.
Figure 1 The block diagram shows hardware peripherals used in the module. Source: eInfochips
The QRB4210 system-on-module (SoM) contains Qualcomm’s QRB4210 application processor, which connects to DDR RAM, embedded multimedia card (eMMC) as storage, Wi-Fi, and power management integrated circuit (PMIC). The display serial interface (DSI)-based display panel is connected to the DSI connector available on the Aikri platform.
Similarly, the camera daughter board is connected to CSI0 port of the platform. The camera daughter card contains an IMX334 camera module. The camera sensor outputs 3864×2180 at 30 frames per second on four lanes of camera serial interface (CSI) port.
The DSI panel is built around the OTM1901 LCD. This LCD panel supports a 1920×1080 output resolution. Four lanes of the DSI port are used to transfer video data from the application processor to the LCD panel. The PMIC available on the QRB4210 SoM contains RTC hardware. While the application processor goes into low-power mode, the RTC hardware inside the PMIC remains active with the help of a sleep clock.
- Software setup
The QRB4210 application processor runs an OpenEmbedded-based Linux distribution using the 5.4.210 Linux kernel version. The default distribution is trimmed down to reduce wakeup latency while retaining necessary features. A bash script is used to simulate the low-power mode entry and wakeup scenario.
The Weston server generates display graphics, and GStreamer captures frames from the camera sensor. Wakeup latency is measured by taking timer readings in the Linux kernel when the relevant interrupt service routines are called.
Latency measurement: Procedure overview
To simulate the minimal-latency wakeup use case, a shell-based script is run on the Aikri platform. The script automates the simulation of a trigger-based, low-latency vision system on the Aikri QRB4210 module.
Below is the flow of the script performed on the QRB4210 platform, starting from device bootup to measuring latency.
Figure 2 Test script flow spans from device bootup to latency measurement. Source: eInfochips
The above diagram showcases the operational flow of the script, beginning with the device bootup, where the system initializes its hardware and software. After booting, the device enters the active state, signifying that it’s fully operational and ready for further tasks, such as keeping Wi-Fi configured in an inactive state and probing the camera to check its connection and readiness.
Additionally, it configures the GStreamer pipeline for a 1280×960@30 FPS stream resolution. The camera sensor registers are also configured at this stage based on the best-match resolution mode; during this exercise, 3840×2160@30 FPS is the selected resolution for the IMX334 camera sensor. Once the camera is confirmed as configured and functional, the device moves to the camera reconfigure step, where it adjusts the camera stream settings like stop/start.
The next step is to set the RTC wake alarm, followed by putting the device into suspend mode. In this state, the device waits for the RTC alarm to wake it up. Once the alarm triggers, the device transitions to the wakeup state and starts the camera stream.
The device then waits for the first frame to arrive in DDR and measures the latency between capturing the frame and device wakeup Interrupt Request (IRQ). After measuring latency, the device returns to the active state, where it remains ready for further actions.
The process then loops back to the camera reconfigure step, repeating the sequence of actions until the script stops externally. This loop allows the device to continuously monitor the camera, measure latency, and conserve power during inactive periods, ensuring efficient operation.
Latency measurement strategy
While the device is in a suspended state and the RTC alarm triggers, the time between two key events is measured: the wakeup interrupt and the reception of the first frame from the camera sensor into the DDR buffer. The latency data is measured in three different scenarios, as outlined below:
- When the camera is in the preview mode
- When recording the camera stream to eMMC
- When recording the camera stream to the SD card
Figure 3 Camera pipeline is shown in the preview mode. Source: eInfochips
Figure 4 Camera pipeline is shown in the recording mode. Source: eInfochips
As shown in the above figures, after the DDR receives the frame, it moves to the offline processing engine (OPE) before returning to the DDR. From there, the display subsystem previews the camera sensor data. In the recording use case, the data is transferred from DDR to the encoder and then written to storage. In either case, once the frame is available in DDR, it is guaranteed to be either stored or previewed on the display.
Depending on the processor CPU occupancy, it may take a few milliseconds to process the frame, based on the GStreamer pipeline and the selected use case. Therefore, while measuring latency, we consider the second polling point to be when the frame is available in the DDR, not when it’s stored or previewed.
Since capturing the trigger event is crucial, minimizing latency when capturing the first frame from the camera sensor is essential. The frame is considered available in the DDR when the thin front-end (TFE) completes processing the first frame from the camera.
Latency measurement methods
In the Linux kernel, there are several APIs available for pinpointing an event and time measurement, each offering varying levels of precision and specific use cases. These APIs enable tracking of time intervals, measuring elapsed time, and managing system events. Below is a detailed overview of the commonly used time measurement APIs in the Linux kernel:
- ktime_get_boottime: Provides the current “time since boot” in a ktime_t value, expressed in nanoseconds.
- jiffies (read directly or via get_jiffies_64()): Returns the current jiffy count, which represents the number of timer ticks since the system booted. Elapsed time must be derived from the tick rate (HZ).
Jiffies don't advance during the suspend state, while the boot-time clock behind ktime_t keeps running even in sleep mode. Additionally, ktime_t offers time measurements in nanoseconds, making it far more precise than jiffies.
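To make the timing approach concrete, here is a minimal kernel-side sketch (function and variable names are illustrative, not taken from the eInfochips code): a ktime_t timestamp is captured in the wakeup interrupt handler and again when the camera frame-done interrupt fires, and the difference is reported in microseconds.

```c
#include <linux/ktime.h>
#include <linux/printk.h>

/* Timestamps captured by the two interrupt handlers (hypothetical names) */
static ktime_t wakeup_ts;      /* set in the wakeup IRQ handler          */
static ktime_t frame_done_ts;  /* set when the TFE signals frame-in-DDR  */

/* Called from the wakeup interrupt service routine */
static void mark_wakeup_event(void)
{
	wakeup_ts = ktime_get_boottime();  /* keeps counting across suspend */
}

/* Called from the camera frame-done interrupt service routine */
static void mark_frame_done_event(void)
{
	s64 delta_us;

	frame_done_ts = ktime_get_boottime();
	delta_us = ktime_to_us(ktime_sub(frame_done_ts, wakeup_ts));
	pr_info("wakeup-to-first-frame latency: %lld us\n", delta_us);
}
```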
- Usage of GPIO toggle method for latency measurement
To get a second level of assurance, a GPIO toggle-based method is also employed in the measurement. It creates a positive or negative pulse when a GPIO is toggled between two reference events; the pulse width, measured on an oscilloscope, represents the latency between the two events.
When the device wakes up, the GPIO value is set to zero, and once the camera driver receives the frame in the DDR, the GPIO value is set to one. The GPIO signal thus creates a negative pulse, and measuring its width on an oscilloscope gives the latency between the wakeup interrupt and the frame-available interrupt.
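A sketch of that GPIO marker, again with hypothetical names and using the kernel's gpiod consumer API, might look like the following; the low period between the two writes is the pulse the oscilloscope measures.

```c
#include <linux/gpio/consumer.h>

/* Obtained once at probe time, e.g., via devm_gpiod_get() */
static struct gpio_desc *latency_gpio;

/* In the wakeup interrupt handler: drive the pin low to start the pulse.
 * Assumes a memory-mapped GPIO controller that can be driven from IRQ context. */
static void latency_pulse_start(void)
{
	gpiod_set_value(latency_gpio, 0);
}

/* When the first frame lands in DDR: drive the pin high to end the pulse */
static void latency_pulse_end(void)
{
	gpiod_set_value(latency_gpio, 1);
}
```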
- Usage of RTC alarm as wakeup source
The RTC in a system keeps ticking from a sleep clock even when the processor enters low-power mode; it continuously maintains time and triggers a wake alarm when a set time is reached. The alarm wakes the system or initiates a scheduled task, and it can be specified in seconds since the Unix epoch or relative to the current time.
On Linux, tools like rtcwake and the /sys/class/rtc/rtc0/wakealarm file are used for configuration. The system can wake from power-saving modes like suspend-to-RAM or hibernation for tasks like backups or updates. This feature is useful for automation but may require time zone adjustments as the RTC stores time in UTC.
- The RTC wake alarm is set by specifying a time in seconds in sysfs or using tools like rtcwake (see the sketch after this list).
- It works even when the system is in a low-power state like suspension or hibernation.
- To clear the alarm, write a value of zero to the wake alarm file.
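A minimal userspace sketch of that sysfs flow (assuming rtc0 is the wake-capable RTC, as on most Linux boards) clears any stale alarm and then arms one a few seconds in the future, after which the system can be suspended.

```c
#include <stdio.h>
#include <time.h>

/* Arm the RTC wake alarm 'delay_s' seconds from now via sysfs.
 * Writing 0 first clears any previously set alarm. */
static int set_rtc_wakealarm(unsigned int delay_s)
{
	FILE *f = fopen("/sys/class/rtc/rtc0/wakealarm", "w");

	if (!f)
		return -1;
	fprintf(f, "0\n");        /* clear any stale alarm */
	fflush(f);
	fprintf(f, "%ld\n", (long)time(NULL) + delay_s);  /* seconds since the Unix epoch */
	fclose(f);
	return 0;
}

int main(void)
{
	if (set_rtc_wakealarm(10) == 0)
		printf("RTC wake alarm armed for 10 s from now\n");
	/* The script would then suspend, e.g., echo mem > /sys/power/state */
	return 0;
}
```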
A typical trigger-based system receives triggers from external sources, such as an external co-processor or the external environment. When running the simulation script, the RTC wakeup alarm stands in for such an external event and acts as the trigger for the QRB4210 application processor.
Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.
Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.
Editor’s Note: The second part of this article series will further expand into wakeup latency and power consumption of this trigger-based vision system.
Related content
- The State of Machine Vision
- What Is Machine Vision All About?
- Processors, Sensors Drive Embedded Vision
- Shaping the Scene for Vision Standardization
- Embedded Vision: Giving Machines the Power of Sight
The post A design platform for swift vision response system – Part 1 appeared first on EDN.
Flip ON flop OFF without a flip/flop

There’s been a lot of interesting conversation and DI teamwork lately devising circuits for ON/OFF power control using inexpensive momentary-contact switches (See “Related Content” below).
Wow the engineering world with your unique design: Design Ideas Submission Guide
Most of these designs have incorporated edge triggered flip/flops (e.g. the CD4013) but of course other possibilities exist. Figure 1 shows one of them.
Figure 1 Flip/flop-free debounced push ON push OFF toggling with power-on reset and low parts count.
Okay, I can (almost) hear your objection. It isn’t (technically) accurate to describe Figure 1 as flip/flop free because the two inverters, U1a and U1b, are connected as a bistable latch. That is to say, a flip/flop. It’s really how its state gets toggled by S1 that’s different. Here’s how that works.
While sitting in either ON or OFF with S1 un-pushed, U1a, being an inverter, charges C2 to the opposite state through R1. So, when S1 gets mashed, C2 yanks U1a’s input, thereby toggling the latch. The R1C2 time-constant of 100 ms is long enough to guarantee that if S1 bounces on make, as it most assuredly will, C2’s complementary charge will ride out the turbulence.
Then, because R2 < R1, the positive feedback through R2 will overpower R1 and keep the same polarity charge on C2 for as long as S1 is held closed. This ensures that later, when S1 is released, if it bounces on break (as some switches are rumored to be evil enough to do), the new latch state won’t be lost. PFET Q1 now transfers power to the load (or doesn’t). Thus, can we confidently expect reliable flipping and flopping and ONing and OFFing.
So, what’s the purpose of C1? Figure 2 explains.
Figure 2 Power-up turn-OFF: the rising edge of V+ at PFET Q1's source, with its gate held low by the RC networks, turns it on.
If V+ has been at zero for awhile (because the battery was taken out or the wall wart unplugged), C1 and C2 will have discharged likewise to zero (or thereabouts). So, when V+ is restored, they will hold the inverter’s FET gates at ground. This will make the PFET’s gate negative relative to its (rising) source, turning it on, pulling its output high, and resetting the latch to OFF.
So why R3?
When the latch sits for a while with S1 unpushed, whether ON or OFF, C1 will charge to V+. Then, when S1 is depressed (note this doesn’t necessarily mean it’s unhappy), C1 will be “quickly” discharged. Without R3, “quickly” might be too much of a good thing and involve a high enough instantaneous current through S1, and hence enough energy deposited on its contacts, to shorten its service life.
Thus, making us both unhappy!
Here’s a final thought about parts count. The 4069 is a hex inverter, which makes Figure 1’s use of only two of its six elements look wasteful. We can hope the designer can find a place for the unused elements elsewhere in their application, but what if she can’t?
Then it might turn out that Figure 3 will work.
Figure 3 Do something useful with the other 2/3rds of U1, eliminate Q1 for loads of less than 10 mA, and gain short-circuit protection for free.
Ron for the 4069 is V+ dependent but can range as low as 200 Ω (typical) at V+ > 10 V. Therefore, if we connect all five of the available inverters in parallel as shown in Figure 3, we’d get a net Ron of 200/5 = 40 Ω from V+ to Vout. This might be adequate for a low power application, making Q1 redundant. As an added benefit, an accidental short to ground will promptly and automatically turn the latch and the shorted load OFF. U1 will therefore be that much less likely to catch fire, and us to be unhappy! Note it also works if the latch is OFF and the output gets shorted to V+.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- To press on or hold off? This does both.
- Flip ON flop OFF
- Latching D-type CMOS power switch: A “Flip ON Flop OFF” alternative
- To press ON or hold OFF? This does both for AC voltages
The post Flip ON flop OFF without a flip/flop appeared first on EDN.
Fractal Unveils Acoustic Tech to Disable Drones

Fractal Antenna Systems has introduced Acoustic Resonance Mitigation (ARM), a technology that disables drones using directed acoustic energy. ARM emits sonic, ultrasonic, and subsonic waves to induce vibrations or Prandtl boundary layer instability, leading to flight failure. Propeller blades are especially vulnerable, as turbulence or vibrations can disrupt a drone’s inertial measurement unit (IMU).
Portions of an ARM button array (non-parametric) for a DRONE BLASTR airborne drone.
ARM technology, co-invented by Fractal CEO Nathan Cohen, is backed by U.S. patents and licensed to Fractal. The technology has been demonstrated by foreign groups, though U.S. patents predate these efforts, according to Cohen. Cost-effective and portable, ARM is specifically designed to disable drones, from microdrones to pizza box-sized devices.
In military applications, ARM can be deployed on attack drones to disable adversarial swarms. Known as the DRONE BLASTR, this patent-pending in situ device offers a new method for countering drone swarms. Beyond the battlefield, ARM offers a countermeasure against drones used in illegal surveillance and smuggling.
A timeline for commercialization was not available at the time of this announcement. Government, public safety agencies, and related enterprises can contact Fractal for more information.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Fractal Unveils Acoustic Tech to Disable Drones appeared first on EDN.
Partners demo 200G multimode VCSEL

Keysight and Coherent showcased 200G-per-lane VCSEL technology at OFC 2025, demonstrating characterization, tuning, and validation. The multimode VCSEL enables higher data transfer rates to meet growing data center bandwidth demands.
Keysight’s M8199B 256-Gsample/s AWG
The 200G multimode VCSEL enhances data transfer and network efficiency by doubling data throughput to 200 Gbps per lane, surpassing current multimode interconnects. It offers significant power savings per bit compared to single-mode alternatives, and its lower manufacturing costs make it a more economical choice for short-reach data links. Well-suited for AI pods and clusters, this VCSEL supports the high-speed, short-reach interconnects essential for GPU-driven data sharing.
The setup used at OFC featured Keysight’s DCA-M wideband multimode sampling oscilloscope, M8199B 256-Gsample/s arbitrary waveform generator (AWG), and Coherent’s 200G VCSEL. The AWG drives a 106.25-GBaud PAM4 signal into the VCSEL, with the optical output measured on the oscilloscope to display the eye diagram. This demonstrates the VCSEL’s feasibility and Keysight’s characterization and validation capabilities.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Partners demo 200G multimode VCSEL appeared first on EDN.
IR color sensor enhances automotive displays

Vishay’s VEML6046X00 RGB IR sensor is AEC-Q100 qualified for use in vehicle displays and interior lighting. This compact device integrates a photodiode, low-noise amplifier, and 16-bit ADC in an opaque surface-mount package that is just 2.67×2.45×0.6 mm.
With three color channels and one infrared channel, the VEML6046X00 calculates color temperature to enable white point balancing for displays. Its green channel’s spectral sensitivity aligns with the human eye for accurate measurements, while the IR channel stabilizes output across various light sources.
The sensor performs consistently in daylight with an ambient light range of 0 to 176 klx, preventing saturation. A digital resolution of 0.0053 lx/count allows the VEML6046X00 to operate behind dark cover glass. It supports a supply range of 2.5 V to 3.6 V, an I2C bus voltage range of 1.7 V to 3.6 V, and an ambient temperature range of -40°C to +110°C. Typical shutdown current consumption is 0.5 µA.
The sensor is well-suited for automotive display backlight control, infotainment systems, rear-view mirror dimming, and heads-up displays.
Samples and production quantities of the VEML6046X00 RGB IR sensor are available now, with a lead time of 16 weeks.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post IR color sensor enhances automotive displays appeared first on EDN.
Intelligent high-side switches manage diverse loads

ST’s four-channel high-side switches, the IPS4140HQ and IPS4140HQ-1, drive resistive, capacitive, and inductive loads with one side connected to ground. The IPS4140HQ handles up to 0.6 A per channel, while the IPS4140HQ-1 supports up to 1.0 A. Both have a maximum RDS(on) of 80 mΩ per channel and include extensive diagnostic and protection features.
Housed in compact 8×6-mm QFN48 packages, the devices operate from a 10.5-V to 36-V supply and tolerate up to 41 V for enhanced system safety and reliability. Applications include programmable logic controllers, industrial PC I/O ports, and numerical control machines.
These intelligent power switches provide per-channel short-circuit protection, temperature monitoring, and independent restart to boost fault tolerance and simplify automated recovery. Additional safeguards include case-overtemperature shutdown with sequential restart, output current limiting, undervoltage lockout, and input overvoltage protection. With 5-V/3.3-V logic compatibility, EAS ratings up to 2.5 J per channel, and compliance with IEC 61000-4 and IEC 61131-2 standards, they ensure robust performance in industrial applications.
Prices for the IPS4140HQ and IPS4140HQ-1 high-side switches start at $2.59 in lots of 1000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Intelligent high-side switches manage diverse loads appeared first on EDN.
Infineon prepares DTO247-packaged IGBTs

Infineon is developing TRENCHSTOP H7 IGBTs in a DTO247 package, which is twice the size of a standard TO247. A single high-current IGBT in a DTO247 can replace multiple lower-current TO247 IGBTs connected in parallel. Engineering samples of the 200-A and 350-A H7 IGBT variants are available now.
The DTO247 enables high power density and bridges the gap between TO247-based designs and module architectures. Additionally, its compatibility with both DTO247- and TO247-based architectures within the same system provides greater flexibility and customization. H7 IGBTs can be used in solar inverters, uninterruptible power supplies, and energy storage systems.
The DTO247-packaged portfolio will include 1200-V and 750-V H7 IGBTs with current ratings of 200 A, 250 A, 300 A, and 350 A. They feature 2-mm-wide leads for optimal conduction, a 7-mm pin-to-pin clearance, and a 10-mm creepage distance for enhanced safety and reliability. An integrated Kelvin emitter pin enables faster, more efficient switching.
Volume production of the TRENCHSTOP H7 IGBTs in DTO247 packages is scheduled for mid-2026. Datasheets were not available at the time of this announcement.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Infineon prepares DTO247-packaged IGBTs appeared first on EDN.
PUT a reset in its place

One of my jobs as an engineer was working in the engineering department of an electronic contract manufacturer. Our department designed test equipment for the manufacturing lines, but we also assisted customers with their products issues.
Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale
The product being built on the line was a PCB assembly for a coffee maker. One day, the boss came to me and said the customer is getting some complaints about the coffee maker display and controls locking up. He assured the customer we could fix their problem (even though no one knew what the issue was).
The first task was to reproduce the problem. With no clue as to how this problem happened, I started by just letting the coffee makers run. After a few days, nothing—so I tried other things like banging it and shaking it…nothing. I then pushed the buttons in every combination and cadence I could come up with—still no luck. Next, I tried varying the line voltage slowly from the specified minimum to the specified maximum. The coffee maker still worked fine. I was running out of ideas. Finally, I tried one last thing: I plugged it into a controller that turns the line voltage on and off at varying rates. After a while it locked up.
This behavior hinted at the micro’s reset circuit, so I dug into that. It was the typical design used in those days: a simple RC circuit, with the resistor tied to Vcc, the capacitor tied to digital ground, and their other ends tied together and connected to the micro’s reset pin.
After a little more testing I concluded that it was due to a fast interruption in the line voltage or a brownout. Either would cause the reset capacitor to discharge partway and then to reset back to Vcc. The micro wasn’t happy with this as it lost power but didn’t get a valid reset.
The boss was happy I found the root cause but now said “fix it” and the fix had to be something easy to tack on to the existing PCB. I spent a few days looking at things like changing the resistor or capacitor value or adding a 555 timer, comparator, or op-amp, but these either didn’t work or were too difficult or expensive to add to the PCB.
That’s when I remembered an obscure device I had read about, a programmable unijunction transistor, or PUT. This has some properties like an SCR. The PUT has 3 pins: anode, cathode, and gate. So, the circuit I came up with was this:
The schematic of the coffee maker fix, introducing the PUT to successfully manage failures due to interruptions in the line voltage or brownouts.
When Vcc is good, the anode-to-cathode path is not conducting. When Vcc drops, the capacitor stays charged for a while, but the PUT triggers if the gate falls below the anode by 0.7 V or more. This trigger turns on the anode-to-cathode path, and the capacitor is quickly discharged. This fixed the problem, and the PUT and resistor were easily tacked onto the existing PCB.
Epilogue: After the fix, I went to the boss and asked if we could apply for a patent for this reset circuit, but he turned me down. So, as consolation to myself, I submitted it to EDN’s Design Ideas column, which printed it. Later in my career, I happened to notice that this Design Idea was referenced as prior art in patents from Texas Instruments, Dallas Semiconductor, Schlumberger Technology, and Ericsson Inc. Looks like a good patent opportunity we missed. But then again, what’s better: a patent or an EDN Design Idea being published?
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- Audible ohmmeter cuts down on false readings
- Analog of a thyristor with a controlled switching threshold (comments)
- Exploring software-defined radio (without the annoying RF) – Part 1
- A beginner’s guide to power of IQ data and beauty of negative frequencies – Part 1
The post PUT a reset in its place appeared first on EDN.
Building a low-cost, precision digital oscilloscope

Editor’s note:
In this DI, high school student Tommy Liu modifies a popular low-cost DIY oscilloscope to improve its input noise rejection and reduce its ADC noise using anti-aliasing filtering and IIR filtering.
Part 1 introduces the oscilloscope design and simulation.
Part 2 will show the experimental results of this oscilloscope.
Introduction
A digital oscilloscope is one of the most essential pieces of equipment for high school electrical and electronic labs. As useful and popular as it is for high schoolers, the cost of an oscilloscope can often be prohibitive. Professional digital oscilloscopes are generally expensive, with the entry cost of a basic model ranging from several hundred dollars to over a thousand dollars. One can argue that the advanced specifications and functionalities of these oscilloscopes often exceed most high school needs.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Low-cost DIY digital oscilloscopes provide another option for high schools—these oscilloscopes are inexpensive and typically cost less than a hundred dollars. The problem with DIY oscilloscopes is their performance—they lack measurement precision and noise immunity. Most DIY oscilloscopes only reach an effective resolution of 6 to 8 bits—even those with a 12-bit ADC—due to poor noise isolation and rejection. These drawbacks make DIY digital oscilloscopes unsuitable for precision measurement and other more demanding applications in high school labs and clubs.
The first part of this design idea (DI) describes a practical, low-cost, and high-performance digital oscilloscope solution suitable for professional high school use, including precision signal measurement and analysis. The second part of this DI describes the experimental results obtained after building it.
The oscilloscope is based on a popular low-cost DIY platform. Analog and digital signal processing techniques, namely anti-aliasing filtering, and infinite impulse response (IIR) digital filtering, respectively, are implemented on the platform, significantly improving the noise rejection and measurement precision of the oscilloscope, with only a minor increase in cost.
Specifications
ENOB
In many high school applications of oscilloscopes, an effective resolution of 6 to 8 bits is usually sufficient. However, for the most demanding high school STEM projects, a measurement precision within a few mV is sometimes required. As the full-scale range of these signals is typically within 3.3 V or 5 V, this requires a measurement precision of about one part in a thousand (1/1000), or an effective number of bits (ENOB) of around 10 bits. Since the ENOB of an ADC is lower than its nominal resolution, achieving 10 bits of effective resolution usually requires a 12-bit or higher ADC.
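As a quick sanity check of that arithmetic (using the 3.3-V full-scale case mentioned above), the smallest resolvable step at a given effective resolution is roughly:

```latex
\Delta V_{\min} \approx \frac{V_{\text{full-scale}}}{2^{\text{ENOB}}}
               = \frac{3.3\ \text{V}}{2^{10}} \approx 3.2\ \text{mV}
```

which matches the "few mV" precision target for a 10-bit effective resolution.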
Signal bandwidth
Most high-school electronic projects deal with signals from DC to audio frequency (20 Hz to 20 kHz). An analog bandwidth (-3 dB) of 100 kHz is chosen, and the oscilloscope needs to maintain an effective resolution of 10 bits with an input frequency up to 20 kHz.
Table 1 summarizes the major specifications of the oscilloscope, including the precision requirement, input bandwidth, and necessary functions for various high-school electrical and electronic projects. As a low-cost solution for high schools, we determined the bill of materials (BOM) cost should be less than fifty dollars.
Analog bandwidth (-3 dB) | 100 kHz
Resolution of ADC | 12-bit
Maximum real-time sampling rate | 1 MSPS
Effective resolution (ENOB) | 10 bits (input from DC to 20 kHz)
Maximum input voltage | 50 V (peak-peak)
Voltage division range | 10 mV/div – 5 V/div
Time division range | 5 s/div – 10 µs/div
Trigger sources | Internal/External
BOM cost | $50 max
Table 1 The major specifications of the oscilloscope, including the precision requirement, input bandwidth, and necessary functions for various high-school electrical and electronic projects.
Pros and cons of a common DIY scope
The DSO138-mini, a popular DIY oscilloscope on the market, was chosen as the base platform for our oscilloscope. The DSO138-mini uses the STM32F103C8 MCU as its main processing unit, which offers a built-in 12-bit, 1-MSPS ADC [1]. It also has all the essential functions, such as input range and DC/AC selection, voltage division and time division control, along with trigger source control. Besides an LCD display, the DSO138-mini also supports a UART/USB link so that captured waveforms can be sent to a PC for higher-resolution display, data measurement, and data storage. Priced at under $40 and including a standard oscilloscope probe, it offers good value among DIY oscilloscopes for its functionality and features.
The major issues with the DSO138-mini, as with many other DIY oscilloscopes, are inadequate measurement precision and noise rejection. As will be discussed in the next few sections, the DSO138-mini lacks adequate anti-aliasing filtering, making it susceptible to high-frequency noise at its input. It also exhibits high ADC noise, possibly coupled from the noisy power rails of the digital circuitry inside the microcontroller, which keeps the effective resolution under 9 bits even in its own self-test mode with no external input signal. These two problems—inadequate anti-aliasing filtering and high ADC noise—make the DSO138-mini unsuitable as a precision measurement device in high school labs.
The new oscilloscope
To fix these issues, a new anti-aliasing filter and a digital filter (first-order IIR) are implemented on the DSO138-mini platform. The experimental results (Part 2 of this DI) show that the new oscilloscope significantly improves on the original DSO138-mini in input noise rejection and ADC noise reduction, and is capable of precision signal measurement up to 10 bits (or 1/1000).
The block diagram of the oscilloscope is illustrated in Figure 1. The analog input is first processed by the signal conditioning circuit for input range setup and voltage division selection. The ADC in the MCU converts the analog input signal into digital code. The scope control program in the MCU processes and formats the digital data and sends it to the LCD display and/or to a PC via the UART/USB link. Note that the blocks in blue—the anti-aliasing filter and the digital post-processing—are the new functions added to the DSO138-mini to bring its measurement precision up to 10 bits.
Figure 1 Block diagram of the modified DSO138-mini DIY oscilloscope platform, where the blue blocks are the new functions added.
Digital oscilloscopes rely on ADCs to convert the analog input into digital code for further signal processing and storage. One important phenomenon that can degrade conversion precision is aliasing. The Shannon sampling theorem states that if the highest input frequency exceeds one-half of the ADC sampling frequency (the Nyquist frequency), aliasing will occur, meaning that the high-frequency components fold back into the signal bandwidth and contaminate the input signal (Figure 2).
Figure 2 When the highest input frequency exceeds one-half the ADC sampling frequency (fs/2), aliasing occurs, and the sampled data no longer represents the original input signal.
In theory, the ADC sampling frequency should be set at least twice the input signal bandwidth to avoid aliasing. In practice, this is usually not sufficient, since analog input signals often contain high-frequency noise coupled from noisy parts of the system (e.g., the power supply) and high-frequency harmonic tones generated by the signal sources.
In high-precision applications, low-pass anti-aliasing filters are used to remove these high-frequency components. Ideally, a high-order low-pass filter (LPF) with a sharp roll-off is preferred, and the cut-off frequency of the filter should be placed near the Nyquist frequency, or one-half of the sampling frequency. Due to the slow roll-off rate (-20 dB/dec) of low-cost first-order LPFs, their -3 dB cut-off frequency often needs to be set significantly lower than the ADC sampling frequency to be effective.
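To see why a single pole must sit well below the sample rate, the short sketch below (my own illustration, not part of the original design files) evaluates the first-order roll-off |H(f)| = 1/√(1 + (f/fc)²) at a few frequencies of interest for a 100-kHz corner.

```c
#include <math.h>
#include <stdio.h>

/* Attenuation of a first-order low-pass filter at frequency f_hz,
 * for a -3 dB corner at fc_hz; result in dB (positive = attenuation). */
static double lpf1_atten_db(double f_hz, double fc_hz)
{
	double ratio = f_hz / fc_hz;

	return 10.0 * log10(1.0 + ratio * ratio);
}

int main(void)
{
	double fc = 100e3;  /* -3 dB corner chosen later in this article */

	printf("at 20 kHz : %.1f dB\n", lpf1_atten_db(20e3, fc));   /* ~0.2 dB in-band loss          */
	printf("at 250 kHz: %.1f dB\n", lpf1_atten_db(250e3, fc));  /* Nyquist at 500 KSPS (~8.6 dB) */
	printf("at 500 kHz: %.1f dB\n", lpf1_atten_db(500e3, fc));  /* Nyquist at 1 MSPS (~14 dB)    */
	return 0;
}
```

The modest stop-band numbers of a single pole are why the corner is placed roughly a factor of 5 to 10 below the sample rate rather than at the Nyquist frequency itself.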
Anti-aliasing filter design
While many DIY oscilloscopes costing less than fifty dollars do not have any anti-aliasing filters at all, the DSO138-mini does provide limited LPF functions in its input signal conditioning circuits. Figure 3 illustrates the conceptual schematic of the analog front-end signal path of the DSO138-mini.
Figure 3 Conceptual schematic of the analog front-end signal path of DSO138-mini.
The first amplifier stage consists of input voltage division selection, LPF/frequency compensation, and a unity-gain amplifier. The second stage is a non-inverting amplifier serving as a gain stage and a buffer to drive the ADC, with some attenuation adjustment capability at its input. The overall cut-off frequencies of this signal path are too high to effectively remove high-frequency noise from the input signal and prevent aliasing.
Table 2 summarizes the SPICE simulation results of the -3-dB cut-off frequencies at the oscilloscope’s different voltage division and attenuation configurations.
Voltage Division | Attenuation | Cut-off Frequency (-3 dB)
10 mV | x1 | 599 kHz
10 mV | x2 | 598 kHz
10 mV | x5 | 593 kHz
0.1 V | x1 | 488 kHz
0.1 V | x2 | 487 kHz
0.1 V | x5 | 483 kHz
1 V | x1 | 813 kHz
1 V | x2 | 805 kHz
1 V | x5 | 798 kHz
Table 2 Cut-off frequencies (-3 dB) at different voltage division / attenuation configurations.
The -3 dB cut-off frequencies range from about 500 kHz to 800 kHz, depending on the input range and attenuation settings. The built-in ADC of the DSO138-mini's MCU has a maximum sampling rate of 1 MSPS, and 500 KSPS or below is frequently used as the highest sampling frequency in many applications.
Clearly, these cut-off frequencies are too high for 500 KSPS or even 1 MSPS sampling—they are all close to or higher than the Nyquist frequency at 1 MSPS. Severe aliasing, and the resulting degradation in measurement precision, would occur if the analog input contained high-frequency noise. To resolve this problem, we need to introduce an LPF with a lower cut-off frequency.
The right value of the cut-off frequency depends on the sampling rate or time division setup and the analog input bandwidth of the oscilloscope. Ideally, a customized anti-aliasing filter would be implemented for each sampling rate/time division configuration, but customized filters add hardware complexity and cost. In most high-school projects, we are mainly interested in frequencies from DC to audio frequency (20 kHz), with the highest sampling frequency of 500 KSPS to 1 MSPS. A cut-off (-3 dB) frequency of around 100 kHz is chosen for these applications.
Although the new anti-aliasing filter could be implemented at various locations in the input signal conditioning circuits, the best place is at the second amplifier stage, i.e., the ADC driver stage, so that the cut-off frequency is not affected by the input range and voltage division selections.
Figure 4 illustrates the conceptual schematic of the new anti-aliasing filter [2]. The capacitor, C_Filter, is added to the original second amplifier stage and put in parallel with the resistor, R6, forming a first-order LPF in an inverting amplifier configuration.
Figure 4 Conceptual schematic of the new first-order anti-aliasing LPF in an inverting amplifier configuration.
The -3 dB cut-off frequency is determined by the values of C_Filter and the feedback resistor R6 and is given by Equation 1, the standard single-pole relation: f(-3 dB) = 1 / (2π × R6 × C_Filter).
Figure 5 shows the SPICE simulation results of the frequency response of the input conditioning circuits, including the newly added anti-aliasing filter, at a voltage division of 0.1 V, a voltage attenuation of 0 dB, and a C_Filter value of 1 nF (R6 is 1.1 kΩ). The -3 dB cut-off is at 100 kHz. The filter cut-off frequency was found to be centered well around 100 kHz across all other voltage division and attenuation setups.
Figure 5 SPICE simulation results with frequency response of the input conditioning circuits, including the new anti-aliasing filter.
There is one additional benefit of C_Filter: it also lowers the output impedance of the amplifier that interfaces with and drives the ADC. A lower output impedance can reduce the kick-back noise coming from the switched-capacitor operation of the ADC [3].
Finally, when choosing the filter capacitor value in this type of topology, we also need to make sure that it does not cause op-amp output slew-rate or stability issues.
Digital signal post-processing
Digital filter introduction
There are other noise sources in oscilloscopes that can degrade measurement precision. Among them, noise on the ADC inside the MCU is of particular concern, because ADCs are sensitive to noise on their power supply rails and references. MCUs are known for their large digital switching noise, and as a result the signal-to-noise ratio (SNR) of their embedded ADCs is limited by this noise. The situation is worse in DIY oscilloscopes, as few resources are available to be spent on reducing these digital noises.
The DSO138-mini, for example, shows high-frequency noise and ripple on its captured data even when the analog input signal is clean (with a well-designed anti-aliasing filter). This ripple makes precision measurement difficult.
Digital post-processing can be used to reduce these power-supply- and reference-induced noises. The ADC output code, or raw data, goes through a digital LPF, which removes some of its high-frequency components (often noise), before being presented to the display or another data output. The digital filter algorithm can be implemented either in MCU firmware or in a PC program when a PC is used for final display and data measurement.
Digital filter design
The DSO138-mini has two “terminals” for displaying waveforms. One is an LCD for real-time waveform display; because of its low resolution (320 x 240), the LCD is mainly used for bench waveform observation and monitoring. The oscilloscope also supports a UART/USB interface to transmit captured waveform data to a PC, where most precision measurements and signal analysis are performed. We therefore implement the digital post-processing program on the PC.
A first-order IIR filter is adopted for the digital signal post-processing [4]. The output and input of a first-order IIR filter are related as follows: y[n] = α·x[n] + (1 − α)·y[n−1], where x[n] is the raw input sample, y[n] is the filtered output, and α is the filter coefficient.
The flow chart of the first-order IIR filter is shown in Figure 6.
Figure 6 Flow chart of the first-order IIR filter. IIR filters are widely used in various applications thanks to their simplicity and effectiveness.
The frequency response of the first-order IIR filter is shown in Figure 7. The pass-band width is decided by the coefficient, α. The smaller the α, the more attenuation to high frequency noises, with the cost of a smaller passband. Three different α values (0.5, 0.25, and 0.125) were plotted to compare their performances.
Figure 7 The frequency response of the IIR filter with three different α values: 0.5, 0.25, and 0.125.
The trade-off is between noise attenuation and useful signal bandwidth. Smaller α values can reject a wider band of noises but result in a smaller analog bandwidth.
For most high school projects, the input signal ranges from DC up to audio frequency (20 kHz). Therefore, we choose α = 0.25 as the default value for these purposes, which gives a -3 dB bandwidth of 23 kHz when the ADC samples at 500 KSPS. The value of α is made programmable so that users can easily tune it for different applications.
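The article does not list the PC-side filter code, but a minimal sketch of the first-order recurrence above (names are illustrative) looks like this:

```c
#include <stddef.h>

/* First-order IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
 * alpha = 0.25 is the default used here; smaller alpha narrows the passband. */
static void iir_lpf_first_order(const float *raw, float *filtered,
                                size_t n, float alpha)
{
	float y = raw[0];  /* seed the filter state with the first sample */

	for (size_t i = 0; i < n; i++) {
		y = alpha * raw[i] + (1.0f - alpha) * y;
		filtered[i] = y;
	}
}
```

Making alpha a runtime parameter, as described above, lets the user trade noise rejection against analog bandwidth per measurement.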
Digital signal post-processing, if used properly, can significantly reduce the noise and ripple on oscilloscopes and improve measurement accuracy. We will demonstrate the effect of digital post-processing in Part 2.
Tommy Liu is currently a junior at Monta Vista High School (MVHS) with a passion for electronics. A dedicated hobbyist since middle school, Tommy has designed and built various projects ranging from FM radios to simple oscilloscopes and signal generators for school use. He aims to pursue Electrical Engineering in college and aspires to become a professional engineer, continuing his exploration in the field of electronics.
Related Content
- Designing antialias filters for ADCs
- Delta-sigma antialiasing filter with a mode-rejection circuit
- Three alternatives to your aliasing problems
- FIR and IIR digital filter design guide
- Fixed-point-IIR-filter challenges
- How to create fixed- and floating-point IIR filters for FPGAs
References
- ST Microelectronics. (n.d.). Datasheet of STM32F103x8, Medium-density performance line Arm®-based 32-bit MCU with 64 or 128 KB Flash, USB, CAN, 7 timers, 2 ADCs, 9 com. interfaces. https://www.st.com/resource/en/datasheet/stm32f103c8.pdf
- Franco, S. (1998). Design with operational amplifiers and Analog Integrated Circuits. McGraw Hill.
- Reeder, R. (2011, June 20). Kicking back at high-speed, unbuffered ADCs. Electronic Design. https://www.electronicdesign.com/technologies/analog/article/21798279/kicking-back-at-high-speed-unbuffered-adcs
- Department of EECS, University of Michigan, Ann Arbor. (2002, August 2). IIR Filters IV: Case Study of IIR Filters. https://www.eecs.umich.edu/courses/eecs206/archive/spring02/notes.dir/iir4.pdf
The post Building a low-cost, precision digital oscilloscope appeared first on EDN.
AI-empowered optoelectronics reinvigorates biomedical sensing

Researchers are exploring a combination of optoelectronic components, artificial intelligence (AI), and analog drivers and amplifiers to open new frontiers in biomedical sensing. Bill Schweber explores the design example of a non-invasive blood pressure monitor developed using LEDs, photosensors, and AI algorithms.
Read the full blog at EDN’s sister publication, Planet Analog.
Related content
- How many ways can you ramp an LED
- What’s Driving the ‘Reset’ LED That Won’t?
- Will photosensitive LEDs redefine touchscreens or go dark
- Remote control design: What a difference a single LED can make
The post AI-empowered optoelectronics reinvigorates biomedical sensing appeared first on EDN.
Taking apart a wall wart

Although in general I strive to cover a diversity of topics here in the blog, regular readers may have noticed that some amount of chronological theme-grouping still goes on. A few years back, for example, I wrote a fair bit about building PCs, both conceptually and in un-teardown (i.e., hands-on assembly) fashion. After that, there was a cluster of posts having to do with various still and video photography topics. And last year (extending into early this year) I talked a lot about lithium-based batteries, both in an absolute sense and relative to sealed lead-acid forebears, as well as the equipment containing them (and recharging them, i.e., solar cells).
Well, fair warning: this post is the kickoff of another common-topic cluster, having to do with audio. This isn’t a subject I’ve ignored to this point, mind you; consider just in recent times, for example, my posts on ambient noise suppression, interconnect schemes, lossy compression algorithms and listening gear (portable, too), microphones (plus on-PCB ones, tearing them down, and boosting their outputs) and exotic headphones, among others. But even more recently, I’ve obtained some “Chi-Fi” (i.e., built and often also directly sold by China-based suppliers) audio equipment—class D amplifiers and the like—along with audio gear from a US-based company that also does Stateside assembly, yet still effectively competes in the market.
What is a wall wart?
More on all of that in posts to come through the remainder of this year, likely also extending into the next. For now: what does all of this have to do with a wall wart? And what is a wall wart, for those of you not already familiar with the term? Here’s Wikipedia’s take on the topic:
An AC adapter or AC/DC adapter (also called a wall charger, power adapter, power brick, or wall wart) is a type of external power supply, often enclosed in a case similar to an AC plug. AC adapters deliver electric power to devices that lack internal components to draw voltage and power from mains power themselves. The internal circuitry of an external power supply is often very similar to the design that would be used for a built-in or internal supply.
Today’s victim arrived via a Micca PB42X powered speaker set, purchased from an eBay seller:
They’d previously belonged to her son, who according to her never used them (more on that later), so she was offloading them to make some money. Problem was, although she’d sent me photos beforehand of the right speaker (fed by an RCA input connector set and containing the class D amplifier circuitry for both speakers; a conventional strand of speaker wire connects its output to its left-speaker sibling’s input) powered up, complete with a glowing red back panel LED, no AC adapter was accompanying it when it arrived at my front door.
After I messaged her, she sent me the “wall wart” you’ll see today, which not only was best-case underpowered compared to what it should have been—12V@500mA versus 18V@2A—but didn’t even work, outputting less than 200mV, sometimes measuring positive and other times negative voltage (in retrospect, I wish I would have also checked for any AC output voltage evidence before dissecting it):
She eventually agreed to provide a partial refund to cover my replacement-PSU cost, leaving me with a “dead” wall wart suitable only for the landfill. Although…I realized right before tossing it that I’d never actually taken one apart before. And this’d also give me a chance to test out the hypothesis of a hilariously narrated (watch it and listen for yourself) video I’d previously come across, proposing a method for getting inside equipment with an ultrasonic-welded enclosure:
Best video ever, right? The topic was of great interest, as I often came across such-sealed gear and my historical techniques for getting inside (a hacksaw, for example) also threatened to inadvertently mangle whatever was inside.
I didn’t have the suggested wallpaper knife in my possession; instead, I got a paint scraper with a sharp edge and hammer-compatible other end:
And in the following overview shots, with the wall wart as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, you’ll notice (among other things) the ultrasonic welded joint around the circumference, to which I applied my pounding attention:
Complete with a closeup of the (in)famous Prop. 65 sticker…
How’d it work out? Well…I got inside, as you’ll see, but the break along the joint wasn’t exactly clean. I won’t be putting this wall wart back together again, not that I’d want to try in this case:
Maybe next time I’ll use a lighter hammer, and/or wield it with a lighter touch.
Anyhoo, with the damage done, the front portion of the enclosure lifts off straightaway:
Two things baffle me about the interior of the front case piece:
- What’s the point of the two glue dabs, which aren’t location-relevant to anything inside?
- And what if any functional use does that extra diagonal plastic piece serve?
That all said, this is what we’re most interested in, right?
The insides similarly lifted right out of the remaining piece(s) of the enclosure:
If you hadn’t already noticed, the heftier front of the case had survived its encounter with the paint scraper and sledge intact. The smaller back portion…not so much:
Here’s an overview of the now-exposed back of the wall wart’s guts. The transformer, which I’m sure you already noticed before, dominates the landscape:
Now continuing (and finishing) the rotation in 90° increments:
Let’s take a closer look at that PCB hanging off the bottom:
I am, as reader feedback regularly reminds me, not an analog or power electronics expert by any means, but what I believe we’re looking at here is visual evidence of a very rudimentary form of AC-to-DC conversion, the four-diode bridge rectifier:
A diode bridge is a bridge rectifier circuit of four diodes that is used in the process of converting alternating current (AC) from the input terminals to direct current (DC, i.e. fixed polarity) on the output terminals. Its function is to convert the negative voltage portions of the AC waveform to positive voltage, after which a low-pass filter can be used to smooth the result into DC.
When used in its most common application, for conversion of an alternating-current (AC) input into a direct-current (DC) output, it is known as a bridge rectifier. A bridge rectifier provides full-wave rectification from a two-wire AC input, resulting in lower cost and weight as compared to a rectifier with a three-wire input from a transformer with a center-tapped secondary winding.
The low-pass filter mentioned in the definition is, of course, the capacitor on the PCB. And re the diodes, the manufacturer (presumably in aspiring to squeeze as much profit as possible out of the design) didn’t even bother going the (presumably more costly) integration route:
Prior to the availability of integrated circuits, a bridge rectifier was constructed from separate diodes. Since about 1950, a single four-terminal component containing the four diodes connected in a bridge configuration has been available and is now available with various voltage and current ratings.
Ironically, in looking back at Wikipedia’s “wall wart” page post-teardown, shortly before I began writing, I happened to notice this exact same approach showcased in one of the photos there:
A disassembled AC adapter showing a simple, unregulated linear DC supply circuit: a transformer, four diodes in a bridge rectifier, and a single electrolytic capacitor to smooth the waveform.
And it’s also documented in an interesting Reddit thread I found, which starts out this way:
Do inexpensive 12v wall warts usually use a transformer to step mains to about 12vac then bridge rectify and regulate to 12vdc?
Or
Do they use some minimal 1:1 transformer for isolation, rectify to dc then use a buck converter to drop to 12v?
Or some other standard clever design?
Look again at the PCB, though, specifically at the markings, and you might notice something curious. Let me move a couple of diodes out of the way to emphasize what I’m talking about:
Capacitor C5, the big one for output filtering, is obviously present. But why are there also markings for capacitors C1-C4 alongside the diodes…and why are they missing? The clue, I’ll suggest, appears in the last bit of Wikipedia’s diode bridge introductory section:
Diodes are also used in bridge topologies along with capacitors as voltage multipliers.
Once again to save cost, I think the manufacturer of this wall wart developed a PCB that could do double-duty. Populated solely with diodes, it (requoting Reddit) “uses a transformer to step mains to about 12vac then bridge rectify and regulate to 12vdc.” And for other wall wart product proliferations with other output DC voltages, you’ll find a mix of both diodes and capacitors soldered onto that same PCB.
Again, as I said before, I’m not an analog or power electronics expert by any means. So, at this point I’ll turn the microphone over to you for your thoughts in the comments. Am I at least in the ballpark with my theory (can you tell that MLB spring training just began as I’m writing this)? Or have I struck out swinging? And what else about this design did you find interesting?
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Wall wart + battery = small UPS: Good idea or not?
- When reliability hangs by a “wall wart” thread
- USB bus-powered devices put an end to Wall Warts
- Those AC/DC modules: my, how you have shrunk!
- Sorting out USB-C power supplies: Specification deceptions and confusing implementations
The post Taking apart a wall wart appeared first on EDN.
Will Intel’s rush to shed non-core assets benefit potential buyers?

Intel, long known for its acquisition misadventures, has finally reached a reality check. Its new CEO, Lip-Bu Tan, has announced spinning off the company’s non-core businesses to focus on core operations: CPU design and contract chip manufacturing. “These parts of Intel are no longer central to its future,” he said during his keynote at the Intel Vision conference in Las Vegas, Nevada.
It’s important to note that Intel has already been on this path since the final days of former CEO Pat Gelsinger. The Santa Clara, California-based semiconductor firm has already spun off FPGA maker Altera. Even before Gelsinger took charge of the top job at Intel in February 2021, the company had sold its NAND memory business to SK hynix for $8.85 billion in 2020.
Tan didn’t indicate whether Intel will divest or sell its non-core businesses. Source: Intel
The company also turned Intel Capital into a standalone investment fund early this year before Tan took the CEO job. While Intel will remain an anchor investor, the fund will help the company to reduce costs and streamline operations.
So, what does this mean when Tan vows to shed the company’s non-core businesses? Apparently, Intel will pursue this endeavor more aggressively now to focus its CPUs on artificial intelligence (AI) and data center applications, along with what Tan calls a Software 2.0 strategy. But will Intel’s rush to shed non-core assets lower their market value? Or will Intel divest these units instead of seeking buyouts? Time will tell.
Intel’s non-core businesses
Now let’s discuss Intel’s non-core businesses. Start with Mobileye, a developer of automotive driver-assist systems, which was listed on Nasdaq in 2022. Though Intel has previously denied plans to divest its majority stake in Mobileye, the unit is now one of the easiest for Intel to offload.
Intel’s networking division could also be up for grabs. However, many industry watchers consider Intel’s Network and Edge (NEX) group a core business of Intel. It focuses on edge computing, networking, and AI solutions while developing modified versions of consumer and data center CPUs for telecom companies and similar entities.
It’s worth noting that Altera and Mobileye are worth approximately $17 billion to $20 billion. Intel can generate a huge amount of cash from those two entities, which, in turn, will bring financial stability to this once-mighty semiconductor outfit now attempting to reclaim its past glory.
Still, the elephant in the room is not the non-core assets but whether Intel will remain whole or split up its CPU and contract manufacturing businesses. At the same time, however, Intel’s decision to shed non-core assets will bring much-needed stability during the turnaround that Tan envisions for this chip industry pioneer.
Related Content
- Who will get Altera in 2025
- We Really Need to Talk about Mobileye
- Intel More Likely to Divest Units Than Seek Buyout
- Intel’s Embarrassment of Riches: Advanced Packaging
- How will Intel’s purchase of Altera affect embedded space?
The post Will Intel’s rush to shed non-core assets benefit potential buyers? appeared first on EDN.
Can a household manage on just 50 amps?

Today’s typical home is wired for at least 100-amp service, and many are wired for twice that number. This makes sense given the multiplicity of modern appliances and electronics in a home. The demand is obviously higher if you have an electric stove/range, or an electric vehicle using a basic in-house Level 1 charger.
The need for power—and more of it—became clear when a neighbor who was having a modest addition put on the house asked me for some advice. The situation was this: the contractor told him that for various reasons, their 100+ A service would be cut to 50 A for a month or two during the construction. The reasons for this cutback were not clear, but it had something to do with cable capacity.
The question I was asked was simple enough: could they—a couple plus two children approaching teen years—manage on just 50 A and, if not, what steps could they take to minimize disruption? (Actually, the word my neighbor used was “survive” rather than “manage,” but I feel that’s overly dramatic.)
The semi-quantitative assessment
My answer was also simple, as I gave the prudent engineering response “it depends” followed by “I’ll think about it and get back to you.” Then I set out to develop a firmer answer by doing some semi-quantitative assessment.
My first impulse was to check the web and, sure enough, there were plenty of apps for assessing house power needs. However, these required a detailed inventory of the loads which was more than I was ready to do. Then I thought I would create an Excel spreadsheet but soon realized that sort of analysis could easily become more precise than the problem merited. After all, my neighbor wanted a simple answer:
- It’s no problem,
- it’s definitely a problem, or
- it’s a manageable “maybe” problem.
Instead, I took out my “back of the envelope” pad and decided to do some rough assessments (Figure 1).
Figure 1 This “back of the envelope” pad serves as a visible reminder that rough and imprecise input numbers should get appropriate analysis and not impute undeserved precision to the results. Source: Bill Schweber
I didn’t actually use this custom-made pad, but instead I kept it in front of me as a constant reminder that I should stick to estimates that were rough enough that they could be added up “in my head” on that pad. The reason for this simplicity is there are a lot of fuzzy numbers in the assessment.
For example, without knowing the make and model of various higher-current appliances such as the electric stove/range, any number I did use would likely have a ±10 to ±20% error band. Further, while the individual errors might cancel each other out to some extent they could also accumulate, resulting in a fairly large error band. In other words, random errors can aggregate either way.
The danger when using a spreadsheet is that you soon fall into a mental false-accuracy trap, since its available precision of more digits soon leads to the sense that there is corresponding accuracy as well, which is clearly not the case here (yes, I could restrict the cells to a few digits, but that’s another thing to do). It’s been my experience that it is very easy to make the leap from rough estimate to a firm “you can bet on it” number, even if there is no basis for doing so; I’ve seen that happen in preliminary design review meetings many times when the project manager asks for some numbers.
Complicating the assessment, some of the larger loads such as the stove/range or microwave oven are under the direct control of the house occupants, while others such as heating system, refrigerator, and separate freezer control their own on/off cycles.
Numbers guide but don’t prove
I asked some questions about what was in the house, made a list, and went online to get a sense of how much current each uses. These rough estimates are for current consumption from a 120 VAC line; for those with 230 VAC lines, the current numbers should be cut in half, so that 50-A maximum would be 25 A:
1) Big loads you can’t control (these intermittent, asynchronous loads cycle on and off with unknown duty cycle; they may add up all at once, or hardly at all)
- Refrigerator/freezer: 6 A (will be higher for a few seconds, as the compressor kicks in)
- Separate outside freezer: 3 to 5 A, depending on outside temperature (same note as above)
- Oil-fired heating system: 5 A, temperature-dependent (same note as above)
- Electric water heater: 5 to 8 A
Total: around 20 A
2) Small loads (some you can control, some not; not an issue unless you are close to the maximum limit)
- Large TV: 1 to 2 A
- Smaller screens: 0.5 A
- Various chargers: 0.5 A or less
- House lights: 0.5 A each
- House network boxes: 1 A
Total: 5 to 10 A
3) Bigger loads that you can control
- Clothes washer: 4 to 6 A
- Clothes dryer: 15 to 20 A
- Air conditioning: 8 to 10 A (but not a factor as this is a winter situation)
- Kitchen range top: 5 to 10 A, depending on model and temperature setting
- Kitchen oven: 8 to 12 A, depending on model and temperature setting
- Toaster Oven: 8 A
- Dishwasher: 9 to 12 A
- Microwave oven: 8 A
- Hair dryer: 10 to 12 A
Total: it depends on what you are using and when, but it adds up very quickly!
Conclusion: The loads you can’t control add up to around 25 A (all of them won’t be on 100% of the time); adding the 5 to 10 A of small loads brings the total to 30 to 35 A, so the family will have about 15 to 20 A of headroom for the loads that can be controlled. That’s doable but also cutting it close; just one additional modest load could cause the supply voltage to droop or brown out, which in turn brings on other operational problems in both motorized and all-electronic products.
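For anyone who wants to repeat this tally for their own situation, here’s a minimal sketch using the load ranges listed above; the 50-A limit and the load currents both roughly halve for a 230 VAC service, so the headroom picture comes out about the same:

```python
# Rough current budget against a 50-A, 120 VAC service, using the
# low/high ends of the load estimates listed above.
SERVICE_LIMIT_A = 50

uncontrolled_a = (19, 24)   # fridge/freezer, outside freezer, heat, water heater
small_loads_a = (5, 10)     # TVs, chargers, lights, network gear

base_lo = uncontrolled_a[0] + small_loads_a[0]
base_hi = uncontrolled_a[1] + small_loads_a[1]
print(f"Base load (not under your control): {base_lo} to {base_hi} A")
print(f"Headroom for the loads you choose: "
      f"{SERVICE_LIMIT_A - base_hi} to {SERVICE_LIMIT_A - base_lo} A")
# A clothes dryer alone (15 to 20 A) can eat nearly all of that headroom,
# which is why the answer is a manageable "maybe" rather than a clear "yes".
```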
Electrical service to older homes
As a curiosity, I checked out some older houses (1930 vintage) in the area, many of which are still occupied by descendants of the original families. Some of the present occupants said that when the houses were built, they were outfitted with 30-A service and used knob-and-tube wiring rather than metal conduit or Romex (NM, or non-metallic sheathed) cable, Figure 2. While they have upgraded to 100+ A service over the years, some still have the knob-and-tube wiring in the attic (not even close to code-approved now).
Figure 2 Early wiring used (a) knob-and-tube wiring with ceramic insulators, which was replaced by (b) metal conduit (still in wide use) and (c) PVC-sheathed non-metallic cable, usually referred to as Romex. Sources: Arc Angel Electric Co., Meteor Electrical, D&F Liquidators
What’s your sense of the home-AC service situation? Have you ever been on a temporary or permanent limited-power budget at home? Have you ever had the corresponding “average load” versus “peak load” power-supply rating dilemma, either for line AC or with the DC supply in a product?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- AC-line safety monitor brings technical, privacy issues
- Ground-fault interruption protection—without a ground?
- When extreme-precision numbers are legitimate
- Even a well-known numerical “constant” may need a major revision
- Why I’m fine with my calculator’s tiny decimal point
Reference
- Arc Angel Electric, “Understanding Knob and Tube Wiring: A Comprehensive Guide”
The post Can a household manage on just 50 amps? appeared first on EDN.
Canadian tech accelerator engages Nokia in quantum telecom testbed

After quantum computing, quantum communication is now stealing the headlines. Numana, a Montreal, Québec-based non-profit technology accelerator, has engaged Nokia and Honeywell Aerospace Technologies in its Kirq Quantum Communication Testbed to advance quantum-safe communication networks. Nokia will contribute its advanced cryptographic network technologies, while Honeywell will share quantum encryption techniques.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Quantum Computers Explained
- Hardware security entering quantum computing era
- A Global Race for Supremacy in Quantum Computing
- Toshiba Claims Breakthrough in Quantum Communication
- BASF and Kipu Focus on End-User Mastery of Quantum Computing
The post Canadian tech accelerator engages Nokia in quantum telecom testbed appeared first on EDN.
EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You

As I briefly noted back in mid-November, I ended up replacing my initial failed-experiment lithium battery-based portable power unit, Energizer’s PowerSource Pro Battery Generator:
with two EcoFlow successors, the smaller RIVER 2:
and this writeup’s subject, the DELTA 2, which I bought in claimed factory-refurbished condition (albeit, like the RIVER 2, also seemingly actually brand new) from EcoFlow via eBay in mid-September on sale (20% off list price) for $479 with an included 2-year extended warranty:
Why both? Or said another way, how do they differ? As you can likely already tell from the stock photo of each, the DELTA 2 is the huskier of the two:
|         | Dimensions                              | Weight         |
|---------|-----------------------------------------|----------------|
| RIVER 2 | 9.6 x 8.5 x 5.7 in                      | 7.7 lbs        |
| DELTA 2 | 15.7 x 8.3 x 11 in (400 x 211 x 281 mm) | 27 lbs (12 kg) |
That said, EcoFlow puts the DELTA 2’s larger volume to good use with 4x the storage capacity: 1,024 Wh versus 256 Wh with the RIVER 2. The expanded front, rear and sides’ cumulative real estate also affords the DELTA 2 a larger and broader allotment of output power ports:
- Six AC (four two-prong, two three-prong with ground): 120V, 50Hz/60Hz, 1800W (along with 2200W at sub-120V per X-Boost technology, as detailed in my RIVER 2 coverage, and 2700W surge), pure sine wave, not simulated
- Two USB-A DC: 5V, 2.4A, 12W max
- Two USB-A “Fast Charge” DC: 5V @ 2.4A / 9V @ 2A / 12V @ 1.5A, 18W max
- “Cigarette lighter” car DC: 12.6V, 10A, 126W max
- Two DC5521 DC: 12.6V, 3A, 38W max (DC5525 adapter cable also included in kit)
- And two USB-C DC: 5/9/12/15/20V 5A, 100W max (unlike with the RIVER 2, however, these can’t do double-duty as charging input ports)
Unlike the RIVER 2, the DELTA 2’s storage capacity can be further expanded, to anywhere from 2 kWh to beyond 3 kWh, by tethering it to a separate DELTA 2, DELTA MAX, or DELTA 2 MAX extra battery via its integrated XT150 connector:
That same XT150 connection also enables in-vehicle fast charging at up to 800W using the Alternator Charger, which I also now own ($319.20 on sale) and plan to install in my van soon:
Unfortunately, the very cool (and similar) looking PowerStream residential power unit, which I’m guessing also communicates with the DELTA 2 over XT150, isn’t currently available in the United States due to regulatory restrictions on plug-in grid solutions.
But the XT150-compatible Smart Generator is:
It’s a bit of an enigma, at least to me, given the company’s seemingly heavy emphasis on solar and other renewable-energy recharging sources. But hey, when the sun’s not shining and your battery’s drained, I suppose this gas-powered generator will do in a pinch. And although the product page implies that it only works with the higher-capacity DELTA Pro and Max units, this company-published video confirms that it’s mainstream DELTA 2-compatible, too:
Whereas the RIVER 2’s XT60i DC charging input, usable with both solar and “cigarette lighter” car sources via cable adapters, tops out at 110 W for solar (100 W for car), the DELTA 2’s is beefier, supporting (again, for solar) an 11-60 V input at up to 15 A/500 W max. Last September, I also bought two refurbished 220 W second-generation EcoFlow solar panels, on sale at the time for $299 each inclusive of a two-year extended warranty:
which I’ll be cable-extending and in-parallel combining:
Stand by for coverage of them, along with hands-on impressions of the entire setup, to come.
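Since the DELTA 2’s solar input is specified as 11-60 V at up to 15 A and 500 W, here’s a minimal sketch for checking whether a series or parallel pairing of two panels fits that window. The panel operating-point values below are placeholders I’ve assumed for illustration, not EcoFlow’s official figures; check your panels’ datasheet before wiring anything:

```python
# Check a two-panel combination against the DELTA 2's published solar
# input window: 11-60 V, 15 A max, 500 W max.
V_MIN, V_MAX, I_MAX, P_MAX = 11.0, 60.0, 15.0, 500.0

# Placeholder operating point for a nominal 220 W panel (assumed values;
# substitute the Vmp/Imp from your panel's datasheet). Note that it's the
# open-circuit voltage, not Vmp, that must stay under the 60-V ceiling.
vmp, imp = 18.4, 12.0

def check(name: str, volts: float, amps: float) -> None:
    watts = volts * amps
    ok = V_MIN <= volts <= V_MAX and amps <= I_MAX and watts <= P_MAX
    verdict = "within limits" if ok else "exceeds an input limit"
    print(f"{name}: {volts:.1f} V, {amps:.1f} A, {watts:.0f} W -> {verdict}")

check("Two panels in series", 2 * vmp, imp)       # voltages add
check("Two panels in parallel", vmp, 2 * imp)     # currents add
# If a combination exceeds the current limit, the charger simply draws
# less than the panels could deliver, leaving some power unharvested.
```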
What about AC charging? Although, as previously mentioned, the DELTA 2 has 4x the storage capacity of its RIVER 2 sibling, the charging speeds are surprisingly similar. Whereas the RIVER 2 will charge from 0% to full in 60 minutes, EcoFlow claims that the DELTA 2 will get to 80% in 50 minutes and completely full in 80 minutes. Photos taken during the first-time charging of my unit show that the initial charging rate:
automatically slows down as the full-charge threshold nears (note the input power variance):
and is eventually reached:
Here are those same first two charging segments captured by the wireless-tethered mobile app:
which is capable of simultaneously communicating with both of my EcoFlow devices:
assuming they’re both powered on at the time:
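As a sanity check on those charging claims, the implied average power flowing into the 1,024-Wh pack is easy to estimate (conversion losses, which I’m ignoring here, would push the wall-outlet draw somewhat higher):

```python
# Back-of-envelope check of the DELTA 2's claimed AC charging times.
capacity_wh = 1024

# Claimed: 0 to 80% in 50 minutes, 0 to 100% in 80 minutes.
avg_power_to_80 = (0.80 * capacity_wh) / (50 / 60)   # average watts into the pack
avg_power_full = (1.00 * capacity_wh) / (80 / 60)

print(f"Average power, 0-80%: about {avg_power_to_80:.0f} W")
print(f"Average power, 0-100%: about {avg_power_full:.0f} W")
# The lower full-charge average reflects the taper visible in the photos
# above: the charge rate backs off as the pack approaches full.
```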
And what of generational enhancements and broader differences? As with the RIVER-to-RIVER 2 sequence I discussed in my recent coverage, EcoFlow also evolved the DELTA 2’s core battery technology from its precursor’s NMC (lithium nickel manganese cobalt), which is only capable of a few hundred recharge cycles before its maximum storage capacity degrades to unusable levels in realistic usage scenarios, to a LiFePO4 (lithium iron phosphate), also known as LFP (lithium ferrophosphate), battery formulation. Whereas the first-generation DELTA was guaranteed for only 500 recharge cycles, with the DELTA 2 it’s 3,000 (in both cases to 80+% of the original battery pack capacity), along with offering a boosted 5-year warranty.
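To put those cycle-count figures in rough calendar terms, here’s a quick sketch; the one-full-cycle-per-day usage pattern is my assumption for illustration, not anything EcoFlow specifies:

```python
# Rough calendar-life estimate from rated recharge-cycle counts.
cycles_per_day = 1.0   # assumed usage pattern, for illustration only

for name, rated_cycles in [("Original DELTA (NMC)", 500), ("DELTA 2 (LFP)", 3000)]:
    years = rated_cycles / cycles_per_day / 365
    print(f"{name}: roughly {years:.1f} years before capacity drops to ~80%")
```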
And last September, EcoFlow launched not only the RIVER 3 family but also its first two DELTA 3 devices. The first, the DELTA 3 Plus, is now shipping as I write these words at the end of 2024:
Improvements versus the DELTA 2 predecessor include:
- Faster switching from wall outlet-sourced to inverter-generated AC (at higher power, too) for more robust UPS functional emulation, as with the RIVER 3
- Improved airflow, leading to claimed 30 dB noise levels in normal operation
- Newer-generation, denser 40135 batteries, translating to smaller dimensions and lighter weight, along with a recharge cycle count boosted to 4,000
- Expansion support up to 5 kWh
- Even faster AC charging (sub-1 hour to 100%)
That all said, the DELTA 3 Plus has the same 1-kWh capacity as the non-Plus DELTA 2. What, then, of the baseline DELTA 3 that was also briefly mentioned in last September’s unveiling and supposedly available in October? Detailed specs are not yet public, at least to the best of my knowledge, as I submit this writeup. Instead (or in addition?), EcoFlow has stealth-launched the DELTA 3 1500:
whose two-color-option styling is reminiscent of the DELTA 2 but with boosted 1.5 kWh capacity and other tweaks. Specs are also scant for this device, but the Reddit crowd was able to dig up a user manual. My guess? EcoFlow is struggling to source enough lithium batteries (brand new DELTA 2 supplemental batteries are also MIA right now, although refurbs occasionally appear on eBay, the company website, etc.) and is dynamically evolving its product line in response.
In closing, after re-reading this piece, I realize that I may have come off as a bit (or more than a bit) of an EcoFlow “fanboy”. To be abundantly clear…I paid for all this gear myself (with no post-publication kickbacks), and the company doesn’t even know I’m doing these writeups. I just think that the products and their underlying technologies are quite cool. Agree or disagree? Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2024 edition
- Multi-solar panel interconnections: Mind the electrons’ directions
- EcoFlow’s RIVER 2: Svelte portable power with lithium iron phosphate fuel
- The Energizer 200W portable solar panel: A solid offering, save for a connector too fragile
The post EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You appeared first on EDN.