Edge AI powers the next wave of industrial intelligence

Artificial intelligence is moving out of the cloud and into the operations that create and deliver products to us every day. Across manufacturing lines, logistics centers, and production facilities, AI at the edge is transforming industrial operations, bringing intelligence directly to the source of data. As the industrial internet of things (IIoT) matures, edge-based AI is no longer an optional enhancement; it’s the foundation for the next generation of productivity, quality, and safety in industrial environments.
This shift is driven by the need for real-time, contextually aware intelligence—systems that can see, hear, and even “feel” their surroundings, analyze sensor data instantly, and make split-second decisions without relying on distant cloud servers. From predictive maintenance and automated inspection to security monitoring and logistics optimization, edge AI is redefining how machines think and act.
Why industrial AI belongs at the edge
Traditional industrial systems rely heavily on centralized processing. Data from machines, sensors, and cameras is transmitted to the cloud for analysis before insights are sent back to the factory floor. While effective in some cases, this model is increasingly impractical and inefficient for modern, latency-sensitive operations.
Implementing AI at the edge addresses this. Instead of sending vast streams of data off-site, intelligence is brought closer to where the data is created: within or around the machine, gateway, or local controller itself. This local processing offers three primary advantages:
- Low latency and real-time decision-making: In production lines, milliseconds matter. Edge-based AI can detect anomalies or safety hazards and trigger corrective actions instantly without waiting for a network round-trip.
- Enhanced security and privacy: Industrial environments often involve proprietary or sensitive operational data. Processing locally minimizes data exposure and vulnerability to network threats.
- Reduced power and connectivity costs: By limiting cloud dependency, edge systems conserve bandwidth and energy, a crucial benefit in large, distributed deployments such as logistics hubs or complex manufacturing centers.
These benefits have sparked a wave of innovation in AI-native embedded systems, designed to deliver high performance, low power consumption, and robust environmental resilience—all within compact, cost-optimized footprints.
Edge-based AI is the foundation for the next generation of productivity, quality, and safety in industrial environments, delivering low latency, real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs. (Source: Adobe AI Generated)
Localized intelligence for industrial applications
Edge AI’s success in IIoT is largely based on contextual awareness, which can be defined as the ability to interpret local conditions and act intelligently based on situational data. This requires multimodal sensing and inference across vision, audio, and even haptic inputs. In manufacturing, for example:
- Vision-based inspection systems equipped with local AI can detect surface defects or assembly misalignments in real time, reducing scrap rates and downtime.
- Audio-based diagnostics can identify early signs of mechanical failure by recognizing subtle deviations in sound signatures.
- Touch or vibration sensors help assess machine wear, contributing to predictive maintenance strategies that reduce unplanned outages.
In logistics and security, edge AI cameras provide real-time monitoring, object detection, and identity verification, enabling autonomous access control or safety compliance without constant cloud connectivity. A practical example of this approach is a smart license-plate-recognition system deployed in industrial zones, a compact unit capable of processing high-resolution imagery locally to grant or deny vehicle access in milliseconds.
In all of these scenarios, AI inference happens on-site, reducing latency and power consumption while maintaining operational autonomy even in network-constrained environments.
Low power, low latency, and local learning
Industrial environments are unforgiving. Devices must operate continuously, often in high-temperature or high-vibration conditions, while consuming minimal power. This has made energy-efficient AI accelerators and domain-specific system-on-chips (SoCs) critical to edge computing.
A good example of this trend is the early adoption of the Synaptics Astra SL2610 SoC platform by Grinn, which has already resulted in a production-ready system-on-module (SOM), Grinn AstraSOM-261x, and a single-board computer (SBC). By offering a compact, industrial-grade module with full software support, Grinn enables OEMs to accelerate the design of new edge AI devices and shorten time to market. This approach helps bridge the gap between advanced silicon capabilities and practical system deployment, ensuring that innovations can quickly translate into deployable industrial solutions.
The Grinn–Synaptics collaboration demonstrates how industrial AI systems can now run advanced vision, voice, and sensor fusion models within compact, thermally optimized modules.
These platforms combine:
- Embedded quad-core Arm processors for general compute tasks
- Dedicated neural processing units (NPUs) delivering multi-trillion operations per second for inference
- Comprehensive I/O for camera, sensor, and audio input
- Industrial-grade security
Equally important is support for custom small language models (SLMs) and on-device training capabilities. Industrial environments are unique. Each factory line, conveyor system, or inspection station may generate distinct datasets. Edge devices that can perform localized retraining or fine-tuning on new sensor patterns can adapt faster and maintain high accuracy without cloud retraining cycles.
The Grinn OneBox AI-enabled industrial SBC, designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. (Source: Grinn Global)
Emergence of compact multimodal platforms
The recent introduction of next-generation SoCs such as Synaptics’ SL2610 underscores the evolution of edge AI hardware. Built for embedded and industrial systems, these platforms offer integrated NPUs, vision digital-signal processors, and sensor fusion engines that allow devices to perceive multiple inputs simultaneously, such as camera feeds, audio signals, or even environmental readings.
Such capabilities enable richer human-machine interaction in industrial contexts. For instance, a line operator can use voice commands and gestures to control inspection equipment, while the system responds with real-time feedback through both visual indicators and audio prompts.
Because the processing happens on-device, latency is minimal, and the system remains responsive even if external networks are congested. Low-power design and adaptive performance scaling also make these platforms suitable for battery-powered or fanless industrial devices.
From the cloud to the floor: practical examples
Collaborations like the Grinn–Synaptics development have produced compact, power-efficient edge computing modules for industrial and smart city deployments. These modules integrate high-performance neural processing, customized AI implementations, and ruggedized packaging suitable for manufacturing and outdoor environments.
Deployed in use cases such as automated access control and vision-guided robotics, these systems demonstrate how localized AI can replace bulky servers and external GPUs. All inference, from image recognition to object tracking, is performed on a module the size of a matchbox, using only a few watts of power.
The results:
- Reduced latency from hundreds of milliseconds to under 10 ms
- Lower total system cost by eliminating cloud compute dependencies
- Improved reliability in areas with limited connectivity or strict privacy requirements
The same architecture supports multimodal sensing, enabling combined visual, auditory, and contextual awareness—key for applications such as worker safety systems that must recognize both spoken alerts and visual cues in noisy and complex factory environments.
Toward self-learning, sustainable intelligence
The evolution of edge AI is about more than just performance; it’s about autonomy and adaptability. With support for custom, domain-specific SLMs, industrial systems can evolve through continual learning. For example, an inspection model might retrain locally as lighting conditions or material types change, maintaining precision without manual recalibration.
Moreover, the combination of low-power processing and localized AI aligns with growing sustainability goals in industrial operations. Reducing data transmission, cooling needs, and cloud dependencies contributes directly to lower carbon footprints and energy costs, critical as industrial AI deployments scale globally.
Edge AI as the engine of industrial transformation
The rise of AI at the edge marks a turning point for IIoT. By merging context-aware intelligence with efficient, scalable compute, organizations can unlock new levels of operational visibility, flexibility, and resilience.
Edge AI is no longer about supplementing the cloud; it’s about bringing intelligence where it’s most needed, empowering machines and operators alike to act faster, safer, and smarter.
From the shop floor to the supply chain, localized, multimodal, and energy-efficient AI systems are redefining the digital factory. With continued innovation from technology partnerships that blend high-performance silicon with real-world design expertise, the industrial world is moving toward a future where every device is an intelligent, self-aware contributor to production excellence.
The ecosystem view around an embedded system development

Like in nature, development tools for embedded systems form “ecosystems.” Some ecosystems are very self-contained, with little overlap with others, while other ecosystems are very open and broad, with support for everything but the kitchen sink. Moreover, developers and engineers have strong opinions (to put it mildly) on this subject.
So, we developed a greenhouse that sustains multiple ecosystems; the greenhouse demo we built shows multiple microcontrollers (MCUs) and their associated ecosystems working together.
The greenhouse demo
The greenhouse demo is a simplified version of a greenhouse controller. The core premise of this implementation is to intelligently open/close the roof to allow rainwater into the greenhouse. This is implemented using a motorized canvas tarp mechanism. The canvas tarp was created from old promotional canvas tote bags and sewn into the required shape.
The mechanical guides and lead screw for the roof are repurposed from a 3D printer with a stepper motor drive. An evaluation board is used as a rain sensor. Finally, a user interface panel enables a manual override of the automatic (rain) controls.

Figure 1 The greenhouse demo is mounted on a tradeshow wedge. Source: Microchip
It’s implemented as four function blocks:
- A user interface, capacitive touch controller with the PIC32CM GC Curiosity Pro (EA36K74A) in VS Code
- A smart stepper motor controller reference design built on the AVR EB family of MCUs in MPLAB Code Configurator Melody
- A main application processor with SAM E54 on the Xplained Pro development kit (ATSAME54-XPRO), running Zephyr RTOS
- A liquid detector using the MTCH9010 evaluation kit
The greenhouse demo outlined in this article is based on a retractable roof developed by Microchip’s application engineering team in Romania. This reference design is implemented in a slightly different fashion from the greenhouse, with the smart stepper motor controller interfacing directly with the MTCH9010 evaluation board to control the roof position. This configuration is ideal for applications where the application processor does not need to be aware of the current state of the roof.

Figure 2 This retractable roof demo was developed by a design team in Romania. Source: Microchip
User interface controller
Since the control panel for this greenhouse normally would be in an area where water should be expected, it was important to take this into account when designing the user interface. Capacitive touch panels are attractive as they have no moving parts and can be sealed under a panel easily. However, capacitive touch can be vulnerable to false triggers from water.
To minimize these effects, an MCU with an enhanced peripheral touch controller (PTC) was used to contain the effects of any moisture present. Development of the capacitive touch interface was aided with MPLAB Harmony and the capacitive touch libraries, which greatly reduce the difficulty in developing touch applications.
The user interface for this demo is composed of a PIC32CM GC Curiosity Pro (EA36K74A) development kit connected to a QT7 Xplained Pro Extension (ATQT7-XPRO) kit to provide a (capacitive) slider and two touch buttons.

Figure 3 The QT7 Xplained Pro extension kit provides a self-capacitance slider and two self-capacitance buttons, along with 8 LEDs for button-state and slider-position feedback. Source: Microchip
The two buttons allow the user to fully open or close the tarp, while the slider enables partial open or closed configurations. When the user interface is idle for 30 seconds or more, the demo switches back to the MTCH9010 rain sensor to automatically determine whether the tarp should be opened or closed.
Smart stepper motor controller
The smart stepper motor controller is a reference design that utilizes the AVR EB family of MCUs to generate the waveforms required for full-stepping, half-stepping, or microstepping of a stepper motor. By having the MCU generate the waveforms, the motor can operate independently, rather than requiring logic or interaction from the main application processor(s) elsewhere in the system. This is useful when local signals such as limit switches, mechanical stops, and quadrature encoders must be monitored.

Figure 4 Smart stepper motor reference design uses core independent peripherals (CIPs) inside the MCUs to microstep a bipolar winding stepper motor. Source: Microchip
The MCU receives commands from the application processor and executes them to move the tarp to a specified location. One of the nice things about this being a “smart” stepper motor controller is that the functionality can be adjusted in software. For instance, if analog signals or limit switches are added, the firmware can be modified to account for these signals.
While the PCB attached to the motor is custom, this function block can be replicated with the multi-phase power board (EV35Z86A), the AVR EB Curiosity Nano adapter (EV88N31A) and the AVR EB Curiosity Nano (EV73J36A).
Application processor and other ecosystems
The application processor in this demo is a SAM E54 MCU that runs Zephyr real-time operating system (RTOS). One of the biggest advantages of Zephyr over other RTOSes and toolchains is the way that the application programming interface (API) is kept uniform with clean divisions between the vendor-specific code and the abstracted, higher-level APIs. This allows developers to write code that works across multiple MCUs with minimal headaches.
Zephyr also has robust networking support and an ever-expanding list of capabilities that make it a must-have for complex applications. Zephyr is open source (Apache 2.0 licensing) with a very active user base and support for multiple different programming tools such as—but not limited to—OpenOCD, Segger J-Link and gdb.
Beyond the ecosystems used directly in the greenhouse demo, there are several other options. Some of the more popular examples include IAR Embedded Workbench, Arm Keil, MikroE’s Necto Studio and SEGGER Embedded Studio. These tools are premium offerings with advanced features and high-quality support to match.
For instance, I recently had an issue with booting Zephyr on an MCU where I could not access the usual debuggers and printf was not an option. I used SEGGER Ozone with a J-Link+ to troubleshoot this complex issue. Ozone is a special debug environment that eschews the usual IDE tabs to provide the developer with more specialized windows and screens.
In my case, the MCU would start up correctly from the debugger, but not from a cold start. After some troubleshooting and testing, I eventually determined that one of the faults was a RAM initialization error in my code. I patched the issue with a tiny piece of startup assembly that ran before the main kernel started up. A sketch of that kind of startup fix is shown below for anyone interested.
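A minimal sketch of this kind of early fix-up (not the original listing; __fix_start and __fix_end are hypothetical linker symbols marking the affected region) might look like this:

```
    .syntax unified
    .thumb

    /* Sketch: zero-fill a RAM region missed by the default startup
       code, before the kernel and C runtime are entered. */
    .global early_ram_fix
    .thumb_func
early_ram_fix:
    ldr   r0, =__fix_start      /* hypothetical linker symbols */
    ldr   r1, =__fix_end
    movs  r2, #0
0:  cmp   r0, r1
    bcs   1f                    /* stop when r0 >= r1 */
    str   r2, [r0], #4          /* zero one word, post-increment */
    b     0b
1:  bx    lr
```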

The moral of the story is that development environments offer unique advantages. An example of this is IAR adding support for Zephyr to its IDE solution. In many ways, the choice of what ecosystem to develop in is up to personal preference.
There isn’t really a wrong answer if the ecosystem does what you need to make your design work. The greenhouse demo embodies this by showing multiple ecosystems and toolchains working together in a single system.
Robert Perkel is an application engineer at Microchip Technology. In this role, he develops technical content such as application notes, contributed articles, and design videos. He is also responsible for analyzing use-cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech where he earned a Bachelor of Science degree in Computer Engineering.
Related Content
- Just What is an Embedded System?
- Making an embedded system safe and secure
- Developing Energy-Efficient Embedded Systems
- Building Embedded Systems that Survive the Edge
- Next Gen Embedded System Hardware, Software, Tools, and Operating Systems
The role of motion sensors in the industrial market

The future of the industrial market is being established by groundbreaking technologies that promise to reveal unique potential and redefine what is possible. These innovations range from collaborative robots (cobots) and artificial intelligence to the internet of things, digital twins, and cloud computing.
Cobots are not just tools but partners, empowering human workers to achieve greater creativity and productivity together. AI is ushering industries into a new era of intelligence, where data-driven insights accelerate innovation and transform challenges into opportunities.
The IoT is weaving machines and systems into vast, interconnected networks that enable seamless communication and real-time responsiveness like never before. Digital twins bring imagination to life by creating virtual environments where ideas can be tested, refined, and perfected before they touch reality. Cloud computing serves as the backbone of this revolution, offering limitless power and connectivity to drive brave visions forward.
Together, these technologies are inspiring a new industrial renaissance, where innovation, sustainability, and human initiative converge to build a smarter, more resilient world.
The role of sensors
Sensors are the silent leaders driving the industrial market’s transformation into a realm of intelligence and possibility. Serving as the “eyes and ears” of smart factories, these devices unlock the power of real-time data, enabling industries to look beyond the surface and anticipate the future. By continuously sensing pressure, temperature, position, vibration, and more, sensors enable continuous monitoring and bring machines to life, turning them into connected, responsive entities within the industrial IoT (IIoT).
This flow of information accelerates innovation, enables predictive maintenance, and enhances safety. Sensors do not just monitor; they usher in a new era where efficiency meets sustainability, where every process is optimized, and where industries embrace change with confidence. In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries.
Challenges for industrial motion sensing applications
Sensors in industrial environments face several significant challenges. They must operate continuously for years on battery power without failure. Additionally, it is crucial that they capture every critical event to ensure no incidents are missed. Sensors must provide accurate and precise tracking to manage processes effectively. Simultaneously, they need to be compact yet powerful, integrating multiple functions into a small device.
Most importantly, sensors must deliver reliable tracking and data collection in any environment—whether harsh, noisy, or complex—ensuring consistent performance regardless of external conditions. Overcoming these challenges is essential to making factories smarter and more efficient through connected technologies, such as the IIoT and MEMS motion sensors.
MEMS inertial sensors are essential devices that detect motion by measuring accelerations, vibrations, and angular rates, ensuring important events are never missed in an industrial environment. Customers need these motion sensors to work efficiently while saving power and to keep performing reliably even in tough conditions, such as high temperatures.
However, there are challenges to overcome. Sometimes sensors can become overwhelmed, causing them to miss important impact or vibration details. Using multiple sensors to cover different motion ranges can be complicated, and managing power consumption in an IIoT node is also a concern.
There is a tradeoff between accuracy and range: Sensors that measure small movements are very precise but can’t handle strong impacts, while those that detect strong impacts are less accurate. In industrial settings, sensors must be tough enough to handle harsh environments while still providing reliable and accurate data. Solving these challenges is key to making MEMS sensors more effective in many applications.
How the new ST industrial IMU can help
Inertial measurement units (IMUs) typically integrate accelerometers to measure linear acceleration and gyroscopes to detect angular velocity. These devices often deliver space and cost savings while reducing design complexity.
One example is ST’s new ISM6HG256X intelligent IMU. This MEMS sensor is the industry’s first IMU for the industrial market to integrate high-g and low-g sensing into a single package with advanced features such as sensor fusion and edge processing.
The ISM6HG256X addresses key industrial market challenges by integrating a single mechanical structure for an accelerometer with a wide dynamic range capable of capturing both low-g vibrations (16 g) and high-g shocks (256 g) and a gyroscope, effectively eliminating the need for multiple sensors and simplifying system architecture. This compact device leverages embedded edge processing and adaptive self-configurability to optimize performance while significantly reducing power consumption, thereby extending battery life.
Engineered to withstand harsh industrial environments, the IMU reliably operates at temperatures up to 105°C, ensuring consistent accuracy and durability under demanding conditions. Supporting Industry 5.0 initiatives, the sensor’s advanced sensing architecture and edge processing capabilities enable smarter, more autonomous industrial systems that drive innovation.
Unlocking smarter tracking and safety, this integrated MEMS motion sensor is designed to meet the demanding needs of the industrial sector. It enables real-time asset tracking for logistics and shipping, providing up-to-the-minute information on location, status, and potential damage. It also enhances worker safety through wearable devices that detect falls and impacts, instantly triggering emergency alerts to protect personnel.
Additionally, it supports condition monitoring by accurately tracking vibration, shock, and precise motion of industrial equipment, helping to prevent downtime and costly failures. In factory automation, the solution detects unusual vibrations or impacts in robotic systems instantly, ensuring smooth and reliable operation. By combining tracking, monitoring, and protection into one component, industrial operations can achieve higher efficiency, safety, and reliability with streamlined system design.
The ISM6HG256X IMU sensor combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement. (Source: STMicroelectronics)
As the industrial market landscape evolves toward greater flexibility, sustainability, and human-centered innovation, industrial IMU solutions are aligned with the key drivers shaping the future of the industrial market. IMUs can enable precise motion tracking, reliable condition monitoring, and energy-efficient edge processing while supporting the decentralization of production and enhancing resilience and agility within supply chains.
Additionally, the integration of advanced sensing technologies contributes to sustainability goals by optimizing resource use and minimizing waste. As manufacturers increasingly adopt AI-driven collaboration and advanced technology integration, IMU solutions provide the critical data and reliability needed to drive innovation, customization, and continuous improvement across the industry.
Lightning and trees

We’ve looked at lightning issues before. Please see “Ground strikes and lightning protection of buried cables.”
The headline below was found online at ABC7NY:

Recent headline from the local paper. Source: ABC7NY
This ABC7NY article describes how a teenage boy tried to take refuge from the rain in a thunderstorm by getting under the canopy of a tree. In that article, we find this quote: “The teen had no way of knowing that the tree would be hit by lightning.”
This quote, apparently the opinion of the article’s author, is absolutely incorrect. It is total and unforgivable rubbish.
Even when I was knee-high to Jiminy Cricket, I was told over and over and over by my parents NEVER to try to get away from rain by hiding under a tree. Any tree that you come across will have its leaves reaching way up into the air, and those wet leaves are a prime target for a lightning strike, as illustrated in this screenshot:

Conceptual image of lightning striking tree. Source: Stockvault
Somebody didn’t impart this basic safety lesson to this teenager. It is miraculous that this teenager survived the event. The above article cites second-degree burns, but a radio item that I heard about this incident also cites nerve damage and a great deal of lingering pain.
Recovery is expected.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Ground strikes and lightning protection of buried cables
- Lightning rod ball
- Teardown: Zapped weather station
- No floating nodes
- Why do you never see birds on high-tension power lines?
- Birds on power lines, another look
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Misplaced insulator proves fatal
10BASE-T1S endpoints simplify zonal networks

Microchip’s LAN866x 10BASE-T1S endpoint devices use the Remote Control Protocol (RCP) to extend Ethernet connectivity in in-vehicle networks. The endpoints enable centralized control of edge nodes for data streaming and device management, while the 10BASE-T1S multidrop topology supports an all-Ethernet zonal architecture.

LAN866x endpoints serve as bridges that translate Ethernet packets directly to local interfaces for lighting control, audio transmission, and sensor or actuator management over the network. This approach eliminates node-specific software programming, simplifying system architecture and reducing both hardware and engineering costs.
The RCP-enabled endpoint devices join Microchip’s Single Pair Ethernet (SPE) line of transceivers, bridges, switches, and development tools. These components enable reliable, high-speed data transmission over a single twisted pair cable supporting 10BASE-T1S, 100BASE-T1, and 1000BASE-T1.
The LAN8660 control, LAN8661 lighting, and LAN8662 audio endpoints are available in limited sampling. For more information about Microchip’s automotive Ethernet products, including these endpoints, click here.
Development kit enables low-power presence detection

SPARK’s Presence Detection Kit (PDK), powered by the SR1120 LE-UWB transceiver, delivers low-power, robust sensing for connected devices. Its low-energy ultra-wideband (LE-UWB) technology helps designers overcome the high power consumption and interference challenges of Bluetooth, Wi-Fi, and conventional UWB.

LE-UWB supports unidirectional and bidirectional communication, ultra-low-power beaconing with configurable detection zones, and line-of-sight Time-of-Flight (ToF) measurement for precise proximity and distance sensing. SPARK reports that its LE-UWB technology consumes over 10× less power (30 µW at 4 Hz) than Bluetooth/BLE beaconing and delivers more than 20× higher power efficiency than standard UWB.
SPARK provides an energy-optimized firmware stack for presence detection, including APIs for beaconing, ranging, data transmission, and OTA firmware updates. Reference hardware kits, demo applications, and GUIs allow engineers to evaluate detection performance, adjust detection zones, and accelerate prototyping. The PDK hardware is selected to optimize performance, power, and cost, and integrates across a broad range of MCUs and software architectures.
Presence detection kits are available now. For details on board and kit configurations, contact NA_sales@sparkmicro.com.
High-density ATE supplies boost test capabilities

Keysight has expanded its power test portfolio with three families of ATE system power supplies spanning 1.5 kW to 12 kW. The RP5900 series of regenerative DC power supplies, EL4900 series of regenerative DC electronic loads, and DP5700 series of system DC power supplies deliver high density, bidirectional, and regenerative capabilities, paired with intelligent automation software for efficient design validation.

This range enables engineers to validate complex, multi-kilowatt devices with greater precision and repeatability while using less space and energy. Supplies deliver up to 6 kW in 1U or 12 kW in 2U, with full regenerative capability, offering two to three times more channels in the same footprint as previous systems.
Keysight’s automated power suite lets engineers run complex tests—such as long-duration cycling, state-of-charge battery emulation, and transient replication—consistently and efficiently. It includes removable SD memory for secure workflows between classified and open labs, software that complies with NIST SP800-171 SSDF standards, and regenerative operation that returns energy to the grid, reducing waste and supporting sustainability.
For more information about each of the three high-density power series, click here.
Partners bring centimeter-level GNSS to IoT

Quectel is bundling its Real-Time Kinematic (RTK)-capable GNSS modules and antennas with Swift Navigation’s Skylark RTK correction service. Together, the hardware and service enable centimeter-level positioning accuracy for mass-market IoT applications and streamline RTK adoption.

Partnering with Swift allows Quectel to deliver optimized solutions for specific applications, helping equipment manufacturers navigate the complexities of RTK adoption. The Quectel RTK Correction Solution supports a wide range of use cases, including robotics, automotive, micro-mobility, precision agriculture, surveying, and mining. Swift’s Skylark provides multi-constellation, multi-frequency RTK corrections with broad geographic coverage across North America, Europe, and Asia-Pacific.
The RTK global offering ensures consistent compatibility and performance across regions, supporting quad-band GNSS RTK modules such as the LG290P, LG580P, and LG680P, as well as the dual-band LC29H series. These modules maintain exceptional RTK accuracy even in challenging environments. Quectel complements its hardware with full-stack services, including engineering support, precision antenna provisioning, and tuning.
Multiprotocol firmware streamlines LoRa IoT design

Semtech’s Unified Software Platform (USP) for its LoRa Plus transceivers enables multiprotocol IoT deployments on a single hardware platform. It manages LoRaWAN, Wireless M-Bus, Wi-SUN FSK, and proprietary protocols, eliminating the need for protocol-specific hardware variants.

LoRa Plus LR20xx transceivers integrate 4th-generation LoRa IP that supports both terrestrial and non-terrestrial networks across sub-GHz, 2.4-GHz ISM, and licensed S-bands. The LoRa USP provides a unified firmware ecosystem for multiprotocol operation on various MCU platforms through open-source environments such as Zephyr. It also offers backward-compatible build options for Gen 2 SX126x and Gen 3 LR11xx devices.
LoRa USP succeeds LoRa Basics Modem as Semtech’s multiprotocol firmware platform. Both platforms share the same set of APIs, ensuring a seamless transition to the USP version. USP supports both bare-metal and Zephyr OS implementations.
Designer’s guide: PMICs for industrial applications

Power management integrated circuits (PMICs) are an essential component in the design of any power supply. Their main function is to integrate several complex features, such as switching and linear power regulators, electrical protection circuits, battery monitoring and charging circuits, energy-harvesting systems, and communication interfaces, into a single chip.
Compared with a solution based on discrete components, PMICs greatly simplify the development of the power stage, reducing the number of components required, accelerating validation and therefore the design’s time to market. In addition, PMICs qualified for specific applications, such as automotive or industrial, are commercially available.
In industrial and industrial IoT (IIoT) applications, PMICs address key power challenges such as high efficiency, robustness, scalability, and flexibility. The use of AI techniques is being investigated to improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.
Achieving high efficiency
Industrial and IIoT applications require multiple power rails with different voltage and current requirements. Logic processing components, such as microcontrollers (MCUs) and FPGAs, require very low voltages, while peripherals, such as GPIOs and communication interfaces, require voltages of 3.3 V, 5 V, or higher.
These requirements are now met by multichannel PMICs, which integrate switching buck, boost, or buck-boost regulators, as well as one or more linear regulators, typically of the low-dropout (LDO) type, and power switches, very useful for motor control. Switching regulators offer very high efficiency but generate electromagnetic noise related to the charging and discharging process of the inductor.
LDO regulators, which achieve high efficiency only when the output voltage differs slightly from the input voltage to the converter, are instead suitable for low-noise applications such as sensors and, more generally, where analog voltages with very low amplitude need to be managed.
Besides multiple power rails, industrial and IIoT applications require solutions with high efficiency. This requirement is essential for prolonging battery life, reducing heat dissipation, and saving space on the printed-circuit board (PCB) using fewer components.
To achieve high efficiency, one of the first parameters to consider is the quiescent current (IQ), which is the current that the PMIC draws when it is not supplying any load, while keeping the regulators and other internal functions active. A low IQ value reduces power losses and is essential for battery-powered applications, enabling longer battery operation.
PMICs are now commercially available that integrate regulators with very low IQ values, on the order of microamps or less. However, a low IQ value should not compromise transient response, another parameter to consider for efficiency. Transient response, or response time, indicates the time required by the PMIC to adapt to sudden load changes, such as when switching from no load to active load. In general, depending on the specific application, it is advisable to find the right compromise between these two parameters.
Nordic Semiconductor’s nPM2100 (Figure 1) is an example of a low-power PMIC. Integrating an ultra-efficient boost regulator, the nPM2100 provides a very low IQ, addressing the needs of various battery-powered applications, including Bluetooth asset tracking, remote controls, and smart sensors.
The boost regulator can be powered from an input range of 0.7 to 3.4 V and provides an output voltage in the range of 1.8 V to 3.3 V, with a maximum output current of 150 mA. It also integrates an LDO/load switch that provides up to 50-mA output current with an output voltage in the range of 0.8 V to 3.0 V.
The nPM2100’s regulator offers an IQ of 150 nA and achieves up to 95% power conversion efficiency at 50 mA and 90.5% efficiency at 10 µA. The device also has a low-current ship mode of 35 nA that allows it to be transported without removing the installed battery. Multiple options are available for waking up the device from this low-power state.
An ultra-low-power wakeup timer is also available. This is suitable for timed wakeups, such as Bluetooth LE advertising performed by a sensor that remains in an idle state for most of the time. In this hibernate state, the maximum current absorbed by the device is 200 nA.
Another relevant technique that helps increase efficiency is dynamic voltage and frequency scaling (DVFS).
When powering logic devices built with CMOS technology, such as common MCUs, processors, and FPGAs, a distinction can be made between static and dynamic power consumption. While the former is simply the product of the supply voltage and the idle current, dynamic power is expressed by the following formula:
Pdynamic = C × VCC² × fSW
where C is the load capacitance, VCC is the voltage applied to the device, and fSW is the switching frequency. This formula shows that the dissipated power has a quadratic relationship with voltage and a linear relationship with frequency. The DVFS technique works by reducing these two electrical parameters, adapting them to the dynamic requirements of the load.
Consider now a sensor that transmits data sporadically and for short intervals, or an industrial application, such as a data center’s board running AI models. By reducing both voltage and frequency when they are not needed, DVFS can optimize power management, enabling significant improvements in energy efficiency.
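As a purely illustrative example (values chosen for the arithmetic, not taken from any datasheet): with C = 100 pF, VCC = 1.2 V, and fSW = 100 MHz, Pdynamic = 100 pF × (1.2 V)² × 100 MHz = 14.4 mW. Scaling back to 0.9 V and 50 MHz during light-load periods gives about 4.05 mW, a reduction of more than 70% from voltage and frequency scaling alone.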
NXP Semiconductors’ PCA9460 is a 13-channel PMIC specifically designed for low-power applications. It supports the i.MX 8ULP ultra-low-power processor family, providing four high-efficiency 1-A step-down regulators, four VLDOs, one SVVS LDO, and four 150-mΩ load switches, all enclosed in a 7 × 6-bump-array, 0.4-mm-pitch WSCSP42 package.
The four buck regulators offer an ultra-low IQ of 1.5 μA at low-power mode and 5.5 μA at normal mode, while the four LDOs achieve an IQ of 300 nA. Two buck regulators support smart DVFS, enabling the PMIC to always set the right voltage on the processors it is powering. This feature, enabled through specific pins of the PMIC, minimizes the overall power consumption and increases energy efficiency.
Energy harvesting
The latest generation of PMICs has introduced the possibility of obtaining energy from various sources such as light, heat, vibrations, and radio waves, opening up new scenarios for systems used in IIoT and industrial environments. This feature is particularly important in IIoT and wireless devices, where maintaining a continuous power source for long periods of time is a significant challenge.
Nexperia’s NEH71x0 low-power PMIC (Figure 2) is a full power management solution integrating advanced energy-harvesting features. Harvesting energy from ambient power sources, such as indoor and outdoor PV cells, kinetic (movement and vibrations), piezo, or a temperature gradient, this device allows designers to extend battery life or recharge batteries and supercapacitors.
With an input power range from 15 μW to 100 mW, the PMIC achieves an efficiency up to 95%, features an advanced maximum power-point tracking block that uses a proprietary algorithm to deliver the highest output to the storage element, and integrates an LDO/load switch with a configurable output voltage from 1.2 V to 3.6 V.
Reducing the bill of materials and PCB space, the NEH71x0 eliminates the need for an external inductor, offering a compact footprint in a 4 × 4-mm QFN28 package. Typical applications include remote controls, smart tags, asset trackers, industrial sensors, environmental monitors, tire pressure monitors, and any other IIoT application.
Figure 2: Nexperia’s NEH71x0 energy-harvesting PMIC can convert energy with an efficiency of up to 95%. (Source: Nexperia)
PMICs for AI and AI in PMICs
To meet the growing demand for power in the industrial sector and data centers, Microchip Technology Inc. has introduced the MCP16701, a PMIC specifically designed to power high-performance logic devices, such as Microchip’s PIC64GX microprocessors and PolarFire FPGAs. The device integrates eight 1.5-A buck converters that can be connected in parallel, four 300-mA LDOs, and a controller for driving external MOSFETs.
The MCP16701 offers a small footprint of 8 × 8 mm in a VQFN package (Figure 3), enabling a 48% reduction in PCB area and a 60% reduction in the number of components compared with a discrete solution. All converters, which can be connected in parallel to achieve a higher output current, use the same inductor value.
A unique feature of this PMIC is its ability to dynamically adjust the output voltage on all converters in steps of 12.5 mV or 25 mV, with an accuracy of ±0.8% over the temperature range. This flexibility allows designers to precisely adjust the voltage supplied to loads, optimizing energy efficiency and system performance.
Figure 3: Microchip’s MCP16701 enables engineers to fine-tune power delivery, improving system efficiency and performance. (Source: Microchip Technology Inc.)
As in many areas of modern electronics, AI techniques are also being studied and introduced in the power management sector. This area of study is referred to as cognitive power management. PMICs, for example, can use machine-learning techniques to predict load evolution over time, adjusting the output voltage value in real time.
Tools such as PMIC.AI, developed by AnDAPT, use AI to optimize PMIC architecture and component selection, while Alif Semiconductor’s autonomous intelligent power management (aiPM) tool dynamically manages power based on AI workloads. These solutions enable voltage scaling, increasing system efficiency and extending battery life.
Basic design equations for three precision current sources

A frequently encountered category of analog system component is the precision current source. Many good designs are available, but concise and simple arithmetic for choosing the component values necessary to tailor them to specific applications isn’t always provided. I guess some designers feel such tedious details are just too trivially obvious to merit mentioning. But I sometimes don’t feel that way.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Here are some examples I think some folks might find useful. I hope they won’t feel too terribly obvious, trivial, or tedious.
The circuit in Figure 1 is versatile and capable of high performance.
Figure 1 A simple high-accuracy current source that can source current with better than 1% accuracy.
With suitable component choices, this circuit can source current with better than 1% accuracy, with Q1 drain currents ranging from <1 mA to >10 A, while working with power supply voltages (Vps) from <5 V to >100 V.
Here are some helpful hints for resistor values, resistor wattages, and safety zener D1. First note
- Vps = power supply voltage
- R1(W), Q1(W), and R2(W) = respective component power dissipation
- Id = Q1 drain current in amps
Adequate heat sinking is assumed for Q1 to handle Q1(W). Another thing assumed is:
Vps > Q1 (Vgs ON voltage) + 1.24 + R1*100µA
The design equations are as follows:
- R1 = (Vps – 1.24)/1mA
- R1(W) = R1/1E6
- Q1(W) = (Vps – Vload – 1.24)*Id
- R2 = 1.24/Id
- R2(W) = 1.24 Id
- R2 precision 1% or better at the operating temperature produced by the R2(W) dissipation calculated above
- D1 is needed only if Vps > 15V
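For convenience, these equations drop straight into a few lines of C. The sketch below uses illustrative input values (not taken from the article) and simply evaluates the formulas above:

```c
#include <stdio.h>

/* Sketch: evaluate the Figure 1 design equations for illustrative inputs */
int main(void)
{
    double Vps   = 24.0;  /* example supply voltage, V (illustrative) */
    double Vload = 12.0;  /* example load voltage, V (illustrative)   */
    double Id    = 2.0;   /* example Q1 drain current, A              */

    double R1   = (Vps - 1.24) / 1e-3;        /* sets the ~1-mA bias     */
    double R1_W = R1 / 1e6;                   /* (1 mA)^2 * R1           */
    double R2   = 1.24 / Id;                  /* current-sense resistor  */
    double R2_W = 1.24 * Id;                  /* dissipation in R2       */
    double Q1_W = (Vps - Vload - 1.24) * Id;  /* dissipation in Q1       */

    printf("R1 = %.0f ohm, dissipating %.3f W\n", R1, R1_W);
    printf("R2 = %.2f ohm, dissipating %.2f W\n", R2, R2_W);
    printf("Q1 dissipates %.2f W (needs a heat sink)\n", Q1_W);
    printf("D1 required: %s\n", (Vps > 15.0) ? "yes" : "no");
    return 0;
}
```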
Figure 2 substitutes an N-channel MOSFET for Figure 1’s Q1 and an anode-referenced 431 regulator chip in place of the cathode-referenced 4041 to produce a very similar current sink. Its design equations are identical.

Figure 2 A simple, high-accuracy current sink uses identical design math.
Okay, okay, I can almost hear the (very reasonable) objection that, for these simple circuits, the design math really was pretty much tedious, trivial, and obvious.
So I’ll finish with a rather less obvious and more creative example from frequent contributor Christopher Paul’s DI “Precision, voltage-compliant current source.”
Taking parts parameters from Christopher Paul’s Figure 3, we can define:
- Vs = chosen voltage across the R3R4 divider
- V5 = voltage across R5
- Id = chosen application-specific M1 drain current
Then:
- Vs = 5V
- V5 = 5V – 0.65V = 4.35V
- R5 = 4.35V/150µA = 30kΩ
- I4 = Id – 290µA
- R3 = 1.24/I4
- R4 = (Vs – 1.24)/I4 = 3.76/I4
- R3(W) = 1.24 I4
- R4(W) = 3.76 I4
- M1(W) = Id(Vs – Vd)
For example, if Id = 50 mA and Vps = 15 V, then:
- I4 = 49.7 mA
- R5 = 30 kΩ
- R4 = 75.7 Ω
- R3 = 25.2 Ω
- R3(W) = 1.24 I4 = 100 mW
- R4(W) = 3.76 I4 = 200 mW
- M1(W) = 500 mW
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A precision, voltage-compliant current source
- LM4041 voltage regulator impersonates precision current source
- Simple, precise, bi-directional current source
- A high-performance current source
- Precision programmable current sink
How to limit TCP/IP RAM usage on STM32 microcontrollers

The TCP/IP functionality of a connected device uses dynamic RAM allocation because of the unpredictable nature of network behavior. For example, if a device serves a web dashboard, we cannot control how many clients might connect at the same time. Likewise, if a device communicates with a cloud server, we may not know in advance how large the exchanged messages will be.
Therefore, limiting the amount of RAM used by the TCP/IP stack improves the device’s security and reliability, ensuring it remains responsive and does not crash due to insufficient memory.
Microcontroller RAM overview
It’s common that on microcontrollers, available memory resides in several non-contiguous regions. Each of these regions can have different cache characteristics, performance levels, or power properties, and certain peripheral controllers may only support DMA operations to specific memory areas.
Let’s take the STM32H723ZG microcontroller as an example. Its datasheet, in section 3.3.2, defines embedded SRAM regions:
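The datasheet table itself isn’t reproduced here, but in broad strokes (verify the figures against section 3.3.2) the on-chip RAM is spread across three power domains plus the tightly coupled memories: roughly 320 KB of AXI SRAM in domain D1, two 16-KB banks (SRAM1 and SRAM2) in domain D2, a 16-KB SRAM4 in domain D3, and 128 KB of DTCM plus 64 KB of ITCM coupled directly to the Cortex-M7 core.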

Here is an example linker script snippet for this microcontroller, generated by CubeMX:
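A representative MEMORY block, in the style CubeMX emits for this part (region names and sizes sketched from the layout above; check your generated script), looks like this:

```
/* Sketch of a CubeMX-style MEMORY block for the STM32H723ZG */
MEMORY
{
  ITCMRAM (xrw) : ORIGIN = 0x00000000, LENGTH = 64K
  DTCMRAM (xrw) : ORIGIN = 0x20000000, LENGTH = 128K
  RAM_D1  (xrw) : ORIGIN = 0x24000000, LENGTH = 320K
  RAM_D2  (xrw) : ORIGIN = 0x30000000, LENGTH = 32K
  RAM_D3  (xrw) : ORIGIN = 0x38000000, LENGTH = 16K
  FLASH   (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K
}
```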

Ethernet DMA memory
We can clearly see that RAM is split into several regions. The STM32H723ZG device includes a built-in Ethernet MAC controller that uses DMA for its operation. It’s important to note that the DMA controller is in domain D2, meaning it cannot directly access memory in domain D1. Therefore, the linker script and source code must ensure that Ethernet DMA data structures are placed in domain D2; for example, in RAM_D2.
To achieve this, first define a section in the linker script and place it in the RAM_D2 region:
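A sketch of such a section, placed inside the script’s SECTIONS block (the section name .eth_dma is illustrative):

```
/* Collect Ethernet DMA descriptors and buffers into D2 SRAM */
.eth_dma (NOLOAD) :
{
  . = ALIGN(32);   /* 32-byte alignment for the M7 cache line */
  *(.eth_dma)
  *(.eth_dma*)
  . = ALIGN(32);
} >RAM_D2
```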

Second, the Ethernet driver source code must place the corresponding data structures in that section. It may look like this:
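A sketch of that placement, using a GCC section attribute (type and macro names follow the STM32H7 HAL; the section name must match the linker script):

```c
#include "stm32h7xx_hal.h"

/* Sketch: pin the Ethernet DMA descriptor rings and RX buffers
   into the .eth_dma section, which the linker maps to RAM_D2. */
__attribute__((section(".eth_dma"), aligned(32)))
static ETH_DMADescTypeDef DMARxDscrTab[ETH_RX_DESC_CNT];  /* RX descriptors */

__attribute__((section(".eth_dma"), aligned(32)))
static ETH_DMADescTypeDef DMATxDscrTab[ETH_TX_DESC_CNT];  /* TX descriptors */

__attribute__((section(".eth_dma"), aligned(32)))
static uint8_t RxBuffs[ETH_RX_DESC_CNT][ETH_MAX_PACKET_SIZE]; /* RX buffers */
```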

Heap memory
The next important part is the microcontroller’s heap memory. The standard C library provides two basic functions for dynamic memory allocation:
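That is, the familiar pair declared in <stdlib.h>:

```c
#include <stdlib.h>

void *malloc(size_t size);  /* allocate size bytes from the heap     */
void  free(void *ptr);      /* release a block previously allocated  */
```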

Typically, ARM-based microcontroller SDKs are shipped with the ARM GCC compiler, which includes the Newlib C library. This library, like many others, has a concept of so-called “syscalls”: low-level routines that the user can override and that are called by the standard C functions. In our case, the malloc() and free() standard C routines call the _sbrk() syscall, which firmware code can override.
It’s typically done in the syscalls.c or sysmem.c file and may look like this:
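A representative implementation, closely following what CubeMX generates (linker symbol names can differ between projects):

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

extern uint8_t  _end;            /* end of .bss: first heap address   */
extern uint8_t  _estack;         /* top of RAM: initial stack pointer */
extern uint32_t _Min_Stack_Size; /* room reserved for the main stack  */

void *_sbrk(ptrdiff_t incr)
{
    static uint8_t *heap_end = NULL;
    const uint8_t *limit = (uint8_t *)&_estack - (uint32_t)&_Min_Stack_Size;

    if (heap_end == NULL) heap_end = &_end;  /* first call: heap start */

    if (heap_end + incr > limit) {           /* would collide with stack */
        errno = ENOMEM;
        return (void *)-1;
    }

    uint8_t *prev = heap_end;                /* grow the heap and return */
    heap_end += incr;                        /* the previous break       */
    return prev;
}
```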

As we can see, _sbrk() operates on a single memory region bounded by linker symbols: the heap grows upward from the end of .bss toward a limit reserved for the stack.
That means that such an implementation cannot spread the heap across several RAM regions. There are more advanced implementations, like FreeRTOS’s heap_5.c, which can use multiple RAM regions and provides the pvPortMalloc() and pvPortFree() functions.
In any case, standard C functions malloc() and free() provide heap memory as a shared resource. If several subsystems in a device’s firmware use dynamic memory and their memory usage is not limited by code, any of them can potentially exhaust the available memory. This can leave the device in an out-of-memory state, which typically causes it to stop operating.
Therefore, the solution is to have every subsystem that uses dynamic memory allocation operate within a bounded memory pool. This approach protects the entire device from running out of memory.
Memory pools
The idea behind a memory pool is to split a single shared heap—with a single malloc and free—into multiple “heaps” or memory pools, each with its own malloc and free. The pseudo-code might look like this:
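A minimal sketch of such an API (names and bookkeeping are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch: one bounded pool per firmware subsystem */
struct pool {
    uint8_t *mem;   /* backing storage, fixed at creation    */
    size_t   size;  /* total capacity: the hard memory limit */
    /* ... allocator bookkeeping (free list, etc.) ...       */
};

void  pool_init(struct pool *p, void *mem, size_t size);
void *pool_malloc(struct pool *p, size_t n);  /* NULL when the pool is full */
void  pool_free(struct pool *p, void *ptr);

/* Each subsystem draws only from its own arena: */
static uint8_t     net_arena[50 * 1024];
static struct pool net_pool;  /* all TCP/IP allocations come from here */
```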

The next step is to make each firmware subsystem use its own memory pool. This can be achieved by creating a separate memory pool for each subsystem and using the pool’s malloc and free functions instead of the standard ones.
In the case of a TCP/IP stack, this would require all parts of the networking code—driver, HTTP/MQTT library, TLS stack, and application code—to use a dedicated memory pool. This can be tedious to implement manually.
RTOS memory pool API
Some RTOSes provide a memory pool API. For example, Zephyr provides memory heaps:
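A minimal sketch of a dedicated networking heap using Zephyr’s k_heap API:

```c
#include <zephyr/kernel.h>

/* Sketch: a bounded 16-KB heap reserved for the networking subsystem */
K_HEAP_DEFINE(net_heap, 16 * 1024);

void net_buf_demo(void)
{
    /* Returns NULL once the 16-KB pool is exhausted, instead of
       starving the rest of the system. */
    void *buf = k_heap_alloc(&net_heap, 512, K_NO_WAIT);
    if (buf != NULL) {
        /* ... use buf ... */
        k_heap_free(&net_heap, buf);
    }
}
```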

The other example of an RTOS that provides memory pools is ThreadX:
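A comparable sketch with ThreadX byte pools:

```c
#include "tx_api.h"

/* Sketch: a bounded ThreadX byte pool for networking allocations */
static UCHAR        net_arena[16 * 1024];
static TX_BYTE_POOL net_pool;

void net_pool_demo(void)
{
    void *buf;

    tx_byte_pool_create(&net_pool, "net pool", net_arena, sizeof(net_arena));

    /* TX_NO_WAIT: fail immediately instead of blocking when exhausted */
    if (tx_byte_allocate(&net_pool, &buf, 512, TX_NO_WAIT) == TX_SUCCESS) {
        /* ... use buf ... */
        tx_byte_release(buf);
    }
}
```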

Using external allocator
Another alternative is to use an external allocator. There are many implementations available. Here are some notable ones:
- umm_malloc is specifically designed to work with the ARM7 embedded processor, but it should work on many other 32-bit processors, as well as 16- and 8-bit processors.
- o1heap is a highly deterministic constant-complexity memory allocator designed for hard real-time high-integrity embedded systems. The name stands for O(1) heap.
Example: Mongoose and O1Heap
The Mongoose embedded TCP/IP stack makes it easy to limit its memory usage, because Mongoose uses its own functions mg_calloc() and mg_free() to allocate and release memory. The default implementation uses the C standard library functions calloc() and free(), but Mongoose allows users to override these functions with their own implementations.
We can preallocate memory for Mongoose at firmware startup, for example 50 KB, and use the o1heap library to manage that preallocated block, implementing mg_calloc() and mg_free() on top of it. Here are the exact steps:
- Fetch o1heap.c and o1heap.h into your source tree
- Add o1heap.c to the list of your source files
- Preallocate a memory chunk at firmware startup
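A sketch of the preallocation step (arena size and names are illustrative; o1heap requires the arena to be aligned to O1HEAP_ALIGNMENT):

```c
#include <stdint.h>
#include "o1heap.h"

/* Sketch: a ~50-KB arena reserved for the TCP/IP stack */
static uint8_t net_arena[50 * 1024]
    __attribute__((aligned(O1HEAP_ALIGNMENT)));
static O1HeapInstance *net_heap;

void net_mem_init(void)
{
    net_heap = o1heapInit(net_arena, sizeof(net_arena));
    /* net_heap == NULL would indicate an invalid arena or size */
}
```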

- Implement mg_calloc() and mg_free() using o1heap and the preallocated memory chunk
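And a sketch of the override itself (the signatures below are assumed from Mongoose’s custom-allocator mechanism; check mongoose.h for the exact names and build flags in your version):

```c
#include <string.h>
#include "mongoose.h"
#include "o1heap.h"

extern O1HeapInstance *net_heap;  /* initialized in net_mem_init() */

/* Sketch: route every Mongoose allocation through the bounded arena */
void *mg_calloc(size_t count, size_t size)
{
    size_t total = count * size;         /* sketch: no overflow check    */
    void *p = o1heapAllocate(net_heap, total);
    if (p != NULL) memset(p, 0, total);  /* calloc() zero-fill semantics */
    return p;                            /* NULL once the arena is full  */
}

void mg_free(void *ptr)
{
    if (ptr != NULL) o1heapFree(net_heap, ptr);
}
```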

You can see the full implementation procedure in the video linked at the end of this article.
Avoid memory exhaustion
This article provides information on the following design aspects:
- Understand STM32’s complex RAM layout
- Ensure Ethernet DMA buffers reside in accessible memory
- Avoid memory exhaustion by using bounded memory pools
- Integrate the o1heap allocator with Mongoose to enforce TCP/IP RAM limits
By isolating the network stack’s memory usage, you make your firmware more stable, deterministic, and secure, especially in real-time or resource-constrained systems.
If you would like to see a practical application of these principles, see the complete tutorial, including a video with a real-world example, which describes how RAM limiting is implemented in practice using the Mongoose embedded TCP/IP stack. This video tutorial provides a step-by-step guide on how to use Mongoose Wizard to restrict TCP/IP networking on a microcontroller to a preallocated memory pool.
As part of this tutorial, a real-time web dashboard is created to show memory usage in real time. The demo uses an STM32 Nucleo-F756ZG board with built-in Ethernet, but the same approach works seamlessly on other architectures too.
Sergey Lyubka is the co-founder and technical director of Cesanta Software Ltd. He is known as the author of the open-source Mongoose Embedded Web Server and Networking Library, which has been on the market since 2004 and has over 12K stars on GitHub. Sergey tackles the issue of making embedded networking simpler to access for all developers.
Related Content
- Developing Energy-Efficient Embedded Systems
- Can MRAM Get EU Back in the Memory Game?
- An MCU test chip embeds 10.8 Mbit STT-MRAM memory
- How MCU memory dictates zone and domain ECU architectures
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
Predictive maintenance at the heart of Industry 4.0

In the era of Industry 4.0, manufacturing is no longer defined solely by mechanical precision; it’s now driven by data, connectivity, and intelligence. Yet downtime remains one of the most persistent threats to productivity. When a machine unexpectedly fails, the impact ripples across the entire digital supply chain: Production lines stop, delivery schedules are missed, and teams scramble to diagnose the issue. For connected factories running lean operations, even a short interruption can disrupt synchronized workflows and compromise overall efficiency.
For decades, scheduled maintenance has been the industry’s primary safeguard against unplanned downtime. Maintenance was rarely data-driven but rather scheduled at rigid intervals based on estimates (in essence, educated guesses). Now that manufacturing is data-driven, maintenance should be data-driven as well.
Time-based, or ISO-guided, maintenance can’t fully account for the complexity of today’s connected equipment because machine behaviors vary by environment, workload, and process context. The timing is almost never precisely correct. This approach risks failing to detect problems that flare up before scheduled maintenance, often leading to unexpected downtime.
In addition, scheduled maintenance can never account for faulty replacement parts or unexpected environmental impacts. Performing maintenance before it is necessary is inefficient as well, leading to unnecessary downtime, expenses, and resource allocations. Maintenance should be performed only when the data says maintenance is necessary and not before; predictive maintenance ensures that it will.
To realize the promise of smart manufacturing, maintenance must evolve from a reactive (or static) task into an intelligent, autonomous capability, which is where Industry 4.0 becomes extremely important.
From scheduled service to smart systems
Industry 4.0 is defined by convergence: the merging of physical assets with digital intelligence. Predictive maintenance represents this convergence in action. Moving beyond condition-based monitoring, AI-enabled predictive maintenance systems use active AI models and continuous machine learning (ML) to recognize early indicators of equipment failure and alert stakeholders before those failures trigger costly downtime.
The most advanced implementations deploy edge AI directly to the individual asset on the factory floor. Rather than sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated. This not only reduces latency and bandwidth use but also ensures real-time insight and operational resilience, even in low-connectivity environments. In an Industry 4.0 context, edge intelligence is critical for achieving the speed, autonomy, and adaptability that smart factories demand.
AI-enabled predictive maintenance systems use AI models and continuous ML to detect early indicators of equipment failure before they trigger costly downtime. (Source: Adobe AI Generated)
Edge intelligence in Industry 4.0
Traditional monitoring solutions often struggle to keep pace with the volume and velocity of modern industrial data. Edge AI addresses this by embedding trained ML models directly into sensors and devices. These models continuously analyze vibration, temperature, and motion signals, identifying patterns that precede failure, all without relying on cloud connectivity.
Because the AI operates locally, insights are delivered instantly, enabling a near-zero-latency response. Over time, the models adapt and improve, distinguishing between harmless deviations and genuine fault signatures. This self-learning capability not only reduces false alarms but also provides precise fault localization, guiding maintenance teams directly to the source of a potential issue. The result is a smarter, more autonomous maintenance ecosystem aligned with Industry 4.0 principles of self-optimization and continuous learning.
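To make the pattern concrete, here is a toy sketch of the kind of check such a device might run locally. Real edge-AI sensors execute trained ML models; the window size and threshold below are purely hypothetical, and the sketch only illustrates the “analyze locally, alert locally” idea.

```c
#include <math.h>
#include <stddef.h>

/* Toy on-device vibration check: compare the RMS of a sample window
 * against a learned baseline and flag large deviations. The 1.5x
 * threshold and 256-sample window are hypothetical placeholders. */
#define WINDOW 256U

int vibration_anomaly(const float samples[WINDOW], float baseline_rms) {
  float sum_sq = 0.0f;
  for (size_t i = 0; i < WINDOW; i++) sum_sq += samples[i] * samples[i];
  float rms = sqrtf(sum_sq / (float) WINDOW);
  return rms > 1.5f * baseline_rms; /* non-zero means "raise an alert" */
}
```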
Building a future-ready predictive maintenance framework
To be truly future-ready for Industry 4.0, a predictive maintenance platform must seamlessly integrate advanced intelligence with intuitive usability. It should offer effortless deployment, compatibility with existing infrastructure, and scalability across diverse equipment and facilities. Features such as plug-and-play setup and automated model deployment minimize the load on IT and operations teams. Customizable sensitivity settings and severity-based analytics empower tailored alerting aligned with the criticality of each asset.
Scalability is equally vital. As manufacturers add or reconfigure production assets, predictive maintenance systems must seamlessly adapt, transferring models across machines, lines, or even entire facilities. Hardware-agnostic solutions offer the flexibility required for evolving, multivendor industrial environments. The goal is not just predictive accuracy but a networked intelligence layer that connects all assets under a unified maintenance framework.
Real-world impact across smart industries
Predictive maintenance is a cornerstone of digital transformation across manufacturing, energy, and infrastructure. In smart factories, predictive maintenance monitors robotic arms, elevators, lift motors, conveyors, CNC machines, and more, targeting the most critical assets in connected production lines. In energy and utilities, it safeguards turbines, transformers, and storage systems, preventing performance degradation and ensuring safety. In smart buildings, it monitors HVAC systems and elevators, assets that are often hard to monitor and whose unexpected downtime causes considerable discomfort and lost productivity, providing advance notice of needed maintenance or replacement.
The diversity of these applications underscores an Industry 4.0 truth: Interoperability and adaptability are as important as intelligence. Predictive maintenance must be able to integrate into any operational environment, providing actionable insights regardless of equipment age, vendor, or data format.
Intelligence at the industrial edge
The edgeRX platform from TDK SensEI, for example, embodies the next generation of Industry 4.0 machine-health solutions. Combining industrial-grade sensors, gateways, dashboards, and cloud interfaces into a unified system, edgeRX delivers immediate visibility into machine-health conditions. Deployed in minutes, it immediately begins collecting data to build ML models in the cloud, which are then deployed back to the sensor device for real-time inference at the edge.
By processing data directly on-device, edgeRX eliminates the latency and energy costs of cloud-based analytics. Its ruggedized, IP67-rated hardware and long-life batteries make it ideal for demanding industrial environments. Most importantly, edgeRX learns continuously from each machine’s unique operational profile, providing precise, actionable insights that support smarter, faster decision-making.
TDK SensEI’s edgeRX advanced machine-health-monitoring platform (Source: TDK SensEI)
The road to autonomous maintenance
As Industry 4.0 continues to redefine manufacturing, predictive maintenance is emerging as a key enabler of self-healing, data-driven operations. EdgeRX transforms maintenance from a scheduled obligation into a strategic function—one that is integrated, adaptive, and intelligent.
Manufacturers evaluating their digital strategies should ask:
- Can we remotely and simultaneously monitor, and alert on, all of our assets?
- Are our automated systems capturing early, subtle indicators of failure?
- Can our current solutions scale with our operations?
- Are insights available in real time, where decisions are made?
If the answer is no, it’s time to rethink what maintenance means in the context of Industry 4.0. Predictive, edge-enabled AI solutions don’t just prevent downtime; they drive the autonomy, efficiency, and continuous improvement that define the next industrial revolution.
The post Predictive maintenance at the heart of Industry 4.0 appeared first on EDN.
A non-finicky, mass-producible audio frequency white noise generator

This project made me feel a kind of kinship with Diogenes, although I was searching for the item described in the title rather than for an honest man.
Figure 1 “Diogenes Looking for an Honest Man,” a painting attributed to Johann Heinrich Wilhelm Tischbein (1751-1829). The author of this DI has a more modest goal.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I wanted a design that did not require the evaluation and selection of one out of a group of components. I’d tolerate (though not welcome) the use of frequency compensation and even an automatic gain control (AGC) to achieve predictable performance characteristics. Let’s call my desired design “reliably repeatable.”
Standard MLS digital circuit
Initially, I thought none of the listed accommodations would be necessary, and that a simple, well-known digital circuit—a maximal length sequence (MLS) generator [1]—would fit the bill. This circuit produces a pseudorandom sequence whose spectral characteristics are white. A general example is shown in Figure 2.

Figure 2 The general form of an MLS generator. Reference [1] tabulates sets of 2 to 5 specific taps for register lengths from N = 2 to 32 that produce repeating sequences of length 2^N − 1. Register initialization must include at least one non-zero value. The author first listened to a version using a single exclusive-OR gate with N = 31 registers, in which the outputs of only the 28th and 31st registers were sampled.
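For illustration, the generator in the Figure 2 caption can be expressed in a few lines of C (the hardware version ran as AVR assembly on the ATtiny13A; this sketch is a PC-side equivalent):

```c
#include <stdint.h>

/* 31-stage maximal-length sequence generator with taps at stages 28
 * and 31 (polynomial x^31 + x^28 + 1), needing just one XOR. Stage k
 * is held in bit k-1; the state must start non-zero. The output bit
 * stream repeats only after 2^31 - 1 clocks. */
static uint32_t mls_state = 1u; /* any non-zero initialization works */

int mls_next_bit(void) {
  int fb = (int) (((mls_state >> 27) ^ (mls_state >> 30)) & 1u); /* stage 28 XOR stage 31 */
  mls_state = ((mls_state << 1) | (uint32_t) fb) & 0x7FFFFFFFu;  /* feedback into stage 1 */
  return fb; /* one output bit per clock (a 1.35-us period in this build) */
}
```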
It was simple to code up the version described in the Figure 2 caption on an ATtiny13A microcontroller and obtain a 1.35-µs clock period. Of course, validation is in the listening. And indeed, the predominant sound is the “shush” of white noise.
But there are also audible pops, clicks, and other unwanted artifacts in the background. I had a friend with hearing better than mine listen to confirm my audition’s disappointing conclusion. And so, I picked up my lantern and moved on to the next candidate.
Reverse-biased NPN
I was intrigued by reverse-biasing a transistor’s base-emitter junction with the collector floating (see Figure 3).

Figure 3 Jig for testing the noise characteristics of NPN transistors with reverse-biased base-emitter junctions.
I tested ten 2N3904 transistors with R values of 1 kΩ, 10 kΩ, 100 kΩ, and 1 MΩ. Both DC voltages and frequency sweeps (of voltage spectral densities in units of dBVrms/√Hz) were collected.
It was evident that as R decreased, average noise decreased and DC voltages rose slightly, remaining in the range between 7.2 and 8.3 volts. This gave me hope that a simple AGC scheme in which the transistor bias current was varied might satisfy my requirements.
Alas, it was not to be. Figure 4a, Figure 4b, Figure 4c, and Figure 4d show spectral noise in the lower frequency range. (Additional filtering of the 18-V supply had no effect on the 60 Hz power line fundamental or harmonics—these were being picked up from my test environment. The 60-Hz fundamental’s level was about 10 µV rms.)
Figure 4a Note the power line harmonics “hum” problem that the “quiet orange” transistor in particular introduces.

Figure 4b Biasing the “orange” transistor at a lower current raised the noise and hid the power line harmonics, but not the fundamental.

Figure 4c As the bias current is reduced, some but not all transistors’ noises mask the 60 Hz fundamental.

Figure 4d Regardless of whether the power line noise can be masked or eliminated, it’s clear for all resistor R values that there is no consistent shape to the frequency response.
I’ve tried other transistors with similar results. Unable to depend on a specific frequency response shape, the reverse-biased base-emitter junction is not a suitable signal source for a reliably repeatable design. It’s time to pick up the lantern again and continue the search.
A shunt regulator
Several datasheets of components in the ‘431 family, and the TLVH431B’s in particular, include a figure showing the device’s equivalent input noise. See Figure 5.
Figure 5 The equivalent input noise and test circuit for the TLVH431B (Figure 5-9 in the part’s datasheet). Source: Texas Instruments
The almost 3 dB of rise in noise from 300 Hz down to 10 Hz could be compensated for if it were repeatable from device to device. I looked at the cathodes of ten devices using the test jig of Figure 6. The spectral responses are presented in Figure 7.

Figure 6 The test jig for TLVH431B spectral noise. There was no significant difference in the results shown in Figure 7 when values of 1 kΩ and 10 kΩ were used for R; 100-kΩ and 1-MΩ resistances supplied insufficient current for the devices’ operation.

Figure 7 The TLVH431B spectral noise, 10 samples with the same date code.
Although the TLVH431B is a better choice than the 2N3904, there are still variations in its noise levels, necessitating some sort of AGC. And the power line signals were still present, with no mitigation available from different values of R. The tested parts all have the same date code, and there are no numerical specs available for limits on noise amplitudes or frequency responses.
Who knows how other date codes would behave? I certainly can’t claim from the data that this component could be part of a “reliably repeatable” design as I defined the term. But you know what? Carrying this lantern around is getting to be pretty annoying.
Xorshift32
I kept thinking that there had to be a digital solution to this problem, even if it couldn’t be the one that produces an MLS. I did some research, and the option of what is called “xorshift” came up, specifically xorshift32 [2].
Xorshift32 starts by initializing a 32-bit state variable to a non-zero value. A copy of this variable is made and shifted left by 13 bits, discarding the 13 left-most bits and filling in 13 zeros from the right.
The original and the shifted copy are exclusive-OR’d bit for bit, and the result is stored back in the original variable. A copy of this result is then shifted right by 17 bits, discarding the 17 right-most bits and filling in 17 zeros from the left. The shifted copy is again exclusive-OR’d bit for bit with the updated original and stored in that variable.
Finally, a copy of the latest result is made, shifted left by 5 bits, exclusive-OR’d with the latest original, and stored back once more. As this three-step process is repeated, it generates a sequence of 2^32 − 1 unique non-zero 32-bit integers.
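In C, the three-step process above is the classic Marsaglia xorshift32; here is a reference sketch (the deployed version was AVR assembly):

```c
#include <stdint.h>

/* Marsaglia xorshift32: exactly the three shift-and-XOR steps
 * described above. The state must be seeded non-zero; the stream of
 * 32-bit values repeats only after 2^32 - 1 iterations. */
static uint32_t xs_state = 1u; /* any non-zero seed */

uint32_t xorshift32(void) {
  uint32_t x = xs_state;
  x ^= x << 13; /* step 1: XOR with copy shifted left by 13 */
  x ^= x >> 17; /* step 2: XOR with copy shifted right by 17 */
  x ^= x << 5;  /* step 3: XOR with copy shifted left by 5 */
  return xs_state = x;
}
```

Streaming the least-significant bit of each result out a port pin, as described next, turns the integer sequence into the audio-band noise signal.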
This algorithm was coded into an ATtiny13A microcontroller running at a clock speed of 9.6 MHz, yielding a bit shift period of 5.8 µs. (Assembly source code and hex file are available upon request.) The least significant register bit was routed to bit 0 of the device’s PORTB (pin 5 of the eight-pin PDIP package).
This pin was AC-coupled to a power amplifier driving a Polk Audio bookshelf speaker. My friend and I agreed that all we heard was white noise; the pops and clicks of the MLS sequence were absent.
Figure 8 and Figure 9 display frequency sweeps of the voltage spectral densities (per √Hz) of the MLS and xorshift sequences.

Figure 8 Noise spectral densities from 4 to 1550 Hz of the two auditioned digital sequences, produced with 5-V-powered ATtiny13A microcontrollers.

Figure 9 Noise spectral densities from 63 to 25,000 Hz of the two auditioned digital sequences, produced with 5-V-powered ATtiny13A microcontrollers.
There are a few takeaways from Figures 8 and 9.
The white noises of the sequences are at high enough levels to mask my testing environment’s power line fundamental and harmonics that are apparent when evaluating the 2N3904 and the TLVH431B.
The difference in levels of the two digital sequences is due to the higher clock rate of the MLS, which spreads the same total energy as the xorshift over a wider bandwidth and results in a lower energy density within any given band of frequencies in the audible range. (With bit periods of 1.35 µs versus 5.8 µs, the expected density difference is roughly 10 × log10(5.8/1.35) ≈ 6 dB.)
Finally, the xorshift32 has a dip of perhaps 0.1 dBVrms per root Hz at 25 kHz. If the ATtiny13A were clocked from an external 20-MHz source, even this small response dip would disappear.
Audibly pure white noise source
An audibly pure white noise source for the band from subsonic frequencies to 20 kHz can be had by implementing the xorshift32 algorithm on an inexpensive microcontroller.
The result is reliably repeatable, precluding the need to select an optimal component from a group. The voltage over the audio range is:
10^(−39 dBVrms/20)/√Hz × √(20,000 Hz) ≈ 11.2 mV/√Hz × 141.4 √Hz,
which evaluates to a 1.6-Vrms signal. This method has none of the disadvantages of the analog noise sources investigated: no low, uncertain signal levels that would necessitate a large amount of gain and an AGC; no frequency shaping below 300 Hz or elsewhere; and no environmental power line noise at levels comparable to the intentional noise.
I can finally put that darn lantern down. I wonder how Diogenes made out.
Related Content
- Earplugs ready? Let’s make some noise!
- A Portable White Noise Generator Circuit
- Pocket-Size White Noise Generator for Quickly Testing Circuit Signal Response
- Simple White Noise Generator
- White noise source flat from 1Hz to 100kHz
References
- https://liquidsdr.org/doc/msequence/. In the table, the exponents of the polynomials in x are the outputs of the shift registers numbered so that the first (input) register is assigned the number 1.
- https://en.wikipedia.org/wiki/Xorshift
The post A non-finicky, mass-producible audio frequency white noise generator appeared first on EDN.
Compact DIN-rail power supplies deliver high efficiency

TDK Corp. adds a new single-phase series of DIN-rail-mount power supplies to the TDK-Lambda range of products for industrial and automation applications. The cost-effective D1SE entry-range series accepts either AC or DC input and is rated for continuous operation at 120 W, 240 W, or 480 W with a 24-V output. These power supplies deliver an efficiency of up to 95.1%, reducing energy consumption and internal losses, which lowers internal component temperatures and improves long-term product reliability.
(Source: TDK Corp.)
Thanks to the push-in wire terminations, the D1SE series can be quickly mounted, reducing installation time in a variety of control cabinets, machinery, and industrial production systems. In addition to a conventional 100- to 240-VAC nominal input, the D1SE is safety certified for operation from a 93- to 300-VDC supply. The DC input, added in response to growing customer demand, addresses applications where the energy supply comes from a common DC bus voltage or a battery.
The 120-W rated model can deliver a boost power of 156 W for 80 seconds; the 240-W rated model offers a boost of 312 W for 10 seconds; and the 480-W rated model provides a boost of 552 W for an extended 200 seconds. The 24-V output can be adjusted from 22.5 V to 29 V to allow compensation for cable drops, redundancy modules, or setting to non-standard output voltages.
All three power supplies are available with or without a DC-OK contact. For applications in challenging environments, a printed-circuit-board coating option is available, and all models feature a high-quality electrolytic capacitor which extends lifetime, according to TDK.
The DIN-rail-mount power supplies are housed in a rugged metal enclosure with a width of 38 mm for the 120-W models, 44 mm for the 240 W, and 60 mm for the 480 W. The narrow design saves space on the DIN rail for other components, the company said.
Other key specs include input-to-output isolation of 5,000 VDC, input-to-ground at 3,100 VDC, and output-to-ground at 750 VDC. The D1SE models are convection-cooled and rated for operation in the -25°C to 70°C ambient temperature range, with derating above 55°C.
Series certifications include IEC/EN/UL/CSA 61010-1, 61010-2-201, 62368-1 (Ed.3), and IS 13252-1 standards. The power supplies also are CE and UKCA marked to the Low Voltage, EMC, and RoHS Directives, and meet EN 55011-B and CISPR11-B radiated and conducted emissions.
The series also complies with EN 61000-3-2 (Class A) harmonic currents and IEC/EN 61000-6-2 immunity standards. The power supplies come with a three-year warranty.
The post Compact DIN-rail power supplies deliver high efficiency appeared first on EDN.
ABLIC upgrades battery-less water leak detection sensor

ABLIC Inc. upgrades its CLEAN-Boost energy-harvesting technology for the U.S. and EU markets. The battery-less drip-level water leak sensor now offers approximately twice the communication range of its predecessor and an operating temperature range expanded to 85°C from 60°C.
ABLIC said it first launched the CLEAN-Boost energy harvesting technology in 2019 to generate, store, boost, and transform microwatt-level energy into electricity for wireless data transmission. Since that launch, the Japan-market model earned positive evaluations from over 80 customers, and given increased inquiries from U.S. and European customers, the company obtained the necessary certifications from the U.S. Federal Communications Commission and the EU’s Conformité Européenne, confirming compliance with key standards.
CLEAN-Boost can be used in any facility where a water leak poses a potential risk. It uses microwatt energy sources to generate electricity from the leaking water itself and transmits leak signals wirelessly. The latest enhancements enable the sensor’s use in a wider range of applications and high-temperature environments, the company said.
Applications where addressing water leaks is critical include automotive parts factories with stamping processes, chemical and pharmaceutical plants, and food processing facilities as well as in aging buildings where pipes may have weakened or in high-temperature operations such as data centers and server rooms.
(Source: ABLIC Inc.)
ABLIC claims the water leak sensor is the industry’s first sensor capable of detecting minute drops of water. It can detect as little as three drops of water (150 µL minimum). In addition, operating without an external power source eliminates the need for major installation work or battery replacement, making it suited for retrofitting into existing infrastructures.
The water leak sensor also helps reduce environmental impact by eliminating the need to replace or dispose of a battery. For example, the sensor has been certified as a MinebeaMitsumi Group “Green Product” for outstanding contribution to the environment.
ABLIC’s CLEAN-Boost technology works by capturing and amplifying microwatt-level environmental energy previously considered too minimal to use. It combines energy storage and boosting components, designed for ultra-low power consumption. The boost circuit operates at 0.35 V for the efficient use of 1 μW of input power. It incorporates a low-power data transmission method that optimizes the timing between power generation and signal transmission, ensuring maximum efficiency and stable operation even under extremely limited power.
(Source: ABLIC Inc.)
The sensor features simple add-on installation for easy integration and sends wireless alerts to safeguard against catastrophic water damage.
The sensor technology is available as a wireless tag (134 × 10 × 18 mm, with the main body measuring 65 × 10 × 18 mm) or as sensor ribbons in 0.5-m, 2.0-m, and 5.0-m versions, measuring 700 × 13 × 8 mm, 2200 × 13 × 8 mm, and 5200 × 13 × 8 mm, respectively. They can be connected up to 15 m.
The post ABLIC upgrades battery-less water leak detection sensor appeared first on EDN.
My 100-MHz VFC – the hardware version
“Facts are stubborn things” (John Adams, et al).
I added two 50-ohm outputs to the schematic of my published voltage-to-frequency converter (VFC) circuit (Figure 1). Then, I designed a PCB, purchased the (mostly) surface-mount components, loaded and re-flow soldered them onto the PCB, and then tested the design.
Figure 1 VFC design that operates from 100 kHz to beyond 100 MHz with a single 5.25-V supply, providing square wave outputs at 1/2 and 1/4 the main oscillator frequency.
The hardware implementation of the circuit can be seen in Figure 2.
Figure 2 The hardware implementation of the 100-MHz VFC, created in order to root out the facts that can only be obtained after the circuit is built.
My objective was to get the facts about the operation of the circuit.
Theory and simulation are important, but the facts are known only after the circuit is built and tested. That is when the unintended/unexpected consequences are seen.
The circuit mostly performed as expected, but there were some significant issues that had to be addressed in order to get the circuit performing well.
Sensitivity of the v-to-f
My first concern was the high sensitivity of the circuit to minute changes in the input voltage. The sensitivity is 100 MHz per 5 volts, i.e., 20 MHz per volt. That means a 1-mV change on the input results in a 20-kHz change in the output frequency!
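Expressed as a one-line helper (a sketch; the 20-MHz/V gain comes from the numbers above), the noise-to-jitter mapping is simply:

```c
/* Input-referred noise maps directly to output frequency deviation
 * through the VFC's 20-MHz/V tuning gain (100 MHz over 5 V). For
 * example, vfc_deviation_hz(0.001) returns 20 kHz. */
static double vfc_deviation_hz(double vnoise_volts) {
  const double gain_hz_per_volt = 20e6;
  return gain_hz_per_volt * vnoise_volts;
}
```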
So, how do you supply an input voltage that is almost totally devoid of the noise and/or ripple that would cause jitter on the oscillator signal? To deal with this problem, I used a battery supply, four alkaline batteries in series, connected to a 10-turn, 100-kΩ potentiometer to drive the input of the circuit with about 0 to 6 V. This worked quite well. I added a 10-kΩ resistor in series with the non-inverting input of U1 for protection against overvoltage.
Problems and fixes
The first unexpected problem was that the NE555 timer did not provide sufficient drive to the voltage inverter circuit and the voltage doubler circuit. This one is on me; I didn’t look carefully at the datasheet, which says the part can supply a lot of output current, but at high current, the output voltage drops so much that the inverter and the doubler circuits don’t provide enough output voltage. The LTspice model I used for simulation was also highly unrealistic; I recommend against using it!
I fixed this by using a 74HC14 Schmitt trigger chip to replace the NE555 timer chip. The 74HC14 provides plenty of current and voltage to drive the two circuits. I implemented the 74HC14 circuitry as an outboard attachment to the main PCB.
I changed the output of the voltage doubler circuit to a regulated 6 V (R16 changed to 274 Ω and R18 to 3.74 kΩ, and D8, D9 changed to SD103). This allows U1 to operate with an input voltage of up to about 5.9 V. Also, I substituted a TLV9162 dual op-amp for U1/U2 because the cost of the TLV9162 is much less than that of the LT1797.
With the correct voltages supplied to U1/U2, I began testing the circuit, and I found that the oscillator would hang at a frequency of about 2 MHz. This was caused by the paralleled Schmitt trigger inverters. One inverter would switch before the other one, which would then sink the current from the inverter that had switched to the HIGH output state, and the oscillator would stop functioning. Paralleling inverters, which are driven by a relatively slowly falling (or rising) input signal, is definitely not a viable idea!
To fix the problem, I removed U4 from the circuit and put a 22-Ω resistor in series with the output of inverter U3 to lessen the current load on it, and the oscillator operated as expected.
I made some changes to the current-to-voltage converter circuit to provide more adjustment range and to use the optimum values for the 5-V supply. I changed R8 to 3.09 kΩ, potentiometer R9 to 1 kΩ, and R13 to 2.5 kΩ.
Adjustments
There are two adjustments provided: R9 is an adjustment for the current-to-voltage converter U2, and R11 is an offset current adjustment.
I adjusted R9 to set the oscillator frequency to 100 MHz with the input voltage set to 5.00 V, and then adjusted R11 at 2 MHz.
The percent error of the circuit increases at the lower frequencies, possibly due to diode leakage currents or to nonlinear behavior of the frequency-to-voltage converter consisting of D2-D4 and C8-C11.
Test results
With the noted changes implemented, I began testing the VFC. The problem of jitter on the output signal was apparent, especially at the lower frequencies.
I realized that ripple and noise on the 5-V supply would cause jitter on the output signal. As noted on the schematic, the oscillator frequency is a function of the supply voltage.
To avoid this problem, I once again opted to use batteries to provide the supply voltage. I used six alkaline batteries to supply about +9 V and regulated the voltage down to +5 V with an LM317T regulator and a few other components.
This setup achieves about the minimum ripple and noise on the supply and the minimum oscillator jitter. The remaining possible sources of noise/jitter are the switching supplies for U1, the feedback voltage to U1, and the switching on and off of the counters and the inverters, which can cause noise on the +5-V supply.
The frequency versus input voltage plot is not as linear as expected, but it is pretty good over a wide range of input voltage from 50 mV to 5.00 V for a corresponding frequency range of 1.07 MHz to 103.0 MHz (Figure 3 and Figure 4). The percent error versus frequency is shown in Figure 5.

Figure 3 The frequency from 1.07 MHz to 103.0 MHz versus input voltage from 50 mV to 5.00 V.

Figure 4 The frequency (up to 2 MHz) versus input voltage when Vin < 0.1 V.

Figure 5 The percent error versus frequency.
Waveforms
Some waveforms are shown in Figure 6, Figure 7, Figure 8, and Figure 9. Most are from the divide-by-2 output because it is more visually interesting than the 3.4-ns output from the oscillator (multiply the divide-by-2 frequency by 2 to get the oscillator frequency).
The input voltage ranges from 10 mV to 5 V to produce the 200 kHz to 100 MHz oscillator/inverter output.
Figure 6 Oscilloscope waveform with a divide-by-two output at 100 kHz.

Figure 7 Oscilloscope waveform with a divide-by-two output at 500 kHz.

Figure 8 Oscilloscope waveform with a divide-by-two output at 5 MHz.

Figure 9 Oscilloscope waveform with a divide-by-two output at 50 MHz.
Figure 10 displays the output of the oscillator/inverter at 100 MHz. Figure 11 shows the 3.4 ns oscillator/inverter output pulse.

Figure 10 Oscilloscope waveform with the oscillator output at 100 MHz.

Figure 11 Oscilloscope waveform with a 3.4-ns oscillator pulse.
The facts
So, here are the facts.
The two inverters in parallel did not work in this application. This was fixed by eliminating one of them and putting a larger resistor in series with the output of the remaining one to reduce the current load on it.
The high sensitivity of the circuit to the input voltage presents a challenge in practice. Generating a sufficiently quiet input voltage is difficult.
Battery operation provides some help, but this presents its own challenges in practice. Noise on the 5-V supply is a related problem. The supply for the second divide-by-two circuit, U7, must be tightly regulated and extremely free of noise and ripple to minimize jitter on the oscillator signal.
And, as noted above, some changes in the values of several components were necessary to get acceptable operation.
Finally, more accurate voltage-versus-frequency operation at the lower frequencies will require more careful engineering. I leave it to the user to work this out, if necessary.
At this point, I am satisfied with the circuit as it is (I feel that it is time to take a break!).
Some suggestions for improved results
The circuit is compromised by the challenge of making it work with a single 5-V supply. It would be less challenging if separate, well-regulated, well-filtered supplies were used for U1/U2: for example, 14 V regulated down to 11 V for the positive supply, and -5 V regulated down to -2.5 V for the negative supply (use linear regulators for both supplies!).
The input could then range from 0 to 10 V, which would reduce the input sensitivity by a factor of two and make it easier to design quieter supplies for the input amplifier and current-to-voltage circuits, U1/U2.
At the lower frequencies, some investigation should be done to expose the causes of the nonlinearity in that frequency range, and to indicate changes that would improve the circuit operation.
Another option would be to split the operation into two ranges, such as 100 kHz to 1 MHz and 1 MHz to 100 MHz.
Final fact
The operation of the circuit is pretty impressive when the circuit is modified as suggested. I think actualizing an oscillator that provides an output from 200 kHz to 113 MHz is quite a remarkable result. Thanks to the late Jim Williams [2] and to the lively Stephen Woodward [3] for leading the way to the implementation of this circuit!
Jim McLucas retired from Hewlett-Packard Company after 30 years working in production engineering and on the design and test of analog and digital circuits.
References/Related Content
- A simulated 100-MHz VFC
- 1-Hz to 100-MHz VFC features 160-dB dynamic range
- 100-MHz VFC with TBH current pump
- Take-Back-Half precision diode charge pump
The post My 100-MHz VFC – the hardware version appeared first on EDN.
Protecting precision DACs against industrial overvoltage events

In industrial applications using digital-to-analog converters (DACs), programmable logic controllers (PLCs) set an analog output voltage to control actuators, motors, and valves. PLCs can also regulate manufacturing parameters such as temperature, pressure, and flow.
In these environments, the DAC output may require overvoltage protection from accidental shorts to higher-voltage power supplies and other sustained high-voltage miswired connections. You can protect precision DAC outputs in two different ways, depending on whether the DAC output buffer has an external feedback pin.
Overvoltage damage
There are two potential consequences should an accidental sustained overvoltage event occur at the DAC output.
First, if the DAC output is forced to drive more current than it can sustain, damage may occur as the output buffer sources or sinks excessive current. This current limit may also be reached if the output voltage is shorted to ground or to another voltage within the supply range of the DAC.
Second, electrostatic discharge (ESD) diodes connected to the supply and ground rails can source and sink current during sustained overvoltage events, as shown in Figure 1 and Figure 2. In many DACs, a pair of internal ESD diodes that shunt any momentary ESD current away from the device helps protect the output pin. In Figure 1, a large positive voltage causes an overvoltage event at the output and forward-biases the positive AVDD ESD diode. The VOUT pin sinks current from the overvoltage event into the positive supply.

Figure 1 Current is shunted to positive supply during a positive overvoltage event. Source: Texas Instruments
In Figure 2, the negative overvoltage sources current from the negative supply through the AVSS ESD diode to VOUT.

Figure 2 Current is shunted from the negative supply during a negative overvoltage event. Source: Texas Instruments
In Figure 1 and Figure 2, internal ESD diodes are not designed to sink or source current associated with a sustained overvoltage event, which will typically damage the ESD diodes and voltage output. Any protection should limit this current during an overvoltage event.
Overvoltage protection
While two basic components will protect precision DAC outputs from an overvoltage event, the protection topology for the DAC depends on the internal or external feedback connection for the DAC output buffer.
If the DAC output does not have an external voltage feedback pin, you can set up protection as a basic buffer using an operational amplifier (op amp) and a current protection device at its output. If the DAC has an external voltage feedback pin, then you would place the current protection device at the output of the DAC, with the op amp driving the feedback sense pin.
Let’s explore both topologies.
Figure 3 shows protection for a DAC without a feedback sense pin, with the op amp set up as a unity gain buffer. Inside the op amp feedback loop, an eFuse opens the circuit if the op amp output current exceeds a set level.

Figure 3 Output protection for a DAC works without a feedback pin. Source: Texas Instruments
Again, if the output terminal voltage is within the supplies of the op amp, the output current is limited by the op amp’s short-circuit current limit. An output terminal driven beyond the supplies of the op amp, as in a positive or negative overvoltage, will cause the supply rails to source or sink additional current, as previously shown in Figure 1 and Figure 2.
Because the output terminal connects to the op amp’s negative input, the op amp input must have some sort of overvoltage protection. For this protection circuit, an op amp with internal overvoltage protection that extends far beyond the op amp supply voltage is selected. When using a different op amp, series resistance that limits the input current can help protect the inputs.
The circuit shown in Figure 3 will also work for a precision DAC with a feedback sense pin. The DAC feedback sense pin would simply connect to the DAC VOUT pin, using the same protection buffer circuit. If you want to use the DAC feedback to reduce errors from long output and feedback sense wire resistances, you need to use a different topology for the protection circuit.
If the DAC has an external feedback sense pin, changing the protection preserves the sense connection. In Figure 4, the eFuse connects directly to the DAC output. The eFuse opens if the DAC output current exceeds a set level. Here, the op amp acts as a unity gain buffer to drive the DAC sense feedback pin.

Figure 4 This output protection for a DAC works with a feedback pin. Source: Texas Instruments
In both topologies, shown in Figure 3 and Figure 4, the two protection devices have the same requirements. For the eFuse, the break current must be lower than the current level that might damage the device it’s protecting. For the op amp, input protection is required, as the output overvoltage may significantly exceed the rail voltage. In operation, the offset voltage must be lower than the intended error, and the bandwidth must be high enough to satisfy the system requirements.
Overvoltage protection component selection
To help you select the required components, here are the system requirements for operation and protection:
- Supply range: ±15 V
- Sustained overvoltage protection: ±32 V
- Current at sustained overvoltage: approximately 30 mA
- Output protection should introduce as little error as possible, based on offset or bandwidth
The primary criterion for op amp selection was overvoltage protection of the inputs. For instance, the super-beta inputs of the OPA206 precision op amp have integrated input overvoltage protection that extends up to ±40 V beyond the op amp supply voltage. Figure 5 shows the input bias current relative to the input common-mode voltage when powering the OPA206 with ±15-V supplies. Within the ±32-V range of overvoltage protection, the input bias current stays below ±5 mA.

Figure 5 Input bias current for the OPA206 is shown versus the input common-mode voltage. Source: Texas Instruments
The OPA206 offset voltage is very low (typically ±4 µV at 25°C and ±55 µV from –40°C to 125°C) and the buffer contributes little error to the DAC output. When using a different op amp without integrated input overvoltage protection, adding series resistance at the inputs will limit the input current.
The TPS2661 eFuse was originally intended as a current-loop protector with input and output miswiring protection. If its output voltage exceeds the rail supplies, TPS2661 detects miswiring and cuts off the current path, restoring the current path when the output overvoltage returns below the supply.
If the output current exceeds the TPS2661’s 32-mA current-limit protection, the device breaks the connection and then retests the current path for 100 ms every 800 ms. The equivalent resistance of the device is a maximum of 12.5 Ω, which enables a high-current transmission output without large voltage headroom and footroom loss at the output.
Beyond the op amp and eFuse protection, applying an optional transient voltage suppression (TVS) diode will provide additional surge protection as long as the chosen breakdown voltage is higher than any sustained overvoltage. If the breakdown voltage is less than the sustained overvoltage, then an overvoltage can damage the TVS diode. In this circuit, the expected sustained overvoltage is ±32 V, with an optional TVS3301 device that has a bidirectional 33-V breakdown for surge protection.
Another TVS3301 added to the ±15-V supplies is an additional option. An overvoltage on the terminal will direct any fault current into the power supplies. If the supply cannot sink the current or is not fast enough to respond to the overvoltage, then the TVS diode absorbs excess current as the overvoltage occurs.
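As a quick plausibility check, the selection rules above reduce to a few comparisons. In the sketch below, the values are the ones quoted in this article, except the damage-current threshold, which is a hypothetical placeholder for whatever device is being protected:

```c
#include <assert.h>

int main(void) {
  const double v_fault = 32.0;             /* V, sustained overvoltage to survive */
  const double v_opamp_safe = 15.0 + 40.0; /* V, OPA206 inputs protected to 40 V beyond the 15-V rail */
  const double i_efuse = 0.032;            /* A, TPS2661 current-limit level */
  const double v_tvs = 33.0;               /* V, TVS3301 bidirectional breakdown */
  const double i_damage = 0.050;           /* A, hypothetical damage threshold of the protected output */

  assert(v_opamp_safe > v_fault); /* op-amp inputs tolerate the fault voltage */
  assert(i_efuse < i_damage);     /* eFuse breaks the path before damaging current flows */
  assert(v_tvs > v_fault);        /* TVS must not conduct during the sustained fault */
  return 0;
}
```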
Constructed circuit: Precision DAC without a feedback sense pin
You can build and test the overvoltage protection buffer from Figure 3 with the DAC81416-08 evaluation module (EVM). This multichannel DAC doesn’t have an external feedback sense pin. Figure 6 shows the constructed protection buffer tested on one of the DAC channels.

Figure 6 The constructed overvoltage protection circuit employs the DAC81416-08 evaluation module. Source: Texas Instruments
Ramping the output of DAC from –10 V to 10 V drives the buffer input. Figure 7 shows that the measured offset of the buffer is less than 10 µV over the full range.

Figure 7 Protection buffer output offset error is shown versus buffer input voltage. Source: Texas Instruments
Connecting the output to a variable supply tests the output overvoltage connection, driving the output voltage and then recording the current at the output. The measurement starts at –32 V, increases to +32 V, then changes back from +32 V down to –32 V. Figure 8 shows the output current set to overvoltage and its recovery from overvoltage.

Figure 8 Protection buffer output current is shown versus buffer output overvoltage. Source: Texas Instruments
The measurements show hysteresis in both the positive and negative overvoltage of the protection buffer, which comes from extra voltage across the series resistor at the output of the TPS26611. During normal operation (without an overvoltage), the TPS26611 current path turns off when the output is driven above 17.2 V, at which point the remaining output current flows through the OPA206’s input overvoltage protection. As the output voltage decreases, the TPS26611 current path conducts current again when the output drops below 15 V.
When driving the output to a negative overvoltage, the current path turns off at –17.5 V and turns on again when the output returns above –15 V.
Constructed circuit: Protection for a DAC with output feedback
Like the previous circuit, you can test the overvoltage protection from Figure 4. This test attaches an overvoltage protection buffer to the output of a DAC with an external feedback sense pin. The DAC8760 EVM tests for an output overvoltage event. As shown in Figure 9, a 1-kΩ resistor placed between VOUT and +VSENSE prevents the output buffer feedback loop of the DAC from breaking if the feedback sense signal is cut.

Figure 9 This constructed overvoltage protection circuit is used with the DAC8760 evaluation module. Source: Texas Instruments
Ramping the output of the DAC from –10 V to +10 V drives the feedback buffer input. Shown in Figure 10, the offset of the feedback to +VSENSE is again <10 μV over the full range.

Figure 10 Feedback buffer offset error is shown versus buffer input voltage. Source: Texas Instruments
The DAC is again set to 0 V, with the output connected to a variable supply to check the output current against output overvoltage. Figure 11 shows the output current as the output voltage increases from –32 V to +32 V and decreases to –32 V.

Figure 11 Protection buffer output current is shown versus buffer output overvoltage. Source: Texas Instruments
As before, there is current path hysteresis. The TPS26611 current path shuts off when the output goes above 16.5 V and turns on when the output returns to about 15 V. For the negative overvoltage, the current path turns off when the output is below –16.8 V and turns on again when the output returns above –15 V.
Two overvoltage protection topologies
Industrial control applications for analog outputs require specialized protection from harsh conditions. This article presented two topologies for precision DAC protection against sustained overvoltage events:
- DAC without external feedback: Protecting the output from an overvoltage by using an op amp buffer with an eFuse in the op amp output.
- DAC with external feedback: Protecting the output from overvoltage by using an eFuse to limit the DAC output current and with an op amp acting as a unity gain buffer for sense feedback.
In both cases, the tested circuits show a limited offset error (<10 µV) through the range of operation (±10-V output) and protection from sustained overvoltage of ±32 V.
Joseph Wu is an applications engineer for digital-to-analog converters (DACs) at Texas Instruments.
Art Kay is an applications engineer for precision signal conditioning products at Texas Instruments.
Related Content
- Pressures grow for circuit protection
- Overvoltage-protection circuit saves the day
- Do You Have the Right Power Supply Protections?
- How to prevent overvoltage conditions during prototyping
- Adding over-voltage protection to your mobile/portable embedded design
The post Protecting precision DACs against industrial overvoltage events appeared first on EDN.
EcoFlow’s DELTA 3 Plus and Smart Extra Battery: Product line impermanence curiosity

Earlier this summer, I detailed my travails struggling with (and ultimately recovering from) buggy firmware updates I’d been “pushed” on my combo of EcoFlow’s DELTA 2 portable power station and its Smart Extra Battery supplemental capacity companion:

Toward the end of that earlier writeup, I mentioned that I’d subsequently been offered a further firmware update, which (for, I think, understandable reasons) I was going to hold off on tackling for a while, until I saw whether other, braver souls had encountered issues of their own with it:
DELTA 2 firmware update success(es)
In late August, I eventually decided to take the upgrade plunge, after experiencing the latest in an occasional but persistent series of connectivity glitches. Although I could still communicate with the device “stack” via Bluetooth, its Wi-Fi connection had dropped and needed to be reinstated within the app. The firmware update’s documentation indicated it’d deal with this issue:

The upgrade attempt was thankfully successful this time, although candidly, I can’t say that the Wi-Fi connectivity is noticeably more robust now than it had been previously:

I was then immediately offered another firmware upgrade, which I’d heard on Facebook’s “EcoFlow Official Club“ group had just been released. Tempting fate, I plunged ahead again:

Thankfully, this one completed uneventfully as well:

As did another offered to me in early September (gotta love that descriptive “Fixes some known issues” phrasing, eh? I’m being sarcastic, if it wasn’t already obvious…):

There have been no more firmware upgrades in the subsequent ~1.5 months. More generally, since the DELTA 2 line is mature and EcoFlow has moved on to the DELTA 3 series, I’m hopeful for ongoing software stability (accompanied by no more functional misbehavior) at this point.
Initial impressions of DELTA 3 devices
Speaking of which, what about the DELTA 3 Plus and its accompanying Smart Extra Battery, mentioned at the end of my earlier write-up, which EcoFlow support had sent as replacements for the DELTA 2-generation predecessors prior to my successful resurrection of them?

Here again is what the new DELTA 3 stack (left) looks like next to its DELTA 2 precursors (right):

The stored-charge capacity of the DELTA 2 is 1,024 Wh, which matches that of the DELTA 3 Plus. I’d mentioned in my earlier DELTA 2 coverage that the DELTA 3 Plus was based on newer, denser (but still LiFePO₄, aka LFP) 40135 batteries. Why then are the two portable power stations nearly the same size? The answer, of course, is that there’s more than just batteries inside ‘em:
The size differential presumably induced by the differing battery generations is much more evident between the two generations of Smart Extra Batteries…which are (essentially) just batteries.
Despite their 1,024-Wh capacity commonality, the DELTA 3 version (again on top of the stack at left in the earlier photo) has dimensions of 15.7 x 8 x 7.8 in (398 x 200 x 198 mm) and weighs 21.1 lbs. (9.6 kg).
Its DELTA 2-generation predecessor at top right weighs essentially the same (21 lbs./9.5 kg), yet it’s more than 40% taller (15.7 × 8.3 × 11.1 in./40 × 21.1 × 28.1 cm).
By the way, back when I was fearing that the base DELTA 2 unit was “toast” but hoping that its Smart Extra Battery might still be saved, I confirmed EcoFlow’s claim that the DELTA 3 Plus worked not only with multiple capacity variants of the DELTA 3-generation Smart Extra Battery, for capacity expansion up to 5 KWh, but also with my prior-generation storage capacity expansion solution:


Aside from the height-therefore-volume differential, the most visually obvious other difference between the two portable power stations is the relocation of AC power outlets to the front panel in the DELTA 3 Plus case. Other generational improvements include:
- Faster sub-10-ms switchover from wall outlet-sourced to inverter-generated AC for more robust (albeit not comprehensive…no integrated surge protection support, for example) UPS functional emulation
- Improved airflow, leading to claimed 30-dB noise levels in normal operation
- A recharge cycle rating boosted to 4,000, courtesy of the newer-generation batteries
- Inverter-generated AC output power up to 3600 W (X-Boost surge)
- Higher power, albeit fewer, USB-A ports (two, each 36 W, compared to two 12 W and two 18 W)
- Higher power USB-C ports (two, each 140 W, versus two 100 W)
- And faster charging (sub-1-hour to 100%), enabled by factors such as:
- AC input power up to 1500 W
- Solar input power up to 1000 W (two 500-W-max XT60i connectors) with maximum power point tracking (MPPT) support
- And simultaneous multi-charging capabilities from solar and AC when both are available, prioritizing the former to save money.
Speaking of solar, I haven’t forgotten about the two 220W panels:

And a more recently acquired 400W one:

For which I’m admittedly belated in translating testing aspiration into reality. The issue at the moment isn’t snow on the deck, although that’ll be back soon enough. It’s high winds:

That said, my procrastination has had at least one upside: a larger number of interesting options (and combinations) to evaluate than before. Now, I can tether either the two parallel-connected 220-W panels or the single 400-W one to the DELTA 2’s single XT60i input.
And for the DELTA 3 Plus, thanks to the aforementioned dual XT60i inputs and 1000-W peak input support, I can hook up all three panels simultaneously, although doing so will likely take up a notable chunk of my deck real estate in the process. Please remain on standby for observations and results to come!
More on charging and firmware upgrading
Two other comments to note, in closing:
Speaking of the XT60i input, how do I charge the DELTA 3 Plus (or the DELTA 2, for that matter) in-vehicle using EcoFlow’s 800-W Alternator Charger (which, yes, I already realize that I’m also overdue in installing and then testing!):

Specifically, when the portable power station is simultaneously connected to its Smart Extra Battery companion? Ordinarily, the Alternator Charger would tether to the portable power station over the XT150 connector-equipped cable that comes bundled with the former:

But, in this particular case, the portable power station’s XT150 interface is already in use (and for that matter, isn’t even an available option for lower-end devices such as my RIVER 2):

The trick is to instead use one of the two orange-color XT60i connectors also shown at the bottom left of the DELTA 3 stack setup photo.
EcoFlow alternatively bundles an XT60 connector-equipped cable with the 500-W version of the Alternator Charger, intended for use with smaller vehicles and/or more modest portable power stations, but that same cable is also available for standalone purchase:

It’ll be lower power (therefore slower) than the XT150 alternative, but it’s better than nothing! And it’ll recharge both the portable power station and (via the separate XT150-to-XT150 cable) the tethered Smart Extra Battery. Just be sure to secure the stack so it doesn’t tip over!
Also, regarding firmware upgrades, I’d been pleasantly surprised to not receive any DELTA 3 Plus update notifications since late April, when it and its Smart Extra Battery companion came into my possession. Software-stability nirvana ended in late August, alas, and since the update documentation specifically mentioned a “Better experience when using the device with an extra battery,” I decided to proceed. Unfortunately, my first several subsequent upgrade attempts terminated prematurely, at random percentage-complete points, after slower-than-usual progress, and with worrying failure status messages:

Eventually, I crossed my fingers and followed the guidance to restart the device, a process which, I eventually realized after several frustrating, unsuccessful initial attempts, can only be accomplished with the portable power station disconnected from AC. The device was stuck in a partially updated state post-reboot, albeit thankfully still accessible over Bluetooth:

And doubly thankfully, this time the upgrade completed successfully to both the DELTA 3 Plus:


And its tethered Smart Extra Battery:

Phew! As before with the DELTA 2, I think I’ll delay my next update (which hasn’t been offered yet) until I wait an appropriate amount of time and then check in with the user community first for feedback on their experiences. And with that, I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Firmware-upgrade functional defection and resurrection
- EcoFlow’s Delta 2: Abundant Stored Energy (and Charging Options) for You
- Portable power station battery capacity extension: Curious coordination
- EcoFlow’s RIVER 2: Svelte portable power with lithium iron phosphate fuel
The post EcoFlow’s DELTA 3 Plus and Smart Extra Battery: Product line impermanence curiosity appeared first on EDN.
Mastering multi-physics effects in 3D IC design

The semiconductor industry is at a pivotal moment as the limits of Moore’s Law motivate a transition to three-dimensional integrated circuit (3D IC) technology. By vertically integrating multiple chiplets, 3D ICs enable advances in performance, functionality, and power efficiency. However, stacking dies introduces layers of complexity driven by multi-physics interactions—thermal, mechanical, and electrical—which must be addressed at the start of design.
This shift from two-dimensional (2D) system-on-chips (SoC) to stacked 3D ICs fundamentally alters the design environment. 2D SoCs benefit from well-established process design kits (PDKs) and predictable workflows.

Figure 1 The 3D IC technology takes IC design to another dimension. Source: Siemens EDA
In contrast, 3D integration often means combining heterogeneous dies that use different process nodes and new interconnection technologies, presenting additional variables throughout the design and verification flow. Multi-physics phenomena are no longer isolated concerns—they are integral to the design’s overall success.
Multi-physics: a new design imperative
The vertical structure of 3D ICs—interconnected by through-silicon vias and micro-bumps and enclosed in advanced packaging materials—creates a tightly coupled environment where heat dissipation, mechanical integrity, and electrical behavior interact in complex ways.
For 2D chips, thermal and mechanical checks were often deferred until late in the cycle, with manageable impact. For 3D ICs, postponing these analyses risks costly redesigns or performance and reliability failures.
Traditional SoC design often relies on high-level RTL descriptions, where many physical optimizations are fixed early and are hard to change later. On the other hand, 3D IC’s complexity and physical coupling require earlier feedback from physics-driven analysis during RTL and floorplanning, enabling designers to make informed choices before costly constraints are locked in.
A chiplet may operate within specifications in isolation, yet face degraded reliability and performance once subjected to the real-world conditions of a 3D stack. Only early, predictive, multi-physics analysis can reveal—and enable cost-effective mitigation of—these risks.
Continuous multi-physics evaluation must begin at floorplanning and continue through every design iteration. Each change to layout, interfaces, or materials can introduce new thermal or mechanical stress concerns, which must be re-evaluated to maintain system reliability and yield.
Moving IC design to the system level
3D ICs require close coordination among specialized teams: die designers, interposer experts, packaging engineers, and, increasingly, electronic system architects and RTL developers. Each group has its own toolchains and data standards, often with differing net naming conventions, component orientations, and functional definitions, leading to communication and integration challenges.
Adding to the internal challenges, 3D IC design often involves chiplets from multiple vendors, foundries and OSAT providers, each with different methodologies and data formats. While using off-the-shelf chiplets offers flexibility and accelerates development, integration can expose previously hidden multi-physics issues. A chiplet that works in isolation may fail specification after stacking, emphasizing the need for tighter industry collaboration.
Addressing these disparities requires a system-level owner, supported by comprehensive EDA platforms that unify methodologies and aggregate data across domains. This ensures consistency and reduces errors inherent to siloed workflows. For EDA vendors, developing inclusive environments and tools that enable such collaboration is essential.
Inter-company collaboration now also depends on more robust data exchange tools and methodologies. Here, EDA vendors play a central role by providing platforms and standards for seamless communication and data aggregation between fabless houses, foundries, and OSATs.
At the industry level, new standards and 3D IC design kits—such as those developed by the CDX working group and industry partners—are emerging to address these challenges, forging a common language for describing 3D IC components, interfaces, and package architectures. These standards are vital for enabling reliable data exchanges and integration across diverse teams and supply chain partners.

Figure 2 A view of a chiplet design kit (CDK) based on the JEDEC JEP30 part model. Source: Siemens EDA
Programs such as TSMC’s 3Dblox initiative provide upfront placement and interconnection definitions, reducing ambiguity and fostering tool interoperability.
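As a rough illustration of what such a kit must communicate, the sketch below models a handful of the fields a CDK standardizes: part identity, process node, die outline, bump map, and operating limits. The real JEP30 part model and 3Dblox formats are richer and structured differently; every field name here is an assumption invented for this example.

# Illustrative only: a toy stand-in for the kind of information a CDK
# must carry so that tools from different vendors agree on geometry,
# interfaces, and operating limits. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class BumpSite:
    net: str           # net name as the chiplet vendor publishes it
    x_um: float        # bump location in chiplet-local coordinates
    y_um: float
    signal_type: str   # e.g. "power", "ground", "d2d_data"

@dataclass
class ChipletDescriptor:
    part_number: str
    process_node: str                 # heterogeneous stacks mix nodes
    outline_um: tuple                 # (width, height) of the die
    bumps: list = field(default_factory=list)
    t_max_c: float = 105.0            # max junction temperature
    stress_limit_mpa: float = 200.0   # max allowed mechanical stress

cpu = ChipletDescriptor(
    part_number="CPU-X1", process_node="N3", outline_um=(5000.0, 5000.0),
    bumps=[BumpSite("VDD", 100.0, 100.0, "power"),
           BumpSite("D2D_TX0", 250.0, 100.0, "d2d_data")])
print(cpu.part_number, len(cpu.bumps), "bumps,", cpu.t_max_c, "C limit")

When every tool in the flow reads the same descriptor, disagreements over net names, orientations, or limits surface as data errors during integration rather than as silicon failures.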
Digital twin and predictive multi-physics
The digital twin concept extends multi-physics analysis throughout the entire product lifecycle. Maintaining an accurate digital representation, from transistor-level detail up to full system integration, enables predictive simulation and optimization that accounts for interactions at the package, board, and even system level. By transferring multi-physics results between levels of abstraction, teams can verify that chiplet behavior under thermal and mechanical loads accurately predicts final product reliability.

Figure 3 A digital twin extends multi-physics analysis throughout the entire product lifecycle. Source: Siemens EDA
For 3D ICs, chiplet electrical models must be augmented by multi-physics data captured from stack-level simulations. Back-annotating temperature and stress outcomes from package-level analysis into chiplet netlists provides the foundation for more accurate system-level electrical simulations. This feedback loop is becoming a critical part of sign-off, ensuring that each chiplet performs within its operational window in the assembled system.
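A minimal sketch of that back-annotation step, assuming a simple linear delay derating with temperature (the derating coefficient, temperatures, and chiplet names are all hypothetical):

# Hypothetical names and numbers throughout; the derating model is a
# deliberately simple linear one.

# Per-chiplet temperatures from a (notional) stack-level thermal solve:
stack_temps_c = {"cpu": 98.0, "sram": 76.0, "io": 62.0}

NOMINAL_T_C = 25.0     # temperature at which chiplets were characterized
DELAY_TEMPCO = 0.0015  # assumed +0.15% delay per degree C above nominal

def back_annotate(nominal_delays_ns, temps_c):
    """Scale each chiplet's nominal path delay to its in-stack temperature."""
    return {name: d * (1.0 + DELAY_TEMPCO * (temps_c[name] - NOMINAL_T_C))
            for name, d in nominal_delays_ns.items()}

nominal = {"cpu": 1.00, "sram": 1.40, "io": 2.10}  # characterized in isolation
in_stack = back_annotate(nominal, stack_temps_c)

for name, d in in_stack.items():
    print(f"{name}: {nominal[name]:.3f} ns -> {d:.3f} ns in stack")

The same pattern applies to stress-dependent parameters; the essential move is that in-stack conditions, not datasheet corners, drive the final electrical sign-off.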
Keeping it cool
Thermal management is the single most important consideration for die-to-die interfaces in 3D ICs. The vertical proximity of active dies can lead to rapid heat accumulation and risks such as thermal runaway, in which rising temperature increases leakage power, which in turn generates more heat and further degrades electrical performance. Because the materials in the stack expand at different rates as temperature climbs, the same heat also creates mechanical stress; differential expansion can warp dies and threaten the reliability of interconnects.
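The toy model below shows why runaway is a feedback problem: leakage power grows with temperature, and temperature grows with power, so the stack either settles at a stable operating point or diverges. All constants are invented for illustration.

# Invented constants; a deliberately minimal runaway model.
T_AMB = 45.0          # ambient temperature, deg C
P_DYN = 4.0           # dynamic power, W (temperature-independent here)
P_LEAK_25 = 0.5       # assumed leakage power at 25 C, W
LEAK_DOUBLE_C = 20.0  # assumed: leakage doubles every 20 C

def settle(r_th, iters=60):
    """Iterate T -> T_amb + R_th * P(T); return None on runaway."""
    t = T_AMB
    for _ in range(iters):
        p = P_DYN + P_LEAK_25 * 2 ** ((t - 25.0) / LEAK_DOUBLE_C)
        t_new = T_AMB + r_th * p
        if t_new > 200.0:           # past any survivable junction temp
            return None             # thermal runaway
        if abs(t_new - t) < 1e-6:   # converged to a stable operating point
            return t_new
        t = t_new
    return t

for r_th in (3.0, 6.0, 9.0):  # K/W: deeper stacks mean higher R_th
    t = settle(r_th)
    print(f"R_th = {r_th} K/W:", "runaway" if t is None else f"settles at {t:.1f} C")

The knee between "settles" and "runaway" is exactly what early thermal analysis must locate, because adding tiers to a stack raises the effective thermal resistance.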
To enable predictive design, the industry needs standardized “multi-physics Liberty files” that define temperature and stress dependencies of chiplet blocks, akin to the Liberty files used for place-and-route in 2D design. These files will allow designers to evaluate whether a chiplet within the stack stays within its safe operating range under expected thermal conditions.
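No such format is standardized yet, so the following is purely a notional sketch: a delay-derate table indexed by temperature and mechanical stress, interpolated much as 2D Liberty tables are interpolated across voltage and temperature corners. The axes, table values, and granularity are invented.

# Notional "multi-physics Liberty" entry: axes and values are invented.
TEMP_AXIS_C = [25.0, 75.0, 125.0]
STRESS_AXIS_MPA = [0.0, 100.0, 200.0]
# DERATE[i][j]: delay multiplier at TEMP_AXIS_C[i] and STRESS_AXIS_MPA[j]
DERATE = [[1.00, 1.02, 1.05],
          [1.06, 1.09, 1.13],
          [1.14, 1.18, 1.24]]

def _segment(axis, x):
    """Index of the lower breakpoint of the segment containing x."""
    i = sum(a <= x for a in axis) - 1
    return max(0, min(len(axis) - 2, i))

def delay_derate(temp_c, stress_mpa):
    """Bilinear interpolation into the derate table, Liberty-style."""
    i = _segment(TEMP_AXIS_C, temp_c)
    j = _segment(STRESS_AXIS_MPA, stress_mpa)
    u = (temp_c - TEMP_AXIS_C[i]) / (TEMP_AXIS_C[i+1] - TEMP_AXIS_C[i])
    v = (stress_mpa - STRESS_AXIS_MPA[j]) / (STRESS_AXIS_MPA[j+1] - STRESS_AXIS_MPA[j])
    return (DERATE[i][j] * (1-u) * (1-v) + DERATE[i][j+1] * (1-u) * v
            + DERATE[i+1][j] * u * (1-v) + DERATE[i+1][j+1] * u * v)

# Conditions back-annotated from a package-level solve:
print(f"derate at 98 C, 140 MPa: {delay_derate(98.0, 140.0):.4f}")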
Multi-physics analysis must also support back-annotation of temperature and stress information to individual chiplets, ensuring electrical models reflect real operating environments. While toolchains for this process are evolving, the trajectory is clear: comprehensive, physics-aware simulation and data exchange will be integral to sign-off for 3D IC design, ensuring reliable operation and optimal system performance.
Shaping the future of 3D IC design
The journey into 3D IC technology marks a transformative period for the semiconductor industry, fundamentally reshaping how complex systems are designed, verified, and manufactured.
Its success hinges on predictive, early multi-physics analysis and collaboration across the supply chain. Establishing common standards, enabling system-level optimization, and adopting the digital twin concept will drive superior performance, reliability, and time-to-market.
Pioneers in 3D IC design, spanning EDA vendors, semiconductor companies, and system developers, are moving toward unified, system-level platforms: a “single cockpit” environment in which designers can iterate and optimize across different types of multi-physics analyses.

Figure 4 The Innovator3D IC solution provides the single, integrated cockpit 3D IC designers need. Source: Siemens EDA
With continued advances in EDA tools, methodologies and collaboration, the semiconductor industry can unlock the full promise of 3D integration, delivering the next generation of electronic systems that push the boundaries of capability, efficiency, and innovation.
Todd Burkholder is a senior editor at Siemens DISW. For over 30 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of high-tech and EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.
Tarek Ramadan is applications engineering manager for the 3D-IC Technical Solutions Sales (TSS) organization at Siemens EDA. He drives EDA solutions for 2.5D-IC, 3D-IC, and wafer level packaging applications. Prior to that, Tarek was a technical product manager in the Siemens Calibre design solutions organization. Ramadan holds BS and MS degrees in electrical engineering from Ain Shams University, Cairo, Egypt.
John Ferguson brings over 25 years of experience at Siemens EDA to his role as senior director of product management for Calibre 3D IC solutions. With a background in physics and deep expertise in design rule checking (DRC), John has been at the forefront of 3D IC technology development for more than 15 years, witnessing its evolution from early experimental approaches to today’s production-ready solutions.
Related Content
- Putting 3D IC to work for you
- Making your architecture ready for 3D IC
- The multiphysics challenges of 3D IC designs
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
- Automating FOWLP design: A comprehensive framework for next-generation integration
The post Mastering multi-physics effects in 3D IC design appeared first on EDN.