EDN Network

Voice of the Engineer

Designer’s guide: PMICs for industrial applications

Thu, 11/13/2025 - 16:00

Power management integrated circuits (PMICs) are an essential component in the design of any power supply. Their main function is to integrate several complex features, such as switching and linear power regulators, electrical protection circuits, battery monitoring and charging circuits, energy-harvesting systems, and communication interfaces, into a single chip.

Compared with a solution based on discrete components, PMICs greatly simplify the development of the power stage, reducing the number of components required, accelerating validation and therefore the design’s time to market. In addition, PMICs qualified for specific applications, such as automotive or industrial, are commercially available.

In industrial and industrial IoT (IIoT) applications, PMICs address key power challenges such as high efficiency, robustness, scalability, and flexibility. The use of AI techniques is being investigated to improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Achieving high efficiency

Industrial and IIoT applications require multiple power lines with different voltage and current requirements. Logic processing components, such as microcontrollers (MCUs) and FPGAs, require very low voltages, while peripherals, such as GPIOs and communication interfaces, require voltages of 3.3 V, 5 V, or higher.

These requirements are now met by multichannel PMICs, which integrate switching buck, boost, or buck-boost regulators, as well as one or more linear regulators, typically of the low-dropout (LDO) type, and power switches, which are very useful for motor control. Switching regulators offer very high efficiency but generate electromagnetic noise related to the charging and discharging of the inductor.

LDO regulators, which achieve high efficiency only when the output voltage differs slightly from the input voltage to the converter, are instead suitable for low-noise applications such as sensors and, more generally, where analog voltages with very low amplitude need to be managed.

Besides multiple power rails, industrial and IIoT applications require solutions with high efficiency. This requirement is essential for prolonging battery life, reducing heat dissipation, and saving space on the printed-circuit board (PCB) by using fewer components.

To achieve high efficiency, one of the first parameters to consider is the quiescent current (IQ), which is the current that the PMIC draws when it is not supplying any load, while keeping the regulators and other internal functions active. A low IQ value reduces power losses and is essential for battery-powered applications, enabling longer battery operation.

PMICs are now commercially available that integrate regulators with very low IQ values, on the order of a microamp or less. However, a low IQ value should not compromise transient response, another parameter to consider for efficiency. Transient response, or response time, indicates the time required by the PMIC to adapt to sudden load changes, such as when switching from no load to active load. In general, depending on the specific application, it is advisable to find the right compromise between these two parameters.

Nordic Semiconductor’s nPM2100 (Figure 1) is an example of a low-power PMIC. Integrating an ultra-efficient boost regulator, the nPM2100 provides a very low IQ, addressing the needs of various battery-powered applications, including Bluetooth asset tracking, remote controls, and smart sensors.

The boost regulator can be powered from an input range of 0.7 to 3.4 V and provides an output voltage in the range of 1.8 V to 3.3 V, with a maximum output current of 150 mA. It also integrates an LDO/load switch that provides up to 50-mA output current with an output voltage in the range of 0.8 V to 3.0 V.

The nPM2100’s regulator offers an IQ of 150 nA and achieves up to 95% power conversion efficiency at 50 mA and 90.5% efficiency at 10 µA. The device also has a low-current ship mode of 35 nA that allows it to be transported without removing the installed battery. Multiple options are available for waking up the device from this low-power state.

An ultra-low-power wakeup timer is also available. This is suitable for timed wakeups, such as Bluetooth LE advertising performed by a sensor that remains in an idle state for most of the time. In this hibernate state, the maximum current absorbed by the device is 200 nA.

Figure 1: Nordic Semiconductor’s nPM2100 PMIC can be easily interfaced to low-power systems-on-chip or MCUs, such as Nordic’s nRF52, nRF53, and nRF54 Series. (Source: Nordic Semiconductor)

Another relevant parameter that helps to increase efficiency is dynamic voltage and frequency scaling (DVFS).

When powering logic devices built with CMOS technology, such as common MCUs, processors, and FPGAs, a distinction can be made between static and dynamic power consumption. While the former is simply the product of the supply voltage and the current drawn in idle conditions, dynamic power is expressed by the following formula:

Pdynamic = C × VCC² × fSW

where C is the load capacitance, VCC is the voltage applied to the device, and fSW is the switching frequency. This formula shows that the power dissipated has a quadratic relationship with voltage and a linear relationship with frequency. The DVFS technique works by reducing these two electrical parameters and adapting them to the dynamic requirements of the load.
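To put illustrative (not device-specific) numbers on this: with C = 100 pF, VCC = 1.2 V, and fSW = 100 MHz, Pdynamic = 100 pF × (1.2 V)² × 100 MHz = 14.4 mW. Scaling down to 0.9 V and 50 MHz yields 100 pF × (0.9 V)² × 50 MHz ≈ 4.1 mW, a reduction of roughly 72%, most of it from the quadratic voltage term.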

Consider now a sensor that transmits data sporadically and for short intervals, or a data-center board running AI models. By reducing both voltage and frequency when full performance is not needed, DVFS can optimize power management, enabling significant improvements in energy efficiency.

NXP Semiconductors’ PCA9460 is a 13-channel PMIC specifically designed for low-power applications. It supports the i.MX 8ULP family of ultra-low-power processors, providing four high-efficiency 1-A step-down regulators, four VLDOs, one SNVS LDO, and four 150-mΩ load switches, all enclosed in a 7 × 6-bump-array, 0.4-mm-pitch WLCSP42 package.

The four buck regulators offer an ultra-low IQ of 1.5 μA in low-power mode and 5.5 μA in normal mode, while the four LDOs achieve an IQ of 300 nA. Two buck regulators support smart DVFS, enabling the PMIC to always set the right voltage on the processors it is powering. This feature, enabled through specific pins of the PMIC, minimizes the overall power consumption and increases energy efficiency.

Energy harvesting

The latest generation of PMICs has introduced the possibility of obtaining energy from various sources such as light, heat, vibrations, and radio waves, opening up new scenarios for systems used in IIoT and industrial environments. This feature is particularly important in IIoT and wireless devices, where maintaining a continuous power source for long periods of time is a significant challenge.

Nexperia’s NEH71x0 low-power PMIC (Figure 2) is a full power management solution integrating advanced energy-harvesting features. Harvesting energy from ambient power sources, such as indoor and outdoor PV cells, kinetic (movement and vibrations), piezo, or a temperature gradient, this device allows designers to extend battery life or recharge batteries and supercapacitors.

With an input power range from 15 μW to 100 mW, the PMIC achieves an efficiency up to 95%, features an advanced maximum power-point tracking block that uses a proprietary algorithm to deliver the highest output to the storage element, and integrates an LDO/load switch with a configurable output voltage from 1.2 V to 3.6 V.

Reducing the bill of materials and PCB space, the NEH71x0 eliminates the need for an external inductor, offering a compact footprint in a 4 × 4-mm QFN28 package. Typical applications include remote controls, smart tags, asset trackers, industrial sensors, environmental monitors, tire pressure monitors, and any other IIoT application.

Figure 2: Nexperia’s NEH71x0 energy-harvesting PMIC can convert energy with an efficiency of up to 95%. (Source: Nexperia)

PMICs for AI and AI in PMICs

To meet the growing demand for power in the industrial sector and data centers, Microchip Technology Inc. has introduced the MCP16701, a PMIC specifically designed to power high-performance logic devices, such as Microchip’s PIC64GX microprocessors and PolarFire FPGAs. The device integrates eight 1.5-A buck converters that can be connected in parallel, four 300-mA LDOs, and a controller for driving external MOSFETs.

The MCP16701 offers a small footprint of 8 × 8 mm in a VQFN package (Figure 3), enabling a 48% reduction in PCB area and a 60% reduction in the number of components compared with a discrete solution. All converters, which can be connected in parallel to achieve a higher output current, share the same inductor.

A unique feature of this PMIC is its ability to dynamically adjust the output voltage on all converters in steps of 12.5 mV or 25 mV, with an accuracy of ±0.8% over the temperature range. This flexibility allows designers to precisely adjust the voltage supplied to loads, optimizing energy efficiency and system performance.

Microchip’s MCP16701.Figure 3: Microchip’s MCP16701 enables engineers to fine-tune power delivery, improving system efficiency and performance. (Source: Microchip Technology Inc.)

As in many areas of modern electronics, AI techniques are also being studied and introduced in the power management sector. This area of study is referred to as cognitive power management. PMICs, for example, can use machine-learning techniques to predict load evolution over time, adjusting the output voltage value in real time.

Tools such as PMIC.AI, developed by AnDAPT, use AI to optimize PMIC architecture and component selection, while Alif Semiconductor’s autonomous intelligent power management (aiPM) tool dynamically manages power based on AI workloads. These solutions enable voltage scaling, increasing system efficiency and extending battery life.

The post Designer’s guide: PMICs for industrial applications appeared first on EDN.

Basic design equations for three precision current sources

Thu, 11/13/2025 - 15:00

A frequently encountered category of analog system component is the precision current source. Many good designs are available, but concise and simple arithmetic for choosing the component values necessary to tailor them to specific applications isn’t always provided. I guess some designers feel such tedious details are just too trivially obvious to merit mentioning. But I sometimes don’t feel that. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

Here are some examples I think some folks might find useful. I hope they won’t feel too terribly obvious, trivial, or tedious.

The circuit in Figure 1 is versatile and capable of high performance.

Figure 1 A simple high-accuracy current source that can source current with better than 1% accuracy.

With suitable component choices, this circuit can source current with better than 1% accuracy, handle Q1 drain currents ranging from < 1 mA to > 10 A, and work with power supply voltages (Vps) from < 5 V to > 100 V.

Here are some helpful hints for resistor values, resistor wattages, and safety zener D1. First note

  • Vps = power supply voltage
  • R1(W), Q1(W), and R2(W) = respective component power dissipation
  • Id = Q1 drain current in amps

Adequate heat sinking for Q1(W) is assumed. Another assumption is:

Vps > Q1 (Vgs ON voltage) + 1.24 V + R1 × 100 µA

The design equations are as follows:

  1. R1 = (Vps – 1.24)/1 mA
  2. R1(W) = R1/1E6
  3. Q1(W) = (Vps – Vload – 1.24) × Id
  4. R2 = 1.24/Id
  5. R2(W) = 1.24 × Id
  6. R2 precision: 1% or better at the temperature produced by the heat dissipation of #5
  7. D1 is needed only if Vps > 15 V
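For example, with illustrative values (mine, not from the original design) of Vps = 24 V, Vload = 12 V, and Id = 100 mA:

  • R1 = (24 – 1.24)/1 mA ≈ 22.8 kΩ
  • R1(W) = 22,760/1E6 ≈ 23 mW
  • Q1(W) = (24 – 12 – 1.24) × 0.1 ≈ 1.08 W, so a heat sink is needed
  • R2 = 1.24/0.1 = 12.4 Ω
  • R2(W) = 1.24 × 0.1 = 124 mW
  • D1 is required, since Vps > 15 V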

Figure 2 substitutes an N-channel MOSFET for Figure 1’s Q1 and an anode-referenced 431 regulator chip in place of the cathode-referenced 4041 to produce a very similar current sink. Its design equations are identical.

Figure 2 A simple, high-accuracy current sink uses identical design math.

Okay, okay, I can almost hear the (very reasonable) objection that, for these simple circuits, the design math really was pretty much tedious, trivial, and obvious. 

So I’ll finish with a far less obvious and more creative example from frequent contributor Christopher Paul’s DI “Precision, voltage-compliant current source.”

Taking parts parameters from Christopher Paul’s Figure 3, we can define:

  1. Vs = chosen voltage across the R3R4 divider
  2. V5 = voltage across R5
  3. Id = chosen application-specific M1 drain current

Then:

  1. Vs = 5 V
  2. V5 = 5 V – 0.65 V = 4.35 V
  3. R5 = 4.35 V/150 µA = 30 kΩ
  4. I4 = Id – 290 µA
  5. R3 = 1.24/I4
  6. R4 = (Vs – 1.24)/I4 = 3.76/I4
  7. R3(W) = 1.24 × I4
  8. R4(W) = 3.76 × I4
  9. M1(W) = Id × (Vps – Vd)

For example, if Id = 50 mA and Vps = 15 V, then:

  •  I4 = 49.7 mA
  • R5 = 30 kΩ
  • R4 = 75.7 Ω
  • R3 = 25.2 Ω
  • R3(W) = 1.24 I4 = 100 mW
  • R4(W) = 3.76 I4 = 200 mW
  • M1(W) = 500 mW

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Basic design equations for three precision current sources appeared first on EDN.

How to limit TCP/IP RAM usage on STM32 microcontrollers

Thu, 11/13/2025 - 09:14

The TCP/IP functionality of a connected device uses dynamic RAM allocation because of the unpredictable nature of network behavior. For example, if a device serves a web dashboard, we cannot control how many clients might connect at the same time. Likewise, if a device communicates with a cloud server, we may not know in advance how large the exchanged messages will be.

Therefore, limiting the amount of RAM used by the TCP/IP stack improves the device’s security and reliability, ensuring it remains responsive and does not crash due to insufficient memory.

Microcontroller RAM overview

On microcontrollers, it is common for the available memory to reside in several non-contiguous regions. Each of these regions can have different cache characteristics, performance levels, or power properties, and certain peripheral controllers may only support DMA operations to specific memory areas.

Let’s take the STM32H723ZG microcontroller as an example. Its datasheet, in section 3.3.2, defines embedded SRAM regions spread across the D1, D2, and D3 power domains, plus the tightly coupled ITCM and DTCM memories.

Here is an example linker script snippet for this microcontroller, similar to what CubeMX generates (the region sizes shown are representative):
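    MEMORY
    {
      ITCMRAM (xrw) : ORIGIN = 0x00000000, LENGTH = 64K
      DTCMRAM (xrw) : ORIGIN = 0x20000000, LENGTH = 128K
      FLASH   (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K
      RAM_D1  (xrw) : ORIGIN = 0x24000000, LENGTH = 320K
      RAM_D2  (xrw) : ORIGIN = 0x30000000, LENGTH = 32K
      RAM_D3  (xrw) : ORIGIN = 0x38000000, LENGTH = 16K
    }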

Ethernet DMA memory

We can clearly see that RAM is split into several regions. The STM32H723ZG device includes a built-in Ethernet MAC controller that uses DMA for its operation. It’s important to note that the DMA controller is in domain D2, meaning it cannot directly access memory in domain D1. Therefore, the linker script and source code must ensure that Ethernet DMA data structures are placed in domain D2; for example, in RAM_D2.

To achieve this, first define a section in the linker script and place it in the RAM_D2 region. A representative snippet (the section and input-section names here are illustrative) might be:
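    .eth_ram (NOLOAD) :
    {
      . = ALIGN(4);
      *(.eth_dma)   /* Ethernet DMA descriptors and buffers */
      . = ALIGN(4);
    } >RAM_D2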

Second, the Ethernet driver source code must place the respective data into that section. Using GCC section attributes, it may look like this (type and macro names follow the ST HAL; the section name matches the snippet above):
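    /* Place Ethernet DMA descriptors and receive buffers into the ".eth_dma"
       input section defined in the linker script above. */
    __attribute__((section(".eth_dma"))) static ETH_DMADescTypeDef DMARxDscrTab[ETH_RX_DESC_CNT];
    __attribute__((section(".eth_dma"))) static ETH_DMADescTypeDef DMATxDscrTab[ETH_TX_DESC_CNT];
    __attribute__((section(".eth_dma"))) static uint8_t Rx_Buff[ETH_RX_DESC_CNT][ETH_MAX_PACKET_SIZE];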

Heap memory

The next important part is the microcontroller’s heap memory. The standard C library provides two basic functions for dynamic memory allocation: malloc() and free().

Typically, ARM-based microcontroller SDKs are shipped with the ARM GCC compiler, which includes the Newlib C library. This library, like many others, has a concept of so-called “syscalls”: low-level routines, called by the standard C functions, that the user can override. In our case, the malloc() and free() standard C routines call the _sbrk() syscall, which firmware code can override.

It’s typically done in the syscalls.c or sysmem.c file; a simplified version may look like this:
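    /* A simplified _sbrk() in the spirit of CubeMX's sysmem.c; the symbol
       names (_end, _estack) come from the usual CubeMX linker script, and
       the stack-collision check is reduced to its essence. */
    #include <errno.h>
    #include <stddef.h>
    #include <stdint.h>

    extern uint8_t _end;     /* end of statically allocated data */
    extern uint8_t _estack;  /* initial stack pointer */

    void *_sbrk(ptrdiff_t incr) {
        static uint8_t *heap_end = &_end;
        uint8_t *prev_heap_end = heap_end;

        /* Refuse to grow the heap into the stack region */
        if (heap_end + incr > &_estack) {
            errno = ENOMEM;
            return (void *)-1;
        }
        heap_end += incr;
        return (void *)prev_heap_end;
    }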

As we can see, the _sbrk() implementation operates on a single memory region, between the end of the statically allocated data and the stack.

That means such an implementation cannot make use of several RAM regions. There are more advanced implementations, like FreeRTOS’s heap_5.c, which can use multiple RAM regions and provides pvPortMalloc() and pvPortFree() functions.

In any case, standard C functions malloc() and free() provide heap memory as a shared resource. If several subsystems in a device’s firmware use dynamic memory and their memory usage is not limited by code, any of them can potentially exhaust the available memory. This can leave the device in an out-of-memory state, which typically causes it to stop operating.

Therefore, the solution is to have every subsystem that uses dynamic memory allocation operate within a bounded memory pool. This approach protects the entire device from running out of memory.

Memory pools

The idea behind a memory pool is to split a single shared heap—with a single malloc and free—into multiple “heaps” or memory pools, each with its own malloc and free. The pseudo-code might look like this:
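    /* Pseudo-code: one bounded pool per subsystem, each with its own
       malloc/free (names illustrative). */
    struct mem_pool {
        uint8_t *buf;   /* backing storage, fixed at creation */
        size_t size;    /* pool capacity */
        /* ... allocator bookkeeping (free list, etc.) ... */
    };

    void *pool_malloc(struct mem_pool *pool, size_t n);  /* allocate from this pool only */
    void pool_free(struct mem_pool *pool, void *ptr);    /* return memory to this pool */

    /* The networking subsystem can never consume more than its own budget */
    static uint8_t net_buf[50 * 1024];
    static struct mem_pool net_pool = { net_buf, sizeof(net_buf) };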

The next step is to make each firmware subsystem use its own memory pool. This can be achieved by creating a separate memory pool for each subsystem and using the pool’s malloc and free functions instead of the standard ones.

In the case of a TCP/IP stack, this would require all parts of the networking code—driver, HTTP/MQTT library, TLS stack, and application code—to use a dedicated memory pool. This can be tedious to implement manually.

RTOS memory pool API

Some RTOSes provide a memory pool API. For example, Zephyr provides memory heaps; a minimal usage sketch (the heap name and size are illustrative) follows:
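    /* Zephyr: a dedicated, statically defined heap for the networking code */
    #include <zephyr/kernel.h>

    K_HEAP_DEFINE(net_heap, 50 * 1024);

    void demo(void) {
        /* Allocate without blocking; returns NULL if the pool is exhausted */
        void *p = k_heap_alloc(&net_heap, 1024, K_NO_WAIT);
        if (p != NULL) {
            /* ... use the buffer ... */
            k_heap_free(&net_heap, p);
        }
    }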

Another example of an RTOS that provides memory pools is ThreadX, with its byte pools; again, a minimal sketch with illustrative names:
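    /* ThreadX: a byte pool bounding the networking subsystem's memory */
    #include "tx_api.h"

    static TX_BYTE_POOL net_pool;
    static UCHAR net_pool_mem[50 * 1024];

    void net_pool_init(void) {
        tx_byte_pool_create(&net_pool, "net pool", net_pool_mem, sizeof(net_pool_mem));
    }

    void *net_malloc(ULONG size) {
        VOID *ptr = TX_NULL;
        /* Returns TX_SUCCESS on success; ptr remains NULL otherwise */
        tx_byte_allocate(&net_pool, &ptr, size, TX_NO_WAIT);
        return ptr;
    }

    void net_free(void *ptr) {
        tx_byte_release(ptr);
    }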

Using an external allocator

Another alternative is to use an external allocator. There are many implementations available. Here are some notable ones:

  • umm_malloc is specifically designed to work with the ARM7 embedded processor, but it should work on many other 32-bit processors, as well as 16- and 8-bit processors.
  • o1heap is a highly deterministic constant-complexity memory allocator designed for hard real-time high-integrity embedded systems. The name stands for O(1) heap.

Example: Mongoose and O1Heap

The Mongoose embedded TCP/IP stack makes it easy to limit its memory usage, because Mongoose uses its own functions mg_calloc() and mg_free() to allocate and release memory. The default implementation uses the C standard library functions calloc() and free(), but Mongoose allows users to override these functions with their own implementations.

We can pre-allocate memory for Mongoose at firmware startup, for example, 50 KB, and use the o1heap library to manage that preallocated block, implementing mg_calloc() and mg_free() on top of it. Here are the exact steps:

  1. Fetch o1heap.c and o1heap.h into your source tree
  2. Add o1heap.c to the list of your source files
  3. Preallocate a memory chunk at firmware startup
  4. Implement mg_calloc() and mg_free() using o1heap and the preallocated memory chunk, as sketched below
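A sketch of steps 3 and 4, assuming Mongoose is built to route its allocations through these overrides (the arena size and any names other than the o1heap and Mongoose calls are illustrative):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include "o1heap.h"

    static O1HeapInstance *s_heap;
    /* Step 3: preallocate a 50-KB arena; o1heap requires an aligned base */
    static uint8_t s_arena[50 * 1024] __attribute__((aligned(O1HEAP_ALIGNMENT)));

    void net_heap_init(void) {
        s_heap = o1heapInit(s_arena, sizeof(s_arena));
    }

    /* Step 4: route all Mongoose allocations through the bounded arena */
    void *mg_calloc(size_t count, size_t size) {
        void *p = o1heapAllocate(s_heap, count * size);
        if (p != NULL) memset(p, 0, count * size);
        return p;
    }

    void mg_free(void *ptr) {
        o1heapFree(s_heap, ptr);
    }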

You can see the full implementation procedure in the video linked at the end of this article.

Avoid memory exhaustion

This article provides information on the following design aspects:

  • Understand STM32’s complex RAM layout
  • Ensure Ethernet DMA buffers reside in accessible memory
  • Avoid memory exhaustion by using bounded memory pools
  • Integrate the o1heap allocator with Mongoose to enforce TCP/IP RAM limits

By isolating the network stack’s memory usage, you make your firmware more stable, deterministic, and secure, especially in real-time or resource-constrained systems.

If you would like to see a practical application of these principles, see the complete tutorial, including a video with a real-world example, which describes how RAM limiting is implemented in practice using the Mongoose embedded TCP/IP stack. This video tutorial provides a step-by-step guide on how to use Mongoose Wizard to restrict TCP/IP networking on a microcontroller to a preallocated memory pool.

As part of this tutorial, a real-time web dashboard is created to show memory usage in real time. The demo uses an STM32 Nucleo-F756ZG board with built-in Ethernet, but the same approach works seamlessly on other architectures too.

Sergey Lyubka is the co-founder and technical director of Cesanta Software Ltd. He is known as the author of the open-source Mongoose Embedded Web Server and Networking Library, which has been on the market since 2004 and has over 12K stars on GitHub. Sergey tackles the issue of making embedded networking simpler to access for all developers.

Related Content

The post How to limit TCP/IP RAM usage on STM32 microcontrollers appeared first on EDN.

Predictive maintenance at the heart of Industry 4.0

Wed, 11/12/2025 - 15:42

In the era of Industry 4.0, manufacturing is no longer defined solely by mechanical precision; it’s now driven by data, connectivity, and intelligence. Yet downtime remains one of the most persistent threats to productivity. When a machine unexpectedly fails, the impact ripples across the entire digital supply chain: Production lines stop, delivery schedules are missed, and teams scramble to diagnose the issue. For connected factories running lean operations, even a short interruption can disrupt synchronized workflows and compromise overall efficiency.

For decades, scheduled maintenance has been the industry’s primary safeguard against unplanned downtime. Maintenance was rarely data-driven but rather scheduled at rigid intervals based on estimates (in essence, educated guesses). Now that manufacturing is data-driven, maintenance should be data-driven as well.

Time-based, or ISO-guided, maintenance can’t fully account for the complexity of today’s connected equipment because machine behaviors vary by environment, workload, and process context. The timing is almost never precisely correct. This approach risks failing to detect problems that flare up before scheduled maintenance, often leading to unexpected downtime.

In addition, scheduled maintenance can never account for faulty replacement parts or unexpected environmental impacts. Performing maintenance before it is necessary is inefficient as well, leading to unnecessary downtime, expenses, and resource allocations. Maintenance should be performed only when the data says maintenance is necessary and not before; predictive maintenance ensures that it will.

To realize the promise of smart manufacturing, maintenance must evolve from a reactive (or static) task into an intelligent, autonomous capability, which is where Industry 4.0 becomes extremely important.

From scheduled service to smart systems

Industry 4.0 is defined by convergence: the merging of physical assets with digital intelligence. Predictive maintenance represents this convergence in action. Moving beyond condition-based monitoring, AI-enabled predictive maintenance systems use active AI models and continuous machine learning (ML) to recognize early indicators of equipment failure, and to alert stakeholders, before those indicators turn into costly downtime.

The most advanced implementations deploy edge AI directly to the individual asset on the factory floor. Rather than sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated. This not only reduces latency and bandwidth use but also ensures real-time insight and operational resilience, even in low-connectivity environments. In an Industry 4.0 context, edge intelligence is critical for achieving the speed, autonomy, and adaptability that smart factories demand.

AI-enabled predictive maintenance systems use AI models and continuous ML to detect early indicators of equipment failure before they trigger costly downtime. (Source: Adobe AI Generated)

Edge intelligence in Industry 4.0

Traditional monitoring solutions often struggle to keep pace with the volume and velocity of modern industrial data. Edge AI addresses this by embedding trained ML models directly into sensors and devices. These models continuously analyze vibration, temperature, and motion signals, identifying patterns that precede failure, all without relying on cloud connectivity.

Because the AI operates locally, insights are delivered instantly, enabling a near-zero-latency response. Over time, the models adapt and improve, distinguishing between harmless deviations and genuine fault signatures. This self-learning capability not only reduces false alarms but also provides precise fault localization, guiding maintenance teams directly to the source of a potential issue. The result is a smarter, more autonomous maintenance ecosystem aligned with Industry 4.0 principles of self-optimization and continuous learning.

Building a future-ready predictive maintenance framework

To be truly future-ready for Industry 4.0, a predictive maintenance platform must seamlessly integrate advanced intelligence with intuitive usability. It should offer effortless deployment, compatibility with existing infrastructure, and scalability across diverse equipment and facilities. Features such as plug-and-play setup and automated model deployment minimize the load on IT and operations teams. Customizable sensitivity settings and severity-based analytics empower tailored alerting aligned with the criticality of each asset.

Scalability is equally vital. As manufacturers add or reconfigure production assets, predictive maintenance systems must seamlessly adapt, transferring models across machines, lines, or even entire facilities. Hardware-agnostic solutions offer the flexibility required for evolving, multivendor industrial environments. The goal is not just predictive accuracy but a networked intelligence layer that connects all assets under a unified maintenance framework.

Real-world impact across smart industries

Predictive maintenance is a cornerstone of digital transformation across manufacturing, energy, and infrastructure. In smart factories, predictive maintenance monitors robotic arms, elevators, lift motors, conveyors, CNC machines, and more, targeting the most critical assets in connected production lines. In energy and utilities, it safeguards turbines, transformers, and storage systems, preventing performance degradation and ensuring safety. In smart buildings, predictive maintenance monitors HVAC systems and elevators, giving advance notice of needed maintenance or replacement for assets that are often hard to monitor and that cause great discomfort and lost productivity during unexpected downtime.

The diversity of these applications underscores an Industry 4.0 truth: Interoperability and adaptability are as important as intelligence. Predictive maintenance must be able to integrate into any operational environment, providing actionable insights regardless of equipment age, vendor, or data format.

Intelligence at the industrial edge

The edgeRX platform from TDK SensEI, for example, embodies the next generation of Industry 4.0 machine-health solutions. Combining industrial-grade sensors, gateways, dashboards, and cloud interfaces into a unified system, edgeRX delivers immediate visibility into machine-health conditions. Deployed in minutes, it immediately begins collecting data to build ML models in the cloud, which are then deployed back to the sensor device for real-time inference at the edge.

By processing data directly on-device, edgeRX eliminates the latency and energy costs of cloud-based analytics. Its ruggedized, IP67-rated hardware and long-life batteries make it ideal for demanding industrial environments. Most importantly, edgeRX learns continuously from each machine’s unique operational profile, providing precise, actionable insights that support smarter, faster decision-making.

TDK SensEI’s edgeRX advanced machine-health-monitoring platform (Source: TDK SensEI)

The road to autonomous maintenance

As Industry 4.0 continues to redefine manufacturing, predictive maintenance is emerging as a key enabler of self-healing, data-driven operations. EdgeRX transforms maintenance from a scheduled obligation into a strategic function—one that is integrated, adaptive, and intelligent.

Manufacturers evaluating their digital strategies should ask:

  • Are we able to remotely and simultaneously monitor, and receive alerts on, all our assets?
  • Are our automated systems capturing early, subtle indicators of failure?
  • Can our current solutions scale with our operations?
  • Are insights available in real time, where decisions are made?

If the answer is no, it’s time to rethink what maintenance means in the context of Industry 4.0. Predictive, edge-enabled AI solutions don’t just prevent downtime; they drive the autonomy, efficiency, and continuous improvement that define the next industrial revolution.

The post Predictive maintenance at the heart of Industry 4.0 appeared first on EDN.

A non-finicky, mass-producible audio frequency white noise generator

Wed, 11/12/2025 - 15:00

This project made me feel a kind of kinship with Diogenes, although I was searching for the item described in the title rather than for an honest man.

Figure 1 “Diogenes Looking for an Honest Man,” a painting attributed to Johann Heinrich Wilhelm Tischbein (1751-1829). The author of this DI has a more modest goal.

Wow the engineering world with your unique design: Design Ideas Submission Guide

I wanted a design that did not require the evaluation and selection of one out of a group of components. I’d tolerate (though not welcome) the use of frequency compensation and even an automatic gain control (AGC) to achieve predictable performance characteristics. Let’s call my desired design “reliably repeatable.”

Standard MLS digital circuit

Initially, I thought none of the listed accommodations would be necessary, and that a simple well-known digital circuit—a maximal length sequence (MLS) Generator [1]—would fit the bill. This circuit produces a pseudorandom sequence whose spectral characteristics are white. A general example of such is shown in Figure 2.

Figure 2 The general form of an MLS generator. Reference [1] lists a table of 2 to 5 specific taps for register lengths from N = 2 to 32 to produce repeating sequences of length 2^N – 1. Register initialization must include at least one non-zero value. The author first listened to a version using only one exclusive-OR gate with N = 31 registers, in which the outputs of only the 28th and 31st registers were sampled.
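In C, the tap arrangement described in the caption might be sketched as follows (an illustration with my own variable names, not the author’s ATtiny13A assembly):

    #include <stdint.h>

    static uint32_t lfsr = 1u;  /* register must start non-zero */

    /* 31-stage Fibonacci LFSR with taps at stages 28 and 31 (bit indices
       27 and 30); the sequence repeats every 2^31 - 1 bits */
    static inline uint8_t mls_next_bit(void) {
        uint8_t fb = (uint8_t)(((lfsr >> 27) ^ (lfsr >> 30)) & 1u);
        lfsr = ((lfsr << 1) | fb) & 0x7FFFFFFFu;  /* keep 31 stages */
        return (uint8_t)(lfsr & 1u);
    }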

It was simple to code up the one described in the Figure 2 caption with an ATtiny13A microprocessor and obtain a 1.35 µs clock period. Of course, validation is in the listening. And indeed, the predominant sound is the “shush” of white noise.

But there are also audible pops, clicks, and other unwanted artifacts in the background. I had a friend with hearing better than mine listen to confirm my audition’s disappointing conclusion. And so, I picked up my lantern and moved on to the next candidate.

Reverse-biased NPN

I was intrigued by reverse-biasing a transistor’s base-emitter junction with the collector floating (see Figure 3).

Figure 3 Jig for testing the noise characteristics of NPN transistors with reverse-biased base-emitter junctions.

I tested ten 2N3904 transistors with values of R equal to 1 kΩ, 10 kΩ, 100 kΩ, and 1 MΩ. Both DC voltages and frequency sweeps (of voltage per square-root spectral densities in units of dBVrms/√Hz) were collected.

It was evident that as R decreased, average noise decreased and DC voltages rose slightly, remaining in the range between 7.2 and 8.3 volts. This gave me hope that a simple AGC scheme in which the transistor bias current was varied might satisfy my requirements.

Alas, it was not to be. Figure 4a, Figure 4b, Figure 4c, and Figure 4d show spectral noise in the lower frequency range. (Additional filtering of the 18-V supply had no effect on the 60 Hz power line fundamental or harmonics—these were being picked up from my test environment. The 60-Hz fundamental’s level was about 10 µV rms.)

Figure 4a Note the power line harmonics “hum” problem that the “quiet orange” transistor in particular introduces.

Figure 4b Biasing the “orange” transistor at a lower current raised the noise and hid the power line harmonics, but not the fundamental.

Figure 4c As the bias current is reduced, some but not all transistors’ noises mask the 60 Hz fundamental.

Figure 4d Regardless of whether the power line noise can be masked or eliminated, it’s clear for all resistor R values that there is no consistent shape to the frequency response.

I’ve tried other transistors with similar results. Being unable to depend on a specific frequency response shape, the reverse-biased base-emitter transistor is not a suitable signal source for a reliably predictable design. It’s time to pick up the lantern again and continue the search.

A shunt regulator

Within several datasheets of components in the ‘431 family and in the TLVH431B’s in particular, there is a figure showing the devices’ equivalent input noise. See Figure 5.

Figure 5 The equivalent input noise and test circuit for the TLVH431B (Figure 5-9 in the part’s datasheet). Source: Texas Instruments

The almost 3 dB of rise in noise from 300 Hz down to 10 Hz could be compensated for if it were repeatable from device to device. I looked at the cathode of ten devices using the test jig of Figure 6. The spectral responses are presented in Figure 7.

Figure 6 The test jig for TLVH431B spectral noise. There was no significant difference in the results shown in Figure 7 when values of 1 kΩ and 10 kΩ were used for R. 100-kΩ and 1-MΩ resistances supplied insufficient currents for the devices’ operation.

Figure 7 The TLVH431B spectral noise, 10 samples with the same date code.

Although the TLVH431B is a better choice than the 2N3904, there are still variations in its noise levels, necessitating some sort of AGC. And the power line signals were still present, with no mitigation available from different values of R. The tested parts all have the same date code, and there are no numerical specs available for limits on noise amplitudes or frequency responses.

Who knows how other date codes would behave? I certainly can’t claim from the data that this component could be part of a “reliably repeatable” design as I defined the term. But you know what? Carrying this lantern around is getting to be pretty annoying.

Xorshift32

I kept thinking that there had to be a digital solution to this problem, even if it couldn’t be the one that produces an MLS. I did some research, and the option of what is called “xorshift” came up, specifically xorshift32 [2].

Xorshift32 starts by initializing a 32-bit variable to a non-zero value. A copy of this variable is created, and 13 zeros are left-shifted into the copy, eliminating the 13 left-most original register values.

The original and the shifted copy are bit-for-bit exclusive-OR’d and stored in the original variable. A copy of this result is made. 17 zeros are then right-shifted into the copy, eliminating the 17 right-most copy’s values. The shifted copy is again exclusive-OR’d bit-by-bit with the updated original register and stored in that register.

Again, a copy of the original’s latest update is made. Five zeros are left-shifted into the newest copy, which is then exclusive-OR’d with the latest original update and stored in that original. As this three-step process is repeated, a random sequence of length 2^32 – 1 consisting of unique 32-bit integers is generated.
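In C, the three-step process just described reduces to a few lines (a sketch; the shift constants are those given above):

    #include <stdint.h>

    static uint32_t state = 1u;  /* any non-zero seed */

    /* Each call visits the next of the 2^32 - 1 non-zero 32-bit states */
    static inline uint32_t xorshift32(void) {
        uint32_t x = state;
        x ^= x << 13;
        x ^= x >> 17;
        x ^= x << 5;
        state = x;
        return x;  /* bit 0 can be routed to an output pin */
    }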

This algorithm was coded into an ATtiny13A microprocessor running at a clock speed of 9.6 MHz, yielding a bit shift period of 5.8 µs. (Assembly source code and hex file are available upon request.) The least significant register bit was routed to bit 0 of the device’s PORTB (pin 5 of the eight-pin PDIP package).

This pin was AC-coupled to a power amplifier driving a Polk Audio bookshelf speaker. My friend and I agreed that all we heard was white noise; the pops and clicks of the MLS sequence were absent.

Figure 8 and Figure 9 display frequency sweeps of the voltage per square-root spectral densities of the MLS and the xorshift sequences.

Figure 8 Noise spectral densities from 4 to 1550 Hz of the two auditioned digital sequences produced with 5V-powered ATtiny13A microprocessors.

Figure 9 Noise spectral densities from 63 to 25000 Hz of the two auditioned digital sequences produced with 5V-powered ATtiny13A microprocessors.

There are a few takeaways from Figures 8 and 9.

The white noises of the sequences are at high enough levels to mask my testing environment’s power line fundamental and harmonics that are apparent when evaluating the 2N3904 and the TLVH431B.

The difference in levels of the two digital sequences is due to the higher clock rate of the MLS, which spreads the same total energy as the xorshift over a wider bandwidth and results in a lower energy density within any given band of frequencies in the audible range.

Finally, the xorshift32 has a dip of perhaps 0.1 dBVrms per root Hz at 25 kHz. If the ATtiny13A were clocked from an external 20-MHz source, even this small response dip would disappear.

Audibly pure white noise source

An audibly pure white noise source for the band from subsonic frequencies to 20 kHz can be had by implementing the xorshift32 algorithm on an inexpensive microprocessor.

The result is reliably repeatable, precluding the need to select an optimal component from a group. The voltage over the audio range is:

(10^(–39/20) Vrms/√Hz) · (20,000^0.5 √Hz),

which evaluates to a 1.6-Vrms signal. This method has none of the disadvantages of the analog noise sources investigated: there is no need to deal with low and uncertain signal levels (which would necessitate a large amount of gain and an AGC), with frequency shaping below 300 Hz or elsewhere, or with environmental power-line noise at levels comparable to the intentional noise.

I can finally put that darn lantern down. I wonder how Diogenes made out.

Related Content

References

  1. https://liquidsdr.org/doc/msequence/. In the table, the exponents of the polynomials in x are the outputs of the shift registers numbered so that the first (input) register is assigned the number 1.
  2. https://en.wikipedia.org/wiki/Xorshift

The post A non-finicky, mass-producible audio frequency white noise generator appeared first on EDN.

Compact DIN-rail power supplies deliver high efficiency

Tue, 11/11/2025 - 20:59

TDK Corp. adds a new single-phase series of DIN-rail-mount power supplies to the TDK-Lambda range of products for industrial and automation applications. The cost-effective D1SE entry-range series accepts an AC or DC input and is rated for continuous operation at 120 W, 240 W, or 480 W with a 24-V output. These power supplies deliver an efficiency of up to 95.1%, reducing energy consumption and internal losses, which lowers internal component temperatures and improves long-term product reliability.

TDK’s D1SE series of DIN-rail-mount power supplies (Source: TDK Corp.)

Thanks to the push-in wire terminations, the D1SE series can be quickly mounted, reducing installation time in a variety of control cabinets, machinery, and industrial production systems. In addition to a conventional 100 to 240-VAC nominal input, the D1SE is safety certified for operation from a 93 to 300-VDC supply. Designed to meet growing customer demand, the DC input addresses applications where the energy supply is coming from a common DC bus voltage or a battery.

The 120-W rated model can deliver a boost power of 156 W for 80 seconds; the 240-W rated model offers a boost of 312 W for 10 seconds; and the 480-W rated model provides a boost of 552 W for an extended 200 seconds. The 24-V output can be adjusted from 22.5 V to 29 V to allow compensation for cable drops, redundancy modules, or setting to non-standard output voltages.

All three power supplies are available with or without a DC-OK contact. For applications in challenging environments, a printed-circuit-board coating option is available, and all models feature a high-quality electrolytic capacitor which extends lifetime, according to TDK.

The DIN-rail-mount power supplies are housed in a rugged metal enclosure with a width of 38 mm for the 120-W models, 44 mm for the 240 W, and 60 mm for the 480 W. The narrow design saves space on the DIN rail for other components, the company said.

Other key specs include input-to-output isolation of 5,000 VDC, input-to-ground at 3,100 VDC, and output-to-ground at 750 VDC. The D1SE models are convection-cooled and rated for operation in the -25°C to 70°C ambient temperature range, with derating above 55°C.

Series certifications include IEC/EN/UL/CSA 61010-1, 61010-2-201, 62368-1 (Ed.3), and IS 13252-1 standards. The power supplies also are CE and UKCA marked to the Low Voltage, EMC, and RoHS Directives, and meet EN 55011-B and CISPR11-B radiated and conducted emissions.

The series also complies with EN 61000-3-2 (Class A) harmonic currents and IEC/EN 61000-6-2 immunity standards. The power supplies come with a three-year warranty.

The post Compact DIN-rail power supplies deliver high efficiency appeared first on EDN.

ABLIC upgrades battery-less water leak detection sensor

Tue, 11/11/2025 - 19:56

ABLIC Inc. upgrades its CLEAN-Boost energy-harvesting technology for the U.S. and EU markets. The battery-less drip-level water leak sensor now offers a communication range approximately 2× that of its predecessor and an expanded operating temperature range of up to 85°C, from the previous 60°C.

ABLIC said it first launched the CLEAN-Boost energy-harvesting technology in 2019 to generate, store, boost, and transform microwatt-level energy into electricity for wireless data transmission. Since that launch, the Japan-market model has earned positive evaluations from over 80 customers, and given increased inquiries from U.S. and European customers, the company obtained the necessary certifications from the U.S. Federal Communications Commission and the EU’s Conformité Européenne, confirming compliance with key standards.

CLEAN-Boost can be used in any facility where a water leak poses a potential risk. It uses microwatt energy sources to generate electricity from leaking water and transmits water signals wirelessly. The latest enhancements enable the sensor’s use in a wider range of applications and high-temperature environments, the company said.

Applications where addressing water leaks is critical include automotive parts factories with stamping processes, chemical and pharmaceutical plants, and food processing facilities as well as in aging buildings where pipes may have weakened or in high-temperature operations such as data centers and server rooms.

Application of ABLIC’s CLEAN-Boost-technology-based battery-less drip-level water leak sensor (Source: ABLIC Inc.)

ABLIC claims the water leak sensor is the industry’s first sensor capable of detecting minute drops of water. It can detect as little as three drops of water (150 μL minimum). In addition, operating without an external power source eliminates the need for major installation work or battery replacement, making it suited for retrofitting into existing infrastructures.

The water leak sensor also helps reduce environmental impact by eliminating the need to replace or dispose of a battery. For example, the sensor has been certified as a MinebeaMitsumi Group “Green Product” for outstanding contribution to the environment.

ABLIC’s CLEAN-Boost technology works by capturing and amplifying microwatt-level environmental energy previously considered too minimal to use. It combines energy storage and boosting components, designed for ultra-low power consumption. The boost circuit operates at 0.35 V for the efficient use of 1 μW of input power. It incorporates a low-power data transmission method that optimizes the timing between power generation and signal transmission, ensuring maximum efficiency and stable operation even under extremely limited power.

How the ABLIC battery-less drip-level water leak sensor works (Source: ABLIC Inc.)

The sensor features simple add-on installation for easy integration and sends wireless alerts to safeguard against catastrophic water damage.

The sensor technology is available as a wireless tag (134 × 10 × 18 mm, with the main body measuring 65 × 10 × 18 mm) or as sensor ribbons (0.5 m, 2.0 m, and 5.0 m), measuring 700 × 13 × 8 mm, 2200 × 13 × 8 mm, and 5200 × 13 × 8 mm, respectively. They can be connected up to 15 m.

The post ABLIC upgrades battery-less water leak detection sensor appeared first on EDN.

My 100-MHz VFC – the hardware version

Tue, 11/11/2025 - 15:00

“Facts are stubborn things” (John Adams, et al).

I added two 50-ohm outputs to the schematic of my published voltage-to-frequency converter (VFC) circuit (Figure 1). Then, I designed a PCB, purchased the (mostly) surface-mount components, loaded and re-flow soldered them onto the PCB, and then tested the design.

Figure 1 VFC design that operates from 100 kHz to beyond 100 MHz with a single 5.25-V supply, providing square wave outputs at 1/2 and 1/4 the main oscillator frequency.  

The hardware implementation of the circuit can be seen in Figure 2.

Figure 2 The hardware implementation of the 100-MHz VFC, built to root out the facts that can only be learned once a circuit is actually constructed.

My objective was to get the facts about the operation of the circuit. 

Theory and simulation are important, but the facts are known only after the circuit is built and tested. That is when the unintended/unexpected consequences are seen.

The circuit mostly performed as expected, but there were some significant issues that had to be addressed in order to get the circuit performing well.

Sensitivity of the v-to-f

My first concern was the high sensitivity of the circuit to minute changes in the input voltage.  The sensitivity is 100 MHz per 5 volts, i.e., 20 MHz per volt. That means a 1-mV change on the input results in a 20-kHz change in the output frequency!

So, how do you supply an input voltage that is almost totally devoid of the noise and/or ripple that would cause jitter on the oscillator signal? To deal with this problem, I used a battery supply, four alkaline batteries in series, connected to a 10-turn, 100-kΩ potentiometer to drive the input of the circuit with about 0 to 6 V. This worked quite well. I added a 10-kΩ resistor in series with the non-inverting input of U1 for protection against overvoltage.

Problems and fixes

The first unexpected problem was that the NE555 timer did not provide sufficient drive to the voltage inverter circuit and the voltage doubler circuit. This one is on me; I didn’t look carefully at the datasheet, which says it can supply a lot of output current, but at high current, the output voltage drops so much that the inverter and the doubler circuits don’t provide enough output voltage. And the LTspice model I used for simulation was a very unrealistic model. I recommend that it not be used!

I fixed this by using a 74HC14 Schmitt trigger chip to replace the NE555 timer chip. The 74HC14 provides plenty of current and voltage to drive the two circuits. I implemented the 74HC14 circuitry as an outboard attachment to the main PCB. 

I changed the output of the voltage doubler circuit to a regulated 6 V (R16 changed to 274 Ω and R18 to 3.74 kΩ, and D8, D9 changed to SD103). This allows U1 to operate with an input voltage of up to about 5.9 V. Also, I substituted a TLV9162 dual op-amp for U1/U2 because the cost of the TLV9162 is much less than that of the LT1797. 

With the correct voltages supplied to U1/U2, I began testing the circuit, and I found that the oscillator would hang at a frequency of about 2 MHz. This was caused by the paralleled Schmitt trigger inverters. One inverter would switch before the other one, which would then sink the current from the inverter that had switched to the HIGH output state, and the oscillator would stop functioning. Paralleling inverters, which are driven by a relatively slowly falling (or rising) input signal, is definitely not a viable idea!

To fix the problem, I removed U4 from the circuit and put a 22-Ω resistor in series with the output of inverter U3 to lessen the current load on it, and the oscillator operated as expected.

I made some changes to the current-to-voltage converter circuit to provide more adjustment range and to use the optimum values for the 5-V supply. I changed R8 to 3.09 kΩ, potentiometer R9 to 1 kΩ, and R13 to 2.5 kΩ.

Adjustments

There are two adjustments provided: R9 is an adjustment for the current-to-voltage converter U2, and R11 is an offset current adjustment. 

I adjusted R9 to set the oscillator frequency to 100 MHz with the input voltage set to 5.00 V, and then adjusted R11 at 2 MHz.

The percent error of the circuit increases at the lower frequencies, possibly due to diode leakage currents or nonlinear behavior of the frequency-to-voltage converter consisting of D2–D4 and C8–C11.

Test results

With the noted changes implemented, I began testing the VFC. The problem of jitter on the output signal was apparent, especially at the lower frequencies. 

I realized that ripple and noise on the 5-V supply would cause jitter on the output signal. As noted on the schematic, the oscillator frequency is a function of the supply voltage.

To avoid this problem, I once again opted to use batteries to provide the supply voltage. I used six alkaline batteries to supply about +9 V and regulated the voltage down to +5 V with an LM317T regulator and a few other components. 

This setup achieves about the minimum ripple and noise on the supply and the minimum oscillator jitter. The remaining possible sources of noise/jitter are the switching supplies for U1, the feedback voltage to U1, and the switching on and off of the counters and the inverters, which can cause noise on the +5-V supply.

The frequency versus input voltage plot is not as linear as expected, but it is pretty good over a wide range of input voltage from 50 mV to 5.00 V for a corresponding frequency range of 1.07 MHz to 103.0 MHz (Figure 3 and Figure 4). The percent error versus frequency is shown in Figure 5.

Figure 3 The frequency from 1.07 MHz to 103.0 MHz versus input voltage from 50 mV to 5.00 V.

Figure 4 The frequency (up to 2 MHz) versus input voltage when Vin < 0.1 V.

Figure 5 The percent error versus frequency.

Waveforms

Some waveforms are shown in Figure 6, Figure 7, Figure 8, and Figure 9. Most are from the divide-by-2 output because it is more visually interesting than the 3.4-ns output from the oscillator (multiply the divide-by-2 frequency by 2 to get the oscillator frequency). 

The input voltage ranges from 10 mV to 5 V to produce the 200 kHz to 100 MHz oscillator/inverter output.

Figure 6 Oscilloscope waveform with a divide-by-two output at 100 kHz.

Figure 7 Oscilloscope waveform with a divide-by-two output at 500 kHz.

Figure 8 Oscilloscope waveform with a divide-by-two output at 5 MHz.

Figure 9 Oscilloscope waveform with a divide-by-two output at 50 MHz.

Figure 10 displays the output of the oscillator/inverter at 100 MHz.  Figure 11 shows the 3.4 ns oscillator/inverter output pulse. 

Figure 10 Oscilloscope waveform with the oscillator output at 100 MHz.

Figure 11 Oscilloscope waveform with a 3.4-ns oscillator pulse.

The facts

So, here are the facts. 

The two inverters in parallel did not work in this application. This was fixed by eliminating one of them and putting a larger resistor in series with the output of the remaining one to reduce the current load on it.

The high sensitivity of the circuit to the input voltage presents a challenge in practice. Generating a sufficiently quiet input voltage is difficult.

Battery operation provides some help, but this presents its own challenges in practice. Noise on the 5-V supply is a related problem. The supply for the second divide-by-two circuit, U7, must be tightly regulated and extremely free of noise and ripple to minimize jitter on the oscillator signal.

And, as noted above, some changes in the values of several components were necessary to get acceptable operation.

Finally, more accurate voltage-versus-frequency operation at lower frequencies will require more careful engineering, if desired. I leave it to the user to work this out, if necessary.

At this point, I am satisfied with the circuit as it is (I feel that it is time to take a break!).

Some suggestions for improved results

The circuit is compromised by the challenge of making it work with a single 5-V supply. It would be less challenging if separate, well-regulated, well-filtered supplies were used for U1/U2, for example, 14 V regulated down to 11 V for the positive supply, and a negative 5 V regulated down to -2.5 V (use linear regulators for both supplies!).

The input could then range from 0 to 10 V, which would reduce the input sensitivity by a factor of two and make it easier to design quieter supplies for the input amplifier and current-to-voltage circuits, U1/U2.

At the lower frequencies, some investigation should be done to expose the causes of the nonlinearity in that frequency range, and to indicate changes that would improve the circuit operation.

Another option would be to split the operation into two ranges, such as 100 kHz to 1 MHz and 1 MHz to 100 MHz.

Final fact

The operation of the circuit is pretty impressive when the circuit is modified as suggested. I think realizing an oscillator that provides an output from 200 kHz to 113 MHz is quite a remarkable result. Thanks to the late Jim Williams [2] and to the lively Stephen Woodward [3] for leading the way to the implementation of this circuit!

Jim McLucas retired from Hewlett-Packard Company after 30 years working in production engineering and on the design and test of analog and digital circuits.

References/Related Content

  1. A simulated 100-MHz VFC
  2. 1-Hz to 100-MHz VFC features 160-dB dynamic range
  3. 100-MHz VFC with TBH current pump
  4. Take-Back-Half precision diode charge pump

The post My 100-MHz VFC – the hardware version appeared first on EDN.

Protecting precision DACs against industrial overvoltage events

Tue, 11/11/2025 - 09:30

In industrial applications using digital-to-analog converters (DACs), programmable logic controllers (PLCs) set an analog output voltage to control actuators, motors, and valves. PLCs can also regulate manufacturing parameters such as temperature, pressure, and flow.

In these environments, the DAC output may require overvoltage protection from accidental shorts to higher-voltage power supplies and other sustained high-voltage miswired connections. You can protect precision DAC outputs in two different ways, depending on whether the DAC output buffer has an external feedback pin.

Overvoltage damage

There are two potential consequences should an accidental sustained overvoltage event occur at the DAC output.

First, if an overvoltage event forces the DAC output beyond a sustainable current limit, damage may occur as the output buffer drives an excess of current. This current limit may also be reached if the output voltage is shorted to ground or to another voltage within the supply range of the DAC.

Second, electrostatic discharge (ESD) diodes connected to the supply and ground can source and sink current during sustained overvoltage events, as shown in Figure 1 and Figure 2. In many DACs, a pair of internal ESD diodes that shunt any momentary ESD current away from the device can help protect the output pin. In Figure 1, a large positive voltage causes an overvoltage event at the output and forward-biases the positive AVDD ESD diode. The VOUT pin sinks current from the overvoltage event into the positive supply.

Figure 1 Current is shunted to positive supply during a positive overvoltage event. Source: Texas Instruments

In Figure 2, the negative overvoltage sources current from the negative supply through the AVSS ESD diode to VOUT.

Figure 2 Current is sourced from the negative supply during a negative overvoltage event. Source: Texas Instruments

In Figure 1 and Figure 2, internal ESD diodes are not designed to sink or source current associated with a sustained overvoltage event, which will typically damage the ESD diodes and voltage output. Any protection should limit this current during an overvoltage event.

Overvoltage protection

While two basic components will protect precision DAC outputs from an overvoltage event, the protection topology for the DAC depends on the internal or external feedback connection for the DAC output buffer.

If the DAC output does not have an external voltage feedback pin, you can set up protection as a basic buffer using an operational amplifier (op amp) and a current protection device at its output. If the DAC has an external voltage feedback pin, then you would place the current protection device at the output of the DAC, with the op amp driving the feedback sense pin.

Let’s explore both topologies.

Figure 3 shows protection for a DAC without a feedback sense pin, with the op amp set up as a unity gain buffer. Inside the op amp's feedback loop, an eFuse opens the circuit if the op amp output current exceeds a set level.

Figure 3 Output protection for a DAC works without a feedback pin. Source: Texas Instruments

Again, if the output terminal voltage is within the supplies of the op amp, the output current is bounded by the op amp's short-circuit current limit. An output terminal driven beyond the supplies of the op amp, as in a positive or negative overvoltage, will cause the supply rails to source or sink additional current, as previously shown in Figure 1 and Figure 2.

Because the output terminal connects to the op amp’s negative input, the op amp input must have some sort of overvoltage protection. For this protection circuit, an op amp with internal overvoltage protection that extends far beyond the op amp supply voltage is selected. When using a different op amp, series resistance that limits the input current can help protect the inputs.

The circuit shown in Figure 3 will also work for a precision DAC with a feedback sense pin. The DAC feedback sense pin would simply connect to the DAC VOUT pin, using the same protection buffer circuit. If you want to use the DAC feedback to reduce errors from long output and feedback sense wire resistances, you need to use a different topology for the protection circuit.

If the DAC has an external feedback sense pin, changing the protection preserves the sense connection. In Figure 4, the eFuse connects directly to the DAC output. The eFuse opens if the DAC output current exceeds a set level. Here, the op amp acts as a unity gain buffer to drive the DAC sense feedback pin.

Figure 4 This output protection for a DAC works with a feedback pin. Source: Texas Instruments

In both topologies, shown in Figure 3 and Figure 4, the two protection devices have the same requirements. For the eFuse, the break current must be lower than the current level that might damage the device it’s protecting. For the op amp, input protection is required, as the output overvoltage may significantly exceed the rail voltage. In operation, the offset voltage must be lower than the intended error, and the bandwidth must be high enough to satisfy the system requirements.

Overvoltage protection component selection

To help you select the required components, here are the system requirements for operation and protection (a quick numerical check of the chosen parts follows the list):

  • Supply range: ±15 V
  • Sustained overvoltage protection: ±32 V
  • Current at sustained overvoltage: approximately 30 mA
  • Output protection should introduce as little error as possible, based on offset or bandwidth
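
As a quick cross-check, the sketch below tests the parts chosen later in this article (OPA206, TPS2661, TVS3301) against these requirements; it is a minimal sanity check, not a substitute for datasheet review:

```python
# Sanity check of the protection components against the system requirements
# listed above. Component figures are those quoted later in this article.
V_SUPPLY = 15.0          # V, op amp supply magnitude
V_OV_SUSTAINED = 32.0    # V, sustained overvoltage to survive
I_OV_TARGET = 0.030      # A, target current at sustained overvoltage

EFUSE_I_LIMIT = 0.032    # A, TPS2661 current-limit threshold
OPAMP_OVP_BEYOND = 40.0  # V, OPA206 input protection beyond its supplies
TVS_BREAKDOWN = 33.0     # V, TVS3301 bidirectional breakdown voltage

# The eFuse current limit sets the current at sustained overvoltage (~30 mA).
assert abs(EFUSE_I_LIMIT - I_OV_TARGET) <= 0.1 * I_OV_TARGET
# The op amp inputs must tolerate the full sustained overvoltage.
assert V_SUPPLY + OPAMP_OVP_BEYOND >= V_OV_SUSTAINED
# The TVS must not conduct at the sustained overvoltage level.
assert TVS_BREAKDOWN > V_OV_SUSTAINED
print("All three protection components meet the stated requirements.")
```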

The primary criterion for op amp selection was overvoltage protection of the inputs. For instance, the super-beta inputs of the OPA206 precision op amp have integrated overvoltage protection that extends up to ±40 V beyond the op amp supply voltage. Figure 5 shows the input bias current versus the input common-mode voltage with the OPA206 powered from ±15-V supplies. Within the ±32-V overvoltage-protection range, the input current stays below ±5 mA.

Figure 5 Input bias current for the OPA206 is shown versus the input common-mode voltage. Source: Texas Instruments

The OPA206 offset voltage is very low (typically ±4 µV at 25°C and ±55 µV from –40°C to 125°C) and the buffer contributes little error to the DAC output. When using a different op amp without integrated input overvoltage protection, adding series resistance at the inputs will limit the input current.

The TPS2661 eFuse was originally intended as a current-loop protector with input and output miswiring protection. If its output voltage exceeds the rail supplies, TPS2661 detects miswiring and cuts off the current path, restoring the current path when the output overvoltage returns below the supply.

If the output current exceeds the TPS2661's 32-mA current-limit protection, the device breaks the connection and then retests the current path for 100 ms out of every 800 ms. The device's equivalent resistance is a maximum of 12.5 Ω, which enables a high output current without large voltage headroom and footroom loss at the output.
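
Two quick back-of-envelope numbers fall out of those specifications, sketched below: the worst-case voltage drop across the eFuse at its current limit, and the fraction of time the device spends retesting the path during a sustained fault:

```python
# Back-of-envelope figures from the TPS2661 specifications quoted above.
R_ON_MAX = 12.5       # ohms, maximum equivalent resistance
I_LIMIT = 0.032       # A, current-limit threshold
T_RETEST = 0.100      # s, retest window
T_PERIOD = 0.800      # s, retry period

drop_mv = I_LIMIT * R_ON_MAX * 1000     # worst-case drop across the eFuse
retry_duty = T_RETEST / T_PERIOD        # fraction of each period spent retesting
print(f"Max drop at the current limit: {drop_mv:.0f} mV")    # 400 mV
print(f"Retest duty cycle during a fault: {retry_duty:.1%}")  # 12.5%
```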

Beyond the op amp and eFuse protection, applying an optional transient voltage suppression (TVS) diode will provide additional surge protection as long as the chosen breakdown voltage is higher than any sustained overvoltage. If the breakdown voltage is less than the sustained overvoltage, then an overvoltage can damage the TVS diode. In this circuit, the expected sustained overvoltage is ±32 V, with an optional TVS3301 device that has a bidirectional 33-V breakdown for surge protection.

Adding another TVS3301 across the ±15-V supplies is a further option. An overvoltage on the terminal will direct fault current into the power supplies; if a supply cannot sink that current or is not fast enough to respond, the TVS diode absorbs the excess as the overvoltage occurs.

Constructed circuit: Precision DAC without a feedback sense pin

You can build and test the overvoltage protection buffer from Figure 3 with the DAC81416-08 evaluation module (EVM). This multichannel DAC doesn’t have an external feedback sense pin. Figure 6 shows the constructed protection buffer tested on one of the DAC channels.

Figure 6 The constructed overvoltage protection circuit employs the DAC81416-08 evaluation module. Source: Texas Instruments

Ramping the output of the DAC from –10 V to +10 V drives the buffer input. Figure 7 shows that the measured offset of the buffer is less than 10 µV over the full range.

Figure 7 Protection buffer output offset error is shown versus buffer input voltage. Source: Texas Instruments

Connecting the output to a variable supply tests the output overvoltage behavior: the supply drives the output voltage while the output current is recorded. The measurement starts at –32 V, increases to +32 V, then steps back down from +32 V to –32 V. Figure 8 shows the output current as the output is driven into overvoltage and as it recovers.

Figure 8 Protection buffer output current is shown versus buffer output overvoltage. Source: Texas Instruments

The measurements show hysteresis in both the positive and negative overvoltage response of the protection buffer, which comes from the extra voltage across the series resistance at the output of the TPS26611. During normal operation (without an overvoltage), the TPS26611 current path turns off when the output is driven above 17.2 V, at which point the remaining output current flows through the OPA206's protected input. As the output voltage decreases, the TPS26611 current path conducts again when the output drops below 15 V.

When driving the output to a negative overvoltage, the current path turns off at –17.5 V and turns on again when the output returns above –15 V.
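
The trip and recovery points above behave like a comparator with hysteresis. Here is a minimal model of the current path's state using the reported thresholds:

```python
# Minimal model of the measured current-path hysteresis (Figure 8): the path
# opens above +17.2 V or below -17.5 V and re-closes only inside about +/-15 V.
def path_conducts(v_out: float, was_conducting: bool) -> bool:
    """Return True if the TPS26611 current path conducts at output voltage v_out."""
    if was_conducting:
        return -17.5 <= v_out <= 17.2   # stays closed until a trip point is crossed
    return -15.0 <= v_out <= 15.0       # once open, re-closes only inside +/-15 V

state = True
for v in (0.0, 16.0, 18.0, 16.0, 14.5, -16.0, -18.0, -14.0):
    state = path_conducts(v, state)
    print(f"Vout = {v:+6.1f} V -> path {'closed' if state else 'open'}")
```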

Constructed circuit: Protection for a DAC with output feedback

Like the previous circuit, you can build and test the overvoltage protection from Figure 4. This test attaches the protection circuitry to the output of a DAC with an external feedback sense pin, using the DAC8760 EVM to check for an output overvoltage event. As shown in Figure 9, a 1-kΩ resistor placed between VOUT and +VSENSE prevents the DAC's output buffer feedback loop from breaking if the feedback sense signal is cut.

Figure 9 This constructed overvoltage protection circuit is used with the DAC8760 evaluation module. Source: Texas Instruments

Ramping the output of the DAC from –10 V to +10 V drives the feedback buffer input. As shown in Figure 10, the offset of the feedback to +VSENSE is again <10 µV over the full range.

Figure 10 Feedback buffer offset error is shown versus buffer input voltage. Source: Texas Instruments

The DAC is again set to 0 V, with the output connected to a variable supply to check the output current against output overvoltage. Figure 11 shows the output current as the output voltage increases from –32 V to +32 V and decreases to –32 V.

Figure 11 Protection buffer output current is shown versus buffer output overvoltage. Source: Texas Instruments

As before, there is current path hysteresis. The TPS26611 current path shuts off when the output goes above 16.5 V and turns on when the output returns to about 15 V. For the negative overvoltage, the current path turns off when the output is below –16.8 V and turns on again when the output returns above –15 V.

Two overvoltage protection topologies

Industrial control applications for analog outputs require specialized protection from harsh conditions. This article presented two topologies for precision DAC protection against sustained overvoltage events:

  • DAC without external feedback: Protecting the output from an overvoltage by using an op amp buffer with an eFuse in the op amp output.
  • DAC with external feedback: Protecting the output from overvoltage by using an eFuse to limit the DAC output current and with an op amp acting as a unity gain buffer for sense feedback.

In both cases, the tested circuits show a limited offset error (<10 µV) through the range of operation (±10-V output) and protection from sustained overvoltage of ±32 V.

Joseph Wu is applications engineer for digital-to-analog converters (DACs) at Texas Instruments.

Art Kay is applications engineer for precision signal conditioning products at Texas Instruments.

Related Content

The post Protecting precision DACs against industrial overvoltage events appeared first on EDN.

EcoFlow’s DELTA 3 Plus and Smart Extra Battery: Product line impermanence curiosity

Mon, 11/10/2025 - 16:13

Earlier this summer, I detailed my travails struggling with (and ultimately recovering from) buggy firmware updates I’d been “pushed” on my combo of EcoFlow’s DELTA 2 portable power station and its Smart Extra Battery supplemental capacity companion:

Toward the end of that earlier writeup, I mentioned that I’d subsequently been offered a further firmware update, which (for, I think, understandable reasons) I was going to hold off on tackling for a while, until I saw whether other, braver souls had encountered issues of their own with it:

DELTA 2 firmware update success(es)

In late August, I eventually decided to take the upgrade plunge, after experiencing the latest in an occasional but ongoing series of connectivity glitches. Although I could still communicate with the device “stack” via Bluetooth, its Wi-Fi connection had dropped and needed to be reinstated within the app. The firmware update’s documentation indicated it’d deal with this issue:

The upgrade attempt was thankfully successful this time, although candidly, I can’t say that the Wi-Fi connectivity is noticeably more robust now than it had been previously:

I was then immediately offered another firmware upgrade, which I’d heard on Facebook’s “EcoFlow Official Club“ group had just been released. Tempting fate, I plunged ahead again:

Thankfully, this one completed uneventfully as well:

As did another offered to me in early September (gotta love that descriptive “Fixes some known issues” phrasing, eh? I’m being sarcastic, if it wasn’t already obvious…):

There have been no more firmware upgrades in the subsequent ~1.5 months. More generally, since the DELTA 2 line is mature and EcoFlow has moved on to the DELTA 3 series, I’m hopeful for ongoing software stability (accompanied by no more functional misbehavior) at this point.

Initial impressions of DELTA 3 devices

Speaking of which, what about the DELTA 3 Plus and its accompanying Smart Extra Battery, mentioned at the end of my earlier write-up, which EcoFlow support had sent as replacements for the DELTA 2-generation predecessors prior to my successful resurrection of them?

Here again is what the new DELTA 3 stack (left) looks like next to its DELTA 2 precursors (right):

The stored-charge capacity of the DELTA 2 is 1,024 Wh, which matches that of the DELTA 3 Plus. I’d mentioned in my earlier DELTA 2 coverage that the DELTA 3 Plus was based on newer, denser (but still LiFePO₄, aka LFP) 40135 batteries. Why, then, do the two portable power stations have nearly the same sizes? The answer, of course, is that there’s more than just batteries inside ‘em:

The (presumed) varying battery generation-induced size differential is much more evident with the two generations of Smart Extra Batteries…which are (essentially) just batteries.

Despite their 1,024-Wh capacity commonality, the DELTA 3 version (again on top of the stack at left in the earlier photo) has dimensions of 15.7 × 8 × 7.8 in. (398 × 200 × 198 mm) and weighs 21.1 lbs. (9.6 kg).

Its DELTA 2-generation predecessor at top right weighs essentially the same (21 lbs./9.5 kg) but is more than 40% taller (15.7 × 8.3 × 11.1 in./400 × 211 × 281 mm).
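
Running the published numbers makes the generational difference concrete; here is a quick sketch using the capacities, weights, and dimensions quoted above:

```python
# Gravimetric and volumetric energy density from the figures quoted above.
CAPACITY_WH = 1024.0

batteries = {
    "DELTA 3 Smart Extra Battery": (9.6, (398, 200, 198)),   # kg, mm
    "DELTA 2 Smart Extra Battery": (9.5, (400, 211, 281)),
}
for name, (kg, (l_mm, w_mm, h_mm)) in batteries.items():
    liters = l_mm * w_mm * h_mm / 1e6
    print(f"{name}: {CAPACITY_WH / kg:.0f} Wh/kg, {CAPACITY_WH / liters:.0f} Wh/L")
# ~107 Wh/kg for both, but ~65 Wh/L vs. ~43 Wh/L: the newer 40135 cells buy
# volume, not weight, consistent with both packs still being LFP chemistry.
```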

By the way, back when I was fearing that the base DELTA 2 unit was “toast” but hoping that its Smart Extra Battery might still be saved, I confirmed EcoFlow’s claim that the DELTA 3 Plus worked not only with multiple capacity variants of the DELTA 3-generation Smart Extra Battery, for capacity expansion up to 5 kWh, but also with my prior-generation storage capacity expansion solution:

Charging (and other) enhancements

Aside from the height-therefore-volume differential, the most visually obvious other difference between the two portable power stations is the relocation of AC power outlets to the front panel in the DELTA 3 Plus case. Other generational improvements include:

  • Faster sub-10-ms switchover from wall outlet-sourced to inverter-generated AC for more robust (albeit not comprehensive…no integrated surge protection support, for example) UPS functional emulation
  • Improved airflow, leading to claimed 30-dB noise levels in normal operation
  • A recharge cycle count boosted to 4,000, courtesy of the newer-generation batteries
  • Inverter-generated AC output power up to 3600 W (X-Boost surge)
  • Higher power, albeit fewer, USB-A ports (two, each 36 W, compared to two 12 W and two 18 W)
  • Higher power USB-C ports (two, each 140 W, versus two 100 W)
  • And faster charging (sub-1-hour to 100%), enabled by factors such as:

Speaking of solar, I haven’t forgotten about the two 220W panels:

And a more recently acquired 400W one:

For which I’m admittedly belated in translating testing aspiration into reality. The issue at the moment isn’t snow on the deck, although that’ll be back soon enough. It’s high winds:

That said, my procrastination has had at least one upside: a larger number of interesting options (and combinations) to evaluate than before. Now, I can tether either the two parallel-connected 220-W panels or the single 400-W one to the DELTA 2’s single XT60i input.

And for the DELTA 3 Plus, thanks to the aforementioned dual XT60i inputs and 1000-W peak input support, I can hook up all three panels simultaneously, although doing so will likely take up a notable chunk of my deck real estate in the process. Please remain on standby for observations and results to come!
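
As a quick sanity check (aggregate wattage only, since per-input limits aren't quoted here), the three-panel hookup stays comfortably within the DELTA 3 Plus's stated 1,000-W peak solar input:

```python
# Aggregate solar-input check for the DELTA 3 Plus (1,000-W peak, dual XT60i).
panels_w = {"two 220-W panels in parallel": 440, "single 400-W panel": 400}

total_w = sum(panels_w.values())
print(f"Combined panel rating: {total_w} W "
      f"({'within' if total_w <= 1000 else 'exceeds'} the 1,000-W peak input)")
# 840 W total, so all three panels fit under the stated peak input with margin,
# even before real-world derating for panel orientation and weather.
```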

More on charging and firmware upgrading

Two other comments to note, in closing:

Speaking of the XT60i input, how do I charge the DELTA 3 Plus (or the DELTA 2, for that matter) in-vehicle using EcoFlow’s 800W Alternator Charger (which, yes, I realize I’m also overdue in installing and then testing!):

Specifically, when the portable power station is simultaneously connected to its Smart Extended Battery companion? Ordinarily, the Alternator Charger would tether to the portable power station over the XT150 connector-equipped cable that comes bundled with the former:

But, in this particular case, the portable power station’s XT150 interface is already in use (and for that matter, isn’t even an available option for lower-end devices such as my RIVER 2):

The trick is to instead use one of the two orange-color XT60i connectors also shown at the bottom left of the DELTA 3 stack setup photo.

EcoFlow alternatively bundles an XT60 connector-equipped cable with the 500-W version of the Alternator Charger, intended for use with smaller vehicles and/or more modest portable power stations, but that same cable is also available for standalone purchase:

It’ll be lower power (therefore slower) than the XT150 alternative, but it’s better than nothing! And it’ll recharge both the portable power station and (via the separate XT150-to-XT150 cable) the tethered Smart Extended Battery. Just be sure to secure the stack so it doesn’t tip over!

Also, regarding firmware upgrades, I’d been pleasantly surprised not to receive any DELTA 3 Plus update notifications since late April, when it and its Smart Extra Battery companion had come into my possession. Software stability nirvana ended in late August, alas, and since the update documentation specifically mentioned a “Better experience when using the device with an extra battery,” I decided to proceed. Unfortunately, my first several upgrade attempts terminated prematurely, at random percentage-complete points, after slower-than-usual progress, and with worrying failure status messages:

Eventually, I crossed my fingers and followed the guidance to restart the device, a process which, I eventually realized after several frustrating, unsuccessful initial attempts, can only be accomplished with the portable power station disconnected from AC. The device was stuck in a partially updated state post-reboot, albeit thankfully still accessible over Bluetooth:

And doubly thankfully, this time the upgrade completed successfully to both the DELTA 3 Plus:

And its tethered Smart Extra Battery:

Phew! As before with the DELTA 2, I think I’ll delay my next update (which hasn’t been offered yet) until I’ve waited an appropriate amount of time and checked in with the user community for feedback on their experiences. And with that, I await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post EcoFlow’s DELTA 3 Plus and Smart Extra Battery: Product line impermanence curiosity appeared first on EDN.

Mastering multi-physics effects in 3D IC design

Mon, 11/10/2025 - 12:16

The semiconductor industry is at a pivotal moment as the limits of Moore’s Law motivate a transition to three-dimensional integrated circuit (3D IC) technology. By vertically integrating multiple chiplets, 3D ICs enable advances in performance, functionality, and power efficiency. However, stacking dies introduces layers of complexity driven by multi-physics interactions—thermal, mechanical, and electrical—which must be addressed at the start of design.

This shift from two-dimensional (2D) system-on-chips (SoC) to stacked 3D ICs fundamentally alters the design environment. 2D SoCs benefit from well-established process design kits (PDKs) and predictable workflows.

Figure 1 The 3D IC technology takes IC design to another dimension. Source: Siemens EDA

In contrast, 3D integration often means combining heterogeneous dies that use different process nodes and new interconnection technologies, presenting additional variables throughout the design and verification flow. Multi-physics phenomena are no longer isolated concerns—they are integral to the design’s overall success.

Multi-physics: a new design imperative

The vertical structure of 3D ICs—interconnected by through-silicon vias and micro-bumps and enclosed in advanced packaging materials—creates a tightly coupled environment where heat dissipation, mechanical integrity, and electrical behavior interact in complex ways.

For 2D chips, thermal and mechanical checks were often deferred until late in the cycle, with manageable impact. For 3D ICs, postponing these analyses risks costly redesigns or performance and reliability failures.

Traditional SoC design often relies on high-level RTL descriptions, where many physical optimizations are fixed early and are hard to change later. On the other hand, 3D IC’s complexity and physical coupling require earlier feedback from physics-driven analysis during RTL and floorplanning, enabling designers to make informed choices before costly constraints are locked in.

A chiplet may operate within specifications in isolation, yet face degraded reliability and performance once subjected to the real-world conditions of a 3D stack. Only early, predictive, multi-physics analysis can reveal—and enable cost-effective mitigation of—these risks.

Continuous multi-physics evaluation must begin at floorplanning and continue through every design iteration. Each change to layout, interfaces, or materials can introduce new thermal or mechanical stress concerns, which must be re-evaluated to maintain system reliability and yield.

Moving IC design to the system-level

3D ICs require close coordination among specialized teams: die designers, interposer experts, packaging engineers, and, increasingly, electronic system architects and RTL developers. Each group has its own toolchains and data standards, often with differing net naming conventions, component orientations, and functional definitions, leading to communication and integration challenges.

Adding to the internal challenges, 3D IC design often involves chiplets from multiple vendors, foundries and OSAT providers, each with different methodologies and data formats. While using off-the-shelf chiplets offers flexibility and accelerates development, integration can expose previously hidden multi-physics issues. A chiplet that works in isolation may fail specification after stacking, emphasizing the need for tighter industry collaboration.

Addressing these disparities requires a system-level owner, supported by comprehensive EDA platforms that unify methodologies and aggregate data across domains. This ensures consistency and reduces errors inherent to siloed workflows. For EDA vendors, developing inclusive environments and tools that enable such collaboration is essential.

Inter-company collaboration now also depends on more robust data exchange tools and methodologies. Here, EDA vendors play a central role by providing platforms and standards for seamless communication and data aggregation between fabless houses, foundries, and OSATs.

At the industry level, new standards and 3D IC design kits—such as those developed by the CDX working group and industry partners—are emerging to address these challenges, forging a common language for describing 3D IC components, interfaces, and package architectures. These standards are vital for enabling reliable data exchanges and integration across diverse teams and supply chain partners.

Figure 2 Here is a view of a chiplet design kit (CDK) as per JEDEC JEP30 part model. Source: Siemens EDA

Programs such as TSMC’s 3Dblox initiative provide upfront placement and interconnection definitions, reducing ambiguity and fostering tool interoperability.

Digital twin and predictive multi-physics

The digital twin concept extends multi-physics analysis throughout the entire product lifecycle. Maintaining an accurate digital representation—from transistor-level detail up to full system integration—enables predictive simulation and optimization, accounting for interactions down to the package, board, or even system level. By transferring multi-physics results between levels of abstraction, teams can verify that chiplet behavior under thermal and mechanical loads accurately predicts final product reliability.

Figure 3 A digital twin extends multi-physics analysis throughout the entire product lifecycle. Source: Siemens EDA

For 3D ICs, chiplet electrical models must be augmented by multi-physics data captured from stack-level simulations. Back-annotating temperature and stress outcomes from package-level analysis into chiplet netlists provides the foundation for more accurate system-level electrical simulations. This feedback loop is becoming a critical part of sign-off, ensuring that each chiplet performs within its operational window in the assembled system.

Keeping it cool

Thermal management is the single most important consideration for die-to-die interfaces in 3D ICs. The vertical proximity of active dies can lead to rapid heat accumulation and risks, such as thermal runaway, where ongoing heat generation further degrades electrical performance and creates mechanical stress from varying thermal expansion rates in different materials. Differential expansion between materials can even warp dies and threaten the reliability of interconnects.

To enable predictive design, the industry needs standardized “multi-physics Liberty files” that define temperature and stress dependencies of chiplet blocks, akin to the Liberty files used for place-and-route in 2D design. These files will allow designers to evaluate whether a chiplet within the stack stays within its safe operating range under expected thermal conditions.
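
No such standard exists yet, so the sketch below is purely a hypothetical illustration of what a multi-physics Liberty entry might capture; the structure, field names, and linear temperature-derating model are all assumptions:

```python
# Hypothetical illustration of a "multi-physics Liberty" entry for one chiplet
# block; the fields and the linear derating model are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    temp_c: float      # local temperature from stack-level thermal simulation
    stress_mpa: float  # mechanical stress back-annotated from package analysis

@dataclass
class ChipletBlockModel:
    name: str
    max_temp_c: float      # safe-operating-range limits
    max_stress_mpa: float
    delay_tempco: float    # assumed fractional delay increase per degree C

    def within_safe_range(self, op: OperatingPoint) -> bool:
        return op.temp_c <= self.max_temp_c and op.stress_mpa <= self.max_stress_mpa

    def delay_derate(self, op: OperatingPoint, t_ref_c: float = 25.0) -> float:
        return 1.0 + self.delay_tempco * (op.temp_c - t_ref_c)

block = ChipletBlockModel("serdes_phy", max_temp_c=110.0,
                          max_stress_mpa=150.0, delay_tempco=0.001)
op = OperatingPoint(temp_c=95.0, stress_mpa=120.0)
print(block.within_safe_range(op), f"delay derate: {block.delay_derate(op):.3f}x")
```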

Multi-physics analysis must also support back-annotation of temperature and stress information to individual chiplets, ensuring electrical models reflect real operating environments. While toolchains for this process are evolving, the trajectory is clear: comprehensive, physics-aware simulation and data exchange will be integral to sign-off for 3D IC design, ensuring reliable operation and optimal system performance.

Shaping the future of 3D IC design

The journey into 3D IC technology marks a transformative period for the semiconductor industry, fundamentally reshaping how complex systems are designed, verified, and manufactured. It is a genuine leap forward for semiconductor innovation.

Its success hinges on predictive, early multi-physics analysis and collaboration across the supply chain. Establishing common standards, enabling system-level optimization, and adopting the digital twin concept will drive superior performance, reliability, and time-to-market.

Pioneers in 3D IC design—across EDA, semiconductor, and system developers—are moving toward unified, system-level platforms with a “single cockpit” environment that lets designers iterate and optimize across different types of multi-physics analyses.

Figure 4 The Innovator3D IC solution provides the single, integrated cockpit 3D IC designers need. Source: Siemens EDA

With continued advances in EDA tools, methodologies and collaboration, the semiconductor industry can unlock the full promise of 3D integration, delivering the next generation of electronic systems that push the boundaries of capability, efficiency, and innovation.

Todd Burkholder is a senior editor at Siemens DISW. For over 30 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of high-tech and EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.

Tarek Ramadan is applications engineering manager for the 3D-IC Technical Solutions Sales (TSS) organization at Siemens EDA. He drives EDA solutions for 2.5D-IC, 3D-IC, and wafer level packaging applications. Prior to that, Tarek was a technical product manager in the Siemens Calibre design solutions organization. Ramadan holds BS and MS degrees in electrical engineering from Ain Shams University, Cairo, Egypt.

John Ferguson brings over 25 years of experience at Siemens EDA to his role as senior director of product management for Calibre 3D IC solutions. With a background in physics and deep expertise in design rule checking (DRC), John has been at the forefront of 3D IC technology development for more than 15 years, witnessing its evolution from early experimental approaches to today’s production-ready solutions.

Related Content

The post Mastering multi-physics effects in 3D IC design appeared first on EDN.

Power pole collapse

Fri, 11/07/2025 - 17:52

Two or three days ago, as of this writing, there was a power pole collapse in Bellmore, NY, at the intersection of Bellmore Avenue and Sunrise Highway. The collapsed pole is seen in Figure 1, lying across two westbound lanes of Sunrise Highway. The traffic lights are dark.

Figure 1 Collapsed power pole in Bellmore, NY, temporarily knocking out power.

Going to Google Maps, I took a close look at a photograph of the collapsed pole taken three months earlier, back in July, when the pole was still standing (Figure 2).

Figure 2 The leaning power pole and its damaged wood in July 2025.

The wood at the base of the leaning power pole was clearly, obviously, and indisputably in a state of severe decrepitude.

An older picture of this same pole on Google Maps, taken in December 2022 (Figure 3), shows the pole to have been damaged even at that time. Clearly, the local power utility had, through inexcusable neglect, left that damage unaddressed, thus allowing the collapse to happen.

Figure 3 Google Maps image of a power pole showing damage as early as December 2022.

Sunrise Highway is an extremely busy roadway. It is only by sheer blind luck that nobody was injured or killed by this event.

A replacement pole was later installed where the old pole had fallen. The new pole stands exactly vertical, but how many other power poles out there are in as unsafe a condition as the fallen pole in Bellmore was?

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post Power pole collapse appeared first on EDN.

System-level tool streamlines quantum design workflows

Fri, 11/07/2025 - 03:29

Quantum System Analysis software, part of Keysight’s Quantum EDA platform, simulates quantum architectures at the system level. It unifies electromagnetic, circuit, and quantum dynamics domains to enable early design validation in a single environment.

By modeling the full quantum workflow—from initial design through system-level experiments—the software reduces reliance on costly cryogenic testing and shortens time-to-validation. It includes tools for optimizing dilution fridge input lines to manage thermal noise and estimate qubit temperatures. A time dynamics simulator models quantum system evolution using Hamiltonians derived from EM or circuit simulations, accurately emulating experiments such as Rabi and Ramsey pulsing to reveal qubit behavior.

Quantum System Analysis supports superconducting qubit platforms and can be extended to other modalities such as spin qubits. It complements Quantum Layout, QuantumPro EM, and Quantum Circuit Simulation tools. 

Keysight Technologies 

The post System-level tool streamlines quantum design workflows appeared first on EDN.

CAN FD transceiver boosts space data rates

Fri, 11/07/2025 - 03:29

Microchip’s ATA6571RT radiation-tolerant CAN FD transceiver supports data rates to 5 Mbps for high-reliability space communications. Well-suited for satellites and spacecraft, it withstands a total ionizing dose of 50 krad(Si) and offers single-event latch-up immunity up to 78 MeV·cm²/mg at +125 °C.

Unlike conventional CAN transceivers limited to 1 Mbps, the ATA6571RT handles CAN FD frames with payloads up to 64 bytes, reducing bus load and improving throughput. It ensures robust, efficient data transmission under harsh space conditions, while backward compatibility with classic CAN enables an easy upgrade path for existing systems.

A cyclic redundancy check (CRC) algorithm enhances error detection and reliability in safety-critical applications. The transceiver also delivers improved EMC and ESD performance, along with low current consumption in sleep and standby modes while retaining full wake-up capability. It interfaces directly with 3-V to 3.6-V microcontrollers through the VIO pin.

The ATA6571RT transceiver costs $210 each in 10-unit quantities. A timeline for availability was not provided at the time of this announcement.

ATA6571RT product page 

Microchip Technology 

The post CAN FD transceiver boosts space data rates appeared first on EDN.

Reconfigurable modules ease analog design

Fri, 11/07/2025 - 03:28

Chameleon adaptive analog modules from Okika provide preprogrammed, ready-to-use analog functions in a compact 14×11-mm form factor. Each module employs the company’s FlexAnalog field-programmable analog array (FPAA) architecture—a reconfigurable matrix with over 40 configurable analog modules (CAMs), including filters, amplifiers, and oscillators. Nonvolatile memory and reconfiguration circuitry are integrated on the module.

Chameleon simplifies analog designs that need flexibility without the complexity of firmware or digital control. Modules function as fixed analog blocks straight out of the box—no microcontroller or firmware required. Configurations can be reprogrammed in-system or using any 3.3-V SPI EEPROM programmer.

Anadigm Designer 2 software provides parameter-tuning and filter-design tools for simulating and verifying Chameleon’s performance. Typical applications include programmable active filters, sensor signal conditioning and linearization, and industrial automation and adaptive control, as well as research and prototyping.

Chameleon extends the FlexAnalog platform into an application-ready design that adapts easily. For pricing and availability, contact sales@okikadevices.com.

Okika Devices 

The post Reconfigurable modules ease analog design appeared first on EDN.

Vertical GaN advances efficiency and power density

Fri, 11/07/2025 - 03:28

onsemi has developed power semiconductors based on a vertical GaN (vGaN) architecture that improves efficiency, power density, and ruggedness. These GaN-on-GaN devices conduct current vertically through the semiconductor, supporting higher operating voltages and faster switching frequencies.

Most commercially available GaN devices are built on silicon or sapphire substrates, which conduct current laterally. onsemi’s GaN-on-GaN technology enables vertical current flow in a monolithic die, handling voltages up to and beyond 1200 V while delivering higher power density, better thermal stability, and robust performance under extreme conditions. Compared with lateral GaN semiconductors, vGaN devices are roughly three times smaller.

These advantages translate to significant system-level benefits. High-end power systems using vGaN can cut energy and heat losses by nearly 50%, while reducing size and weight. The technology enables smaller, lighter, and more efficient systems for AI data centers, electric vehicles, and other electrification applications.

onsemi is now sampling 700-V and 1200-V vGaN devices to early access customers. For additional information about vertical GaN, click here.

onsemi

The post Vertical GaN advances efficiency and power density appeared first on EDN.

SoC delivers dual-mode Bluetooth for edge devices

Fri, 11/07/2025 - 03:28

Ambiq’s Apollo510D Lite SoC provides both Bluetooth Classic and BLE 5.4 connectivity, enabling always-on intelligence at the edge. It is powered by a 32-bit Arm Cortex-M55 processor running at up to 250 MHz with Helium vector processing and Ambiq’s turboSPOT dynamic scaling. A dedicated Cortex-M4F network coprocessor operating at up to 96 MHz handles wireless and sensor-fusion tasks.

According to Ambiq, its Subthreshold Power Optimized Technology (SPOT) delivers 16× faster performance and up to 30× better AI energy efficiency than comparable M4- or M33-based devices. The SoC’s BLE 5.4 radio subsystem provides +14 dBm transmit power, while dual-mode capability supports low-power audio streaming and backward compatibility with Classic Bluetooth.

The Apollo510D Lite integrates 2 MB of RAM and 2 MB of nonvolatile memory with dedicated instruction/data caches for faster execution. It also includes secureSPOT 3.0 and Arm TrustZone to enable secure boot, firmware updates, and data protection across connected devices.

Along with the Apollo510D Lite (dual-mode Bluetooth), Ambiq’s lineup includes the Apollo510 Lite (no BLE radio) and the Apollo510B Lite (BLE-only). The Apollo510 Lite series is sampling now, with volume production expected in Q1 2026.

Apollo510 Lite product page 

Ambiq Micro 

The post SoC delivers dual-mode Bluetooth for edge devices appeared first on EDN.

Dual-range motion sensor simplifies IIoT system designs

Thu, 11/06/2025 - 21:53
STMicroelectronics' ISM6HG256X three-in-one motion sensor.

STMicroelectronics debuts the tiny ISM6HG256X three-in-one motion sensor in a 2.5 × 3-mm package for data-hungry industrial IoT (IIoT) systems, while also supporting edge AI applications. The IMU combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement, detecting everything from subtle motion and vibration to severe shocks.

“By integrating an accelerometer with dual full-scale ranges, it eliminates the need for multiple sensors, simplifying system design and reducing overall complexity,” ST said.

The ISM6HG256X is suited for IIoT applications such as asset tracking, worker safety wearables, condition monitoring, robotics, factory automation, and black box event recording.

In addition, the embedded edge processing and self-configurability support real-time event detection and context-adaptive sensing, which are needed for asset tracking sensor nodes, wearable safety devices, continuous industrial equipment monitoring, and automated factory systems.

STMicroelectronics' ISM6HG256X three-in-one motion sensor. (Source: STMicroelectronics)

Key features of the MEMS motion sensor are the unique machine-learning core and finite state machine, together with adaptive self-configuration and sensor fusion low power (SFLP). In addition, thanks to the SFLP algorithm, 3D orientation tracking is also possible with a few µA of current consumption, according to ST.

These features are designed to bring edge AI directly into the sensor to autonomously classify detected events, which supports real-time, low-latency performance, and ultra-low system power consumption.

The ISM6HG256X is available now in a surface-mount package that can withstand harsh industrial environments from -40°C to 105°C. Pricing starts at $4.27 for orders of 1,000 pieces from the eSTore and through distributors. It is part of ST’s longevity program, ensuring long-term availability of critical components for at least 10 years.

Also available to help with development are the new X-NUCLEO-IKS5A1 industrial expansion board, the MEMS Studio design environment, and the X-CUBE-MEMS1 software libraries. These tools help implement functions such as high-g and low-g fusion, sensor fusion, context awareness, asset tracking, and calibration.

The ISM6HG256X will be showcased in a dedicated STM32 Summit Tech Dive, “From data to insight: build intelligent, low-power IoT solutions with ST smart sensors and STM32,” on November 20.

The post Dual-range motion sensor simplifies IIoT system designs appeared first on EDN.

LIN motor driver improves EV AC applications

Thu, 11/06/2025 - 21:35
Melexis MLX81350 LIN motor driver.

As precise control of cabin airflow and temperature becomes more critical in vehicles to enhance passenger comfort as well as to support advanced thermal management systems, Melexis introduces the MLX81350 LIN motor driver for air conditioning (AC) flaps and automated air vents in electric vehicles (EVs). The MLX81350 delivers a balanced combination of performance, system integration, and cost efficiency to meet these requirements.

The fourth-generation automotive LIN motor driver, built on high-voltage silicon-on-insulator technology, delivers up to 5 W (0.5 A) per motor and provides quiet and efficient motor operation for air conditioning flap motors and electronic air vents.

Melexis MLX81350 LIN motor driver. (Source: Melexis)

In addition to flash programmability, Melexis said the MLX81350 offers high robustness and function density while reducing bill-of-materials complexity. It integrates both analog and digital circuitry, providing a single-chip solution that is fully compliant with industry-standard LIN 2.x/SAE J2602 and ISO 17987-4 specifications for LIN slave nodes.

The MLX81350 features a new software architecture that enhances performance and efficiency over the previous generation. This enhancement includes improved stall detection and the addition of sensorless, closed-loop field-oriented control. This enables smoother motor operation, lower current consumption, and reduced acoustic noise to better support automotive HVAC and thermal management applications, Melexis said.

However, the MLX81350 still maintains pin-to-pin compatibility with its predecessors for easier migration with existing designs.

The LIN motor driver offers numerous peripherals to support advanced motor control and system integration, including a configurable RC clock (24–40 MHz), four general-purpose I/Os (digital and analog), one high-voltage input, five 16-bit motor PWM timers, two 16-bit general timers, and a 13-bit ADC with <1.2-µs conversion time across multiple channels, as well as UART, SPI, and I²C master or slave interfaces. The LIN interface enables seamless communication within vehicle networks and provides built-in protection and diagnostic features, including overcurrent, overvoltage, and temperature shutdown, to ensure safe and reliable operation in demanding automotive environments.

The MLX81350 is designed according to ASIL B (ISO 26262) and offers flexible wake-up options via LIN, external pins, or an internal wake-up timer. Other features include a low standby current consumption (25 µA typ.; 50 µA max.) and internal voltage regulators that allow direct powering from the 12-V battery, supporting an operating voltage range of 5.5 V to 28 V.

The MLX81350 is available now. The automotive LIN motor driver is offered in SO-8 EP and QFN-24 packages.

The post LIN motor driver improves EV AC applications appeared first on EDN.

OKW’s plastic enclosures add new custom features

Thu, 11/06/2025 - 21:22
OKW's plastic enclosures.

OKW can now supply its plastic enclosures with bespoke internal metal brackets and mounting plates for displays and other large components. The company’s METCASE metal enclosures division designs and manufactures the custom aluminum parts in-house.

OKW's plastic enclosures. (Source: OKW Enclosures Inc.)

One recent project of this type involved OKW’s CARRYTEC handheld enclosures. Two brackets fitted to the lid allowed a display to be flush mounted; a self-adhesive label covered the join between screen and case. Another mounting plate, fitted in the base, was designed to support a power supply.

Custom brackets and supports can be configured to fit existing PCB pillars in OKW’s standard plastic enclosures. Electronic components can then be installed on the brackets’ standoffs.

CARRYTEC (IP 54 optional) is ideal for medical and laboratory electronics, test/measurement, communications, mobile terminals, data collection, energy management, sensors, Industry 4.0, machine building, construction, agriculture and forestry.

The enclosures feature a robust integrated handle with a soft padded insert. They can accommodate screens from 8.4″ to 13.4″. Interfaces are protected by inset areas on the underside. A 5 × AA battery compartment can also be fitted (machining is required).

These housings can be specified in off-white (RAL 9002) ABS (UL 94 HB) or UV-stable lava ASA+PC (UL 94 V-0), in sizes S (8.74″ × 8.07″ × 3.15″), M (10.63″ × 9.72″ × 1.65/3.58″), and L (13.70″ × 11.93″ × 4.61″).

In addition to the custom metal brackets and mounting plates, other customizing services include machining, lacquering, printing, laser marking, decor foils, RFI/EMI shielding, and installation and assembly of accessories.

For more information, view the OKW website: https://www.okwenclosures.com/en/news/blog/BLG2510-metal-brackets-for-plastic-enclosures.htm

The post OKW’s plastic enclosures add new custom features appeared first on EDN.
