EDN Network

Voice of the Engineer

How to design a digital-controlled PFC, Part 1

Shifting from analog to digital control

An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:

  • Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
  • Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
  • Force the input current to follow the input voltage such that the electronics load appears to be a pure resistor with a current control loop.

Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.

Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.

Switching from analog control to digital control is not easy, however. You will face new challenges because continuous signals are now represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip; you must write firmware to implement the control algorithms.

Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.

A digital-controlled PFC system 

Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.

Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments

Q1 and Q2 are high-frequency switches. Based on the VAC polarity, Q1 and Q2 alternately work as the PFC boost switch or the synchronous switch.

At a positive AC cycle (where the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signals for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.

At a negative AC cycle (where the AC neutral is higher than line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signals for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.

Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:

  • An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
  • A firmware-based average current-mode controller.
  • A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments

I’ll introduce these function blocks one by one.

The ADC

An ADC is the fundamental element for an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:

Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:
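The equation images are not reproduced here, but the relationships are straightforward scalings by the code count and the reference. A minimal C sketch (the rounding and clamping conventions vary by MCU, so treat this as illustrative):

```c
#include <stdint.h>

#define ADC_BITS      12
#define ADC_FULLSCALE (1 << ADC_BITS)   /* 4096 codes */
#define VREF          3.3f              /* ADC reference voltage */

/* Equation 1 (one common convention): scale the input voltage to a
 * raw ADC code and clamp it to the 12-bit range. */
static uint16_t adc_code_from_volts(float vin)
{
    float code = vin * (float)ADC_FULLSCALE / VREF;
    if (code < 0.0f)
        code = 0.0f;
    if (code > (float)(ADC_FULLSCALE - 1))
        code = (float)(ADC_FULLSCALE - 1);
    return (uint16_t)code;
}

/* Equation 2: recover the analog input voltage from a raw ADC code. */
static float volts_from_adc_code(uint16_t code)
{
    return (float)code * VREF / (float)ADC_FULLSCALE;
}
```

For example, a 1.65-V input (mid-scale) maps to code 2048, and converting that code back yields 1.65 V.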

To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.

This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).

Input AC voltage sensing

The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments

The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as large as possible. Hence, the voltage divider for VAC should follow Equation 3:

where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure.
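Equation 3's image is not reproduced here; the constraint it expresses is that the divider ratio must map the largest expected AC peak into the 3.3-V ADC range. A small C sketch, with illustrative resistor names and example values that are assumptions rather than the article's:

```c
/* Equation 3's constraint: the divider ratio k = R_bottom/(R_top + R_bottom)
 * must scale the highest AC peak to be measured down into the 0-to-3.3-V
 * ADC range. Resistor names and the example values are illustrative. */
static float divider_ratio_max(float vac_max_peak, float vref)
{
    return vref / vac_max_peak;       /* largest allowed divider ratio */
}

static float divider_ratio(float r_top, float r_bottom)
{
    return r_bottom / (r_top + r_bottom);
}
```

For a 264-VRMS input (about a 373-V peak), the ratio must stay below roughly 0.0088; a 1-MΩ/8.87-kΩ divider, for instance, satisfies that with a little margin.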

Adding a small capacitor (C) with low equivalent series resistance (ESR) in the voltage divider can remove any potential high-frequency noise; however, you should place C as close as possible to the ADC pin.

Two ADCs measure the AC line and neutral voltages; subtracting the two readings in firmware yields the VAC signal.
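A minimal firmware sketch of that subtraction might look like the following; the divider ratio K_DIV and the scaling constants are assumptions, not values from the article:

```c
#include <stdint.h>

#define VREF   3.3f     /* ADC reference voltage */
#define CODES  4096.0f  /* 12-bit ADC code count */
#define K_DIV  0.0088f  /* assumed divider ratio from Figure 3 */

/* Recover the instantaneous AC voltage from the line and neutral ADC
 * readings: convert each code to the pin voltage, undo the divider,
 * then subtract so the result's sign follows the AC polarity. */
static float vac_from_adc(uint16_t line_code, uint16_t neutral_code)
{
    float v_line    = (float)line_code    * VREF / CODES / K_DIV;
    float v_neutral = (float)neutral_code * VREF / CODES / K_DIV;
    return v_line - v_neutral;
}
```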

Output voltage sensing

Similarly, a resistor divider attenuates the output voltage, as shown in Figure 4, before connecting to an ADC pin. Again, adding C with low ESR in the voltage divider removes any potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments

To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:

where VOUT_OVP is the output overvoltage protection threshold.

AC current sensing

In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments

The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.

Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments

Equation 5 expresses the amplification of the Hall-effect sensor output:
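Equation 5's image is not reproduced here; conceptually, the Figure 6 stage applies a gain and an offset that map the sensor's output span onto the ADC range. A hedged C sketch with illustrative signal spans (not the article's component values):

```c
/* Sketch of the Figure 6 scaling stage (Equation 5's actual component
 * values aren't reproduced here): map the Hall sensor's output span
 * onto the ADC's 0-to-vref range with a gain and an offset. */
static float scale_gain(float sensor_min, float sensor_max, float vref)
{
    return vref / (sensor_max - sensor_min);
}

static float scaled_output(float v_sensor, float sensor_min,
                           float sensor_max, float vref)
{
    /* Subtract the offset first, then apply the gain. */
    return (v_sensor - sensor_min) * scale_gain(sensor_min, sensor_max, vref);
}
```

For an assumed 0.5-V-to-2.5-V sensor span, for example, the stage needs a gain of 1.65 so the sensor output sweeps the full 0-V-to-3.3-V ADC range.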

Firmware-based average current-mode controller

As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.

Digital compensator

In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.

For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments

In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.

Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments
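Since Figure 8's listing is an image, here is a minimal C sketch of such a PI compensator with output clamping and conditional-integration anti-windup; the names and clamp limits are illustrative, not TI's actual code:

```c
/* A minimal sketch of the Figure 7 PI structure: r is the reference,
 * y the feedback, and the integral is frozen while the output is
 * saturated so the integral path can recover (anti-windup). */
typedef struct {
    float kp, ki;            /* proportional and integral gains */
    float integral;          /* integral accumulator state      */
    float out_min, out_max;  /* output clamp range              */
} pi_t;

static float pi_update(pi_t *pi, float r, float y)
{
    float e = r - y;  /* e(k) = r(k) - y(k) */
    /* Tentative output: proportional term plus updated integral. */
    float u = pi->kp * e + pi->integral + pi->ki * e;

    if (u > pi->out_max) {
        u = pi->out_max;               /* saturated high: freeze integral */
    } else if (u < pi->out_min) {
        u = pi->out_min;               /* saturated low: freeze integral  */
    } else {
        pi->integral += pi->ki * e;    /* unsaturated: commit the update  */
    }
    return u;
}
```

Freezing the accumulator during saturation is only one anti-windup scheme; back-calculation is another common choice.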

For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].

S/Z domain conversion

If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:

Replace s with bilinear transformation (Equation 7):

where Ts is the ADC sampling period.

Then H(s) is converted to H(z), as shown in Equation 8:

Rewrite Equation 8 as Equation 9:

To implement Equation 9 in a digital controller, store the two previous control outputs, u(n-1) and u(n-2), and the two previous errors, e(n-1) and e(n-2). Then use the current error e(n) and Equation 9 to calculate the current control output, u(n).
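A C sketch of that difference equation follows; the actual coefficient values from Equations 8 and 9 aren't reproduced here, so they are left as struct fields to be filled in from the bilinear transform:

```c
/* Generic second-order difference equation of the form Equation 9 takes:
 *   u(n) = a1*u(n-1) + a2*u(n-2) + b0*e(n) + b1*e(n-1) + b2*e(n-2) */
typedef struct {
    float a1, a2, b0, b1, b2;  /* from the bilinear transform of H(s) */
    float u1, u2;              /* stored outputs u(n-1), u(n-2) */
    float e1, e2;              /* stored errors  e(n-1), e(n-2) */
} comp2_t;

static float comp2_update(comp2_t *c, float e)
{
    float u = c->a1 * c->u1 + c->a2 * c->u2
            + c->b0 * e   + c->b1 * c->e1 + c->b2 * c->e2;

    /* Shift the histories for the next sample. */
    c->u2 = c->u1;  c->u1 = u;
    c->e2 = c->e1;  c->e1 = e;
    return u;
}
```

With a1 = 1 and b0 = 1 (all other coefficients zero), for instance, the structure degenerates to a pure accumulator, which is a quick sanity check on the history shifting.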

Digital PWM generation

A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.

Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments

Programming the PERIOD register of the PWM generator determines the switching frequency. For up-count and down-count modes, Equation 10 calculates the PERIOD register value as:

where fclk is the counter clock frequency and fsw is the desired switching frequency.

For the up-down count mode, Equation 11 calculates the PERIOD register value as:

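The two PERIOD calculations can be sketched in C as follows; the exact off-by-one convention is device-specific, so check your MCU's reference manual before using these expressions:

```c
#include <stdint.h>

/* Equations 10 and 11 as described in the text. */
static uint32_t period_count(uint32_t fclk, uint32_t fsw)
{
    return fclk / fsw - 1u;        /* up-count or down-count mode */
}

static uint32_t period_updown(uint32_t fclk, uint32_t fsw)
{
    return fclk / (2u * fsw);      /* up-down (triangular) mode */
}
```

The up-down value is roughly half the up-count value because the counter traverses the ramp twice per switching period.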
Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments

Equation 12 shows that the COMP equals the current loop GI output multiplied by the switching period:

The higher the COMP value, the bigger the D.

To prevent shoot-through between the top switch and the bottom switch, adding a delay on the rising edges of PWMA and PWMB inserts dead time between them. This delay is programmable, which means that it’s possible to dynamically adjust the dead time to optimize performance.

Blocks in digital-controlled PFC

Now that you have learned about the blocks used in digital-controlled PFC, it’s time to close the control loop. In the next installment, I’ll discuss how to write firmware to implement an average current-mode controller.

Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.

Reference

  1. “C2000™ Digital Control Library User’s Guide.” TI literature No. SPRUID3, January 2017.


The post How to design a digital-controlled PFC, Part 1 appeared first on EDN.

Optical combs yield extreme-accuracy gigahertz RF oscillator


It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Now, however, all-optical or mostly optical systems are being used to perform functions in the optical band where electronic components can’t fulfill the needs, even pushing electronic approaches out of the picture.

In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.

However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.

It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.

Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.

This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature

But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 1562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified and locked to the same miniature F-P cavity. A 6-nm broad-frequency comb with an approximately 20 GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous wave references. Two beat notes are amplified, filtered and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature

At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system takes a rack-sized chassis fitting in a single-height bay.

However, there has been significant progress on putting multiple functional blocks into a single-chip substrate, so it wouldn’t be surprising to see a monolithic (or nearly so) device within a decade or perhaps just a few years.

What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?

One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled (blank) squares are based on the optical frequency division (OFD) standalone microcomb approach: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii); 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature

There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).

In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.



High-performance MCUs target industrial applications

Tue, 11/18/2025 - 21:45

STMicroelectronics raises the performance bar for embedded edge AI with the new STM32V8 high-performance microcontrollers (MCUs), aimed at demanding industrial applications such as factory automation, motor control, and robotics. It is the first MCU built on ST’s 18-nm fully depleted silicon-on-insulator (FD-SOI) process technology with embedded phase-change memory (PCM).

The STM32V8’s phase-change non-volatile memory (PCM) claims the smallest cell size on the market, enabling 4 MB of embedded non-volatile memory (NVM).

STMicroelectronics' STM32V8 high-performance MCUs. (Source: STMicroelectronics)

In addition, the STM32V8 is ST’s fastest STM32 MCU to date, designed for high reliability and harsh environments in embedded and edge AI applications, and can handle complex applications and maintain high energy efficiency. The STM32V8 achieves clock speeds of up to 800 MHz, thanks to the Arm Cortex-M85 core and the 18-nm FD-SOI process with embedded PCM. The FD-SOI technology delivers high energy efficiency and supports a maximum junction temperature of up to 140°C.

The MCU integrates special accelerators, including graphic, crypto/hash, and comes with a large selection of IP, including 1-Gb Ethernet, digital interfaces (FD-CAN, octo/hexa xSPI, I2C, UART/USART, and USB), analog peripherals, and timers. It also features state-of-the-art security with the STM32 Trust framework and the latest cryptographic algorithms and lifecycle management standards. It targets PSA Certified Level 3 and SESIP certification to meet compliance with the upcoming Cyber-Resilience Act (CRA). 

The STM32V8 has been selected for the SpaceX Starlink constellation, which uses it in a mini laser system connecting satellites traveling at extremely high speeds in low Earth orbit (LEO), ST said. This is thanks in part to the 18-nm FD-SOI technology that provides a higher level of reliability and robustness.

The STM32V8 supports bare-metal or RTOS-based development. It is supported by ST’s development resources, including STM32Cube software development and turnkey hardware including Discovery kits and Nucleo evaluation boards.

The STM32V8 is in early-stage access for selected customers. Key OEM availability will start in the first quarter of 2026, followed by broader availability.


FIR temperature sensor delivers high accuracy

Tue, 11/18/2025 - 21:35

Melexis claims the first automotive-grade surface-mount (SMD) far-infrared (FIR) temperature sensor designed for temperature monitoring of critical components in electric vehicle (EV) powertrain applications. These include inverters, motors, and heating, ventilation, and air conditioning (HVAC) systems.

Melexis' MLX90637 SMD FIR temperature sensor. (Source: Melexis)

The MLX90637 offers several advantages over negative temperature coefficient (NTC) thermistors that have traditionally been used in these systems, where speed and accuracy are critical, Melexis said.

These advantages include eliminating the need for manual labor associated with NTC solutions thanks to the SMD packaging, which supports automated PCB assembly and delivers cost savings. In addition, the FIR temperature sensor with non-contact measurement ensures intrinsic galvanic isolation that helps to enhance EV safety by separating high- and low-voltage circuits, while the inherent electromagnetic compatibility (EMC) eliminates typical noise challenges associated with NTC wires, the company said.

Key features include a 50° field of view, 0.02°C resolution, and fast response time, which are suited for applications such as inverter busbar monitoring where temperature must be carefully managed. Sleep current is less than 2.5 μA, and the ambient operating temperature range is -40°C to 125°C.

The MLX90637 also simplifies system integration with a 3.3-V supply, factory calibration (including post calibration), and an I2C interface for communication with a host microcontroller, including a software-definable I2C address via an external pin. The AEC-Q100-qualified sensor is housed in a 3 × 3-mm package.


Accuracy loss from PWM sub-Vsense regulator programming

Tue, 11/18/2025 - 15:00

I’ve recently published Design Ideas (DIs) showing circuits for linear PWM programming of standard bucking-type regulators in applications requiring an output span that can swing below the regulator’s sense voltage (Vsense or Vs). For example: “Simple PWM interface can program regulators for Vout < Vsense.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

Objections have been raised, however, that such circuits entail a significant loss of analog programming accuracy because they rely on adding a voltage term derived from an available voltage source (e.g., the logic rail). Therefore, they should be avoided.

The argument relies on the fact that such sources generally have accuracy and stability that are significantly worse (e.g., ±5%) than those of regulator internal references (e.g., ±1%).

But is this objection actually true, and if so, how serious is the problem? How much of an accuracy penalty is actually incurred? This DI addresses these questions. 

Figure 1 shows a basic topology for sub-Vs regulator programming with current expressions as follows:

A = DpwmVs/R1
B = (1 – Dpwm)(Vl – Vs)/(R1 + R4)

Where A is the primary programming current and B is the sub-Vs programming current giving an output voltage:

Vout = R2(A + B) + Vs

Figure 1 Basic PWM regulator programming topology.

Inspection of the A and B current expressions shows that when the PWM duty factor (Dpwm) is set to full-scale 100% (Dpwm = 1), then B = 0. This is due to the (1 – Dpwm) term.

Therefore, there can be no error contribution from the logic rail Vl at full-scale.

At other Dpwm values, however, this happy circumstance no longer applies, and B becomes nonzero. Thus, Vl tolerance and noise degrade accuracy, at least to some extent. But, by how much?

The simplest way to address this crucial question is to evaluate it as a plausible example of Figure 1’s general topology. Figure 2 provides some concrete groundwork for that by adding some example values.

Figure 2 Putting some meat on Figure 1’s bare bones, adding example values to work with.

Assuming perfect resistors, nominal R1 currents are then:

A = Dpwm Vs/3300
B = (1 – Dpwm)(Vl – Vs)/123300
Vout = R2(A + B) + Vs = 75000(A + B) + 1.25

Then, making the (highly pessimistic) assumption that reference errors stack up as the sum of absolute values:

Aerr = Dpwm 1%Vs/3300 = Dpwm 3.8µA
Berr = (1 – Dpwm)(5% 3.3V + 1% 1.25V)/123300 = (1 – Dpwm) 1.44µA
Vout total error = 75000(Dpwm 3.8µA + (1 – Dpwm) 1.44µA) + 1%Vs

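These expressions are easy to check numerically; the following C sketch reproduces the worst-case stack-up with Figure 2's values:

```c
/* Numeric check of the worst-case error stack-up using Figure 2's values.
 * Currents are in amps; dpwm is the PWM duty factor (0 to 1). */
#define VS  1.25f       /* regulator sense voltage, +/-1% */
#define VL  3.3f        /* logic rail, +/-5%              */
#define R2  75000.0f

static float vout_error(float dpwm)
{
    float a_err = dpwm * 0.01f * VS / 3300.0f;          /* ~3.8 uA at D = 1 */
    float b_err = (1.0f - dpwm) * (0.05f * VL + 0.01f * VS) / 123300.0f;
    return R2 * (a_err + b_err) + 0.01f * VS;           /* volts */
}
```

At Dpwm = 1 the logic-rail term vanishes and the error is set entirely by the ±1% reference (about 0.3 V here); at Dpwm = 0 the total error comes to roughly 0.12 V.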
The resulting Vout error plots are shown in Figure 3.

Figure 3 Vout error plots where the x-axis is Dpwm and y-axis is Vout error. Black line is Vout = Vs at Dpwm = 0 and red line is Vout = 0 at Dpwm = 0.

Conclusion: Error does increase in the lower range of Vout when the Vout < Vsense feature is incorporated, but any difference completely disappears at the top end. So, the choice turns on the utility of Vout < Vsense.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Signal integrity and power integrity analysis in 3D IC design

Tue, 11/18/2025 - 11:37

The relentless pursuit of higher performance and greater functionality has propelled the semiconductor industry through several transformative eras. The most recent shift is from traditional monolithic SoCs to heterogeneous integrated advanced package ICs, including 3D integrated circuits (3D ICs). This emerging technology promises to help semiconductor companies sustain Moore’s Law.

However, these advancements bring increasingly complex challenges, particularly in power integrity (PI) and signal integrity (SI). Once secondary, SI/PI have become critical disciplines in modern semiconductor development. As data rates ascend into multiple gigabits per second and power requirements become more stringent, error margins shrink dramatically, making SI/PI expertise indispensable. The fundamental challenge lies in ensuring clean and reliable signal transmissions and stable power delivery across intricate systems.

Figure 1 The above diagram highlights the basic signal integrity (SI) issues. Source: Siemens EDA

This article explains the unique SI/PI challenges in 3D IC designs by contrasting them with traditional SoCs. We will then explore a progressive verification strategy to address these complexities, examine the roles and interdependencies of stakeholders in the 3D IC ecosystem, and illustrate these concepts through a real-world success story. Finally, we will discuss how these innovations drive the future of semiconductor design.

Traditional SI/PI versus 3D IC approaches

In traditional SoC components destined for a PCB system, SI and PI analysis typically validates individual components before system integration. This often treats SoCs, packages, and PCBs as distinct entities, allowing sequential analysis and optimization. For instance, component-level power demand analysis can be performed on the monolithic SoC and its package, while signal integrity analysis validates individual channels.

The design process is often split between separate packaging and PCB teams working in parallel. These teams eventually collaborate to manage design trade-offs such as allocating timing or voltage margins between the package and PCB to accommodate routing constraints. While effective for traditional designs, this compartmentalized approach is inadequate for the inherent complexities of 3D ICs.

A 3D IC’s architecture is not merely a collection of components but a highly condensed system of mini subsystems, characterized by the vertical stacking of multiple dies. Inter-die interfaces, through-silicon vias (TSVs), and microbumps create a dense, highly interactive electrical environment where power and signal integrity issues are deeply intertwined and can propagate across multiple layers.

The tight integration and proximity of the dies introduce novel coupling mechanisms and power delivery challenges that cannot be effectively addressed by sequential, isolated analyses. Therefore, unlike a traditional flow, 3D ICs demand holistic, parallel validation from the outset, with SI and PI analyses commencing early and encompassing all constituent parts concurrently.

Progressive verification

To navigate the intricate landscape of 3D IC design, a progressive verification strategy is paramount. This principle acknowledges that design information is sparse in early stages and becomes progressively detailed.

The core idea behind progressive verification is to initiate analysis as early as possible with available inputs, guiding the design onto the correct path and transforming the final verification step into confirmation rather than a discovery of fundamental issues. Different analysis requirements are addressed as details become available, starting with minimal inputs and gradually incorporating more specific data.

Figure 2 Here is a view of a progressive verification flow. Source: Siemens EDA

Let’s summarize the various analyses involved and their timing in the design flow.

Early architectural feasibility and pre-layout analysis

At the initial design phase, before detailed layout information is available, the focus is on architectural feasibility studies. This involves estimating power budgets and defining high-level interfaces. Even with rough inputs, early analysis can commence. For instance, pre-layout signal integrity analysis can model representative interconnect structures, such as an interposer bridge.

By defining an “envelope” of achievable performance based on preliminary dimensions, designers can establish realistic expectations and guidelines for subsequent layout stages. This proactive approach helps identify potential bottlenecks and ensures a robust electrical foundation.

Floorplanning and implementation-driven analysis

As the design progresses to floorplanning and initial implementation, guidelines from early analysis are translated into a physical layout. At this stage, more in-depth analyses become possible. This includes detailed power delivery network (PDN) analysis to verify power distribution across stacked dies and the substrate.
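To make the PDN analysis mentioned above concrete, here is a minimal DC IR-drop sketch. The resistance and current values are illustrative assumptions only, not figures from any real 3D IC design; a production flow would use extracted parasitics rather than hand-picked milliohm values.

```python
# Minimal DC IR-drop sketch for a stacked-die power delivery path.
# All resistance and current values are illustrative assumptions.

def ir_drop(supply_v, load_current_a, resistances_ohm):
    """Return the voltage remaining at the die after cumulative IR drop."""
    drop = load_current_a * sum(resistances_ohm)
    return supply_v - drop

# Assumed series path: substrate plane, TSV array, microbump array (ohms)
path = [0.002, 0.0005, 0.0008]
v_die = ir_drop(0.75, 20.0, path)  # assumed 0.75 V rail, 20 A die load
print(f"Die voltage: {v_die:.4f} V (drop {0.75 - v_die:.4f} V)")
```

Even this toy model shows why early analysis matters: a few milliohms across TSVs and microbumps at tens of amps can consume a meaningful fraction of a sub-1-V supply margin.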

Signal path verification with actual component interconnections can also begin, enabling early identification and optimization of critical signal routes. This iterative process of layout and analysis enables continuous refinement, ensuring physical implementation aligns with electrical performance targets.

Detailed electrical analysis with vendor-specific IP

The final stage of progressive verification involves comprehensive electrical analysis utilizing actual vendor-specific intellectual property (IP) models. Given the nascent state of 3D IC die-to-die standards—for instance UCIe, BoW, and AIB, which are less mature than established protocols like DDR or PCIe—this detailed analysis is even more critical.

Designers perform in-depth S-parameter modeling of impedance networks, feeding these models with precise current values obtained from die designers and other stakeholders. This granular analysis provides full closure on the design’s electrical performance, ensuring all critical signal paths and power delivery mechanisms meet specifications under real-world operating conditions.

The 3D IC ecosystem

The complexity of 3D IC designs necessitates a highly collaborative environment involving diverse stakeholders, each with unique perspectives and challenges. Effective communication and early engagement among these teams are crucial for successful integration.

  1. System architects are responsible for the high-level floorplanning, determining the number of chiplets, baseband dies, and the communication channels required between them. Their challenge lies in optimizing the overall system architecture for performance, power, and area, while considering the physical constraints imposed by 3D integration.
  2. Die designers focus on individual die architectures and oversee I/O planning and internal power distribution. They must communicate their power requirements and I/O characteristics accurately to ensure compatibility within the stacked system. Their primary challenge is to optimize the die-level performance while adhering to system-level constraints and ensuring robust power and signal delivery across the interfaces.
  3. Layout teams are responsible for the physical implementation, encompassing die-level layout, substrate layout, and silicon interconnects like interposers and bridges. Often different layout teams may handle different aspects of the implementation, requiring meticulous coordination. Their challenges include managing extreme density, minimizing parasitic effects, and ensuring manufacturability across multiple layers.
  4. SI/PI and verification teams act as technical consultants, providing guidelines and feedback at every level. They advise system architects on bump-out strategies for die floorplans and work with die designers to optimize power and ground bump counts. Their role is to proactively identify and mitigate potential SI/PI issues throughout the design cycle, ensuring that the electrical performance targets are met.
  5. Mechanical and thermal teams ensure structural integrity and manage heat dissipation, respectively. Beyond electrical considerations, 3D ICs introduce significant mechanical and thermal challenges, making both disciplines critical to the long-term reliability and performance of designs. For example, the close proximity of dies can lead to localized hotspots and mechanical stresses due to differing coefficients of thermal expansion.

By employing a progressive verification methodology, these diverse stakeholders can engage in early and continuous communication, fostering a collaborative environment that makes it significantly easier to build a functional and reliable 3D IC design.

Chipletz’s proof of concept

The efficacy of a progressive verification strategy and collaborative ecosystem is best illustrated through real-world applications. Chipletz, a fabless substrate startup, exemplifies successful navigation of 3D IC design complexities in collaboration with an EDA partner. Chipletz is working closely with Siemens EDA for its Smart Substrate products, utilizing tools capable of supporting advanced 3D IC design requirements.

Figure 3 Smart Substrate uses cutting-edge chiplet integration technology that eliminates an interposer. Source: Siemens EDA

At the time, many industry-standard EDA tools were primarily tailored for traditional package and PCB architectures. Chipletz presented a formidable challenge: its designs featured massive floorplans with up to 50 million pin counts, demanding analysis tools with unprecedented capacity and layout tools capable of handling such intricate structures.

Siemens responded by engaging its R&D teams to enhance tool capacities and capabilities. This collaboration demonstrated not only the ability to handle these complex architectures but also to perform meaningful electrical analyses on such large designs. Initial efforts focused on fundamental aspects such as direct current (DC) IR drop analysis across the substrate and early PDN analysis.

Through these foundational steps, Siemens demonstrated its tools’ capabilities and, crucially, its commitment to working alongside Chipletz to overcome challenging roadblocks. This partnership enabled Chipletz to successfully tape out its initial demonstration vehicle, and it’s now progressing to the second revision of its design. This underscores the importance of adaptable EDA tools and strong collaboration in pushing the boundaries of 3D IC innovation.

Driving 3D IC innovation

3D ICs are unequivocally here to stay, with major semiconductor companies increasingly incorporating various forms of 3D packaging into their product roadmaps. This transition signifies a fundamental shift in how the industry approaches system design and integration. As the industry continues to embrace 3D IC integration as a key enabler for next-generation systems, the methodologies and collaborative approaches outlined in this article for SI and PI will only grow in importance.

The progressive verification strategy, coupled with close collaboration among diverse stakeholders, offers a robust framework for navigating the complex challenges inherent in 3D IC design. Companies and individuals who master these techniques will be exceptionally well-positioned to lead the next wave of semiconductor innovation, creating the high-performance, energy-efficient systems that will power our increasingly digital world.

Todd Burkholder is a senior editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.

John Caka is a signal and power integrity applications engineer with over a decade of experience in high-speed digital design, modeling, and simulation. He earned his B.S. in electrical engineering from the University of Utah in 2013 and an MBA from the Quantic School of Business and Technology in 2024.

Related Content

The post Signal integrity and power integrity analysis in 3D IC design appeared first on EDN.

Norton amplifiers: Precision and power, the analog way we remember

Tue, 11/18/2025 - 07:58

The Norton amplifier topology brings back the essence of analog design by using clever circuit techniques to deliver strong performance with minimal components. It is not about a brand name—it’s about a timeless analog philosophy that continues to inspire engineers and hobbyists today. This approach shows why analog circuits remain powerful and relevant, even in our digital age.

In electronics, a Norton amplifier—also known as a current differencing amplifier (CDA)—is a specialized analog circuit that functions as a current-controlled voltage source. Its output voltage is directly proportional to the difference between two input currents, making it ideal for applications requiring precise current-mode signal processing.

Conceptually, it serves as the dual of an operational transconductance amplifier (OTA), offering a complementary approach to analog design and expanding the toolkit for engineers working with current-driven systems.
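The current-differencing behavior defined above can be captured in a one-line idealized model: output voltage proportional to the difference of the two input currents. The transresistance value below is an illustrative assumption, not a datasheet parameter.

```python
# Idealized current-differencing (Norton) amplifier model: the output
# voltage is proportional to the difference of the two input currents.
# The transresistance value is an illustrative assumption.

def cda_output(i_plus_a, i_minus_a, transresistance_ohm=1e6):
    """Open-loop output voltage of an ideal CDA."""
    return transresistance_ohm * (i_plus_a - i_minus_a)

# A 2 uA input-current difference yields 2 V at the assumed transresistance
print(f"{cda_output(12e-6, 10e-6):.1f} V")
```

Contrast this with an OTA, whose dual relationship means it takes a differential voltage in and delivers a current out.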

So, while most amplifier discussions orbit op-amps and voltage feedback, the Norton amplifier offers a subtler, current-mode alternative—elegant in its simplicity and quietly powerful in its departure from the norm. Let us go further.

Norton amplifier’s analog elegance

As shown in the LM2900 IC equivalent circuit below, the internal architecture is refreshingly straightforward. The most striking departure from a conventional op-amp—typically centered around a voltage-mode differential pair—lies in the input stage. Rather than employing the familiar long-tailed pair, this Norton amplifier features a current mirror followed by a common-emitter amplifier.

Figure 1 Equivalent circuit highlights the minimalist internal structure of the LM2900 Norton amplifier IC. Source: Texas Instruments

These devices have been around for decades, and they clearly continue to intrigue analog enthusiasts. Just recently, I picked up a batch of LM3900-HLF ICs from an online seller. The LM3900-HLF appears to be a Chinese-sourced variant of the classic LM3900—a quad Norton operational amplifier recognized for its current-differencing input and quietly unconventional topology. These low-cost quads are now widely used across analog systems, especially in medium-frequency and single-supply AC applications.

Figure 2 Pin connections of the LM3900-HLF support easy adoption in practical circuits. Source: HLF

In my view, the LM2900 and LM3900 series are more than just relics—they are reminders of a time when analog design embraced cleverness over conformity. Their current differencing architecture, once a quiet alternative to voltage-mode orthodoxy, still finds relevance in industrial signal chains where noise rejection, single-supply operation, and low-impedance interfacing matter.

You will not see these chips headlining new designs, but the principles they embody—robust, elegant, and quietly efficient—continue to shape sensor front-ends, motor drives, and telemetry systems. The ICs may have faded, but the technique endures, humming beneath the surface of modern infrastructure.

And, while it’s not as widely romanticized as the LM3900, the LM359 Norton amplifier remains a quietly powerful choice for analog enthusiasts who value speed with elegance. Purpose-built for video and fast analog signal processing, it stepped in with serious bandwidth and slewing muscle. As a dual high-speed Norton amplifier, it handles wideband signals with slew rates up to 60 V/μs and gain-bandwidth products reaching 400 MHz—a clear leap beyond its older cousins.

In industrial and instrumentation circles, LM359’s current-differencing input stage still commands respect for its low input bias, fast settling, and graceful handling of capacitive loads. Its legacy lives on in video distribution, pulse amplification, and high-speed analog comparators—especially in designs that prioritize speed and stability over rail-to-rail swing.

Wrapping up with a whiff of flux

There is not much more to say about Norton amplifiers for now, so we will wrap up this slightly off-the-beaten-path blog post here. As a parting gift, here is a practical LM3900-based circuit—just enough to satisfy those who find joy in the scent of solder smoke.

Figure 3 Bring this LM3900-based triangle/square waveform generator circuit to life and trace its quiet Norton-style elegance. Source: Author

Triangle waveforms are usually generated by an integrator driven first by a positive DC input voltage and then by a negative one. The LM3900 Norton amplifier facilitates this operation in systems powered by a single supply voltage, thanks to the current mirror present at its non-inverting (+) input. This feature enables triangle waveform generation without the need for a negative DC input.

In the above schematic diagram, amplifier IC1D functions as an integrator. It first operates with the current through R1 to generate a negative output voltage slope. When the IC1C amplifier—the Schmitt trigger—switches high, the current through R5 causes the output voltage to rise.

For optimal waveform symmetry, R1 should be set to twice the value of R5 (and here R1 = 1 MΩ and R5 = 470 kΩ, which is close enough). Note that the Schmitt circuit also provides a square wave output at the same frequency.
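The R1 = 2 × R5 symmetry rule can be sanity-checked numerically. When the Schmitt output is high, the current through R5 is mirrored into the (+) input, so the net integrator current is I5 − I1; when low, only I1 discharges the capacitor. The drive voltage and capacitor value below are assumptions for illustration; the slope ratio is independent of both.

```python
# Symmetry check for the LM3900 triangle generator described above.
# Drive voltage V and capacitor C are illustrative assumptions; the
# rise/fall ratio depends only on R1 and R5.

R1, R5 = 1.0e6, 470e3   # ohms, as in the schematic
V, C = 10.0, 10e-9      # assumed drive voltage and integration capacitor

i1 = V / R1             # constant ramp-down current via R1
i5 = V / R5             # ramp-up current via R5 when Schmitt output is high

rise = (i5 - i1) / C    # V/s while the output ramps up
fall = i1 / C           # V/s while the output ramps down
ratio = rise / fall     # exactly R1/R5 - 1; 1.0 means perfect symmetry
print(f"rise/fall slope ratio: {ratio:.3f}")
```

With R1 = 1 MΩ and R5 = 470 kΩ the ratio works out to about 1.13, confirming the "close enough" assessment above; R5 = 500 kΩ would make it exactly 1.0.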

Feeling inspired? Fire up your breadboard, test the circuit, or share your own twist. Whether you are a seasoned tinkerer or just rediscovering the joy of analog, let this be your spark to keep exploring.

Finally, I hope this odd topic sparked some interest. If I have misunderstood anything—or if there is a better way to approach it—please feel free to chime in with corrections or suggestions in the comments. Exploring new ground always comes with the risk of missteps, and I welcome the chance to learn and improve.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Norton amplifiers: Precision and power, the analog way we remember appeared first on EDN.

The Fosi Audio V3 Mono: A compelling power amp with a tendency to blow

Mon, 11/17/2025 - 18:16

One of the post-filter feedback (PFFB)-based Class D audio amplifiers showcased in a recent writeup of mine was Fosi Audio’s V3 Mono, which will get sole billing today:

It interestingly (at least to me) originally launched as a Kickstarter project in April 2024:

As the name implies, it’s a monoblock unit, intended to drive only a single speaker, with both single-channel XLR balanced and RCA unbalanced input options.

I own four functional (for now, at least) devices, plus the nonfunctional one whose insides we’ll be seeing today. Why four? It’s not because I plan on driving both front left and right main speakers and a center speaker and subwoofer, or for that matter, the two main transducers plus two surrounds. Instead, it’s for spares, notably ones obtained pre-higher tariffs, and specifically to do with that dead fifth amp.

Design evolution, manufacturing, and reliability issues

Before I go all Debbie Downer on you, I’ll begin with the good news. The V3 Mono is highly reviewer-rated (see, for example, the write-up from my long-time tech compatriot Amir Majidimehr) and has also garnered no shortage of enthusiastic feedback from owners like Tim Bray, who had heard about it from Archimago (here’s part 2). Alas, amidst all that positive press are also a notable number of complaints from folks whose units let the magic smoke escape, sometimes on just the first use, or whose amplifiers had more modest but still annoying issues.

Mis-wired connections

I’ll start with the most innocuous quirk and end with the worst. Initial units were mis-wired from the PCB to the speaker banana plugs (due, I actually suspect, to a fundamental PCB trace layout issue) in such a way that they ended up with inverted-polarity outputs, i.e., signals being 180° out of phase from how they should be.

This wasn’t particularly a problem if all the units in your setup exhibited the issue, because at least then the phase was consistently inverted. However, if one (or some, depending on your setup complexity) of them were in phase and other(s) were out of phase, the inconsistency resulted in a collapsed stereo image and overall decreased volume due to destructive interference between the in- and out-of-phase speakers.
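The destructive-interference claim is easy to demonstrate numerically. This sketch sums two equal-amplitude tones, once in phase and once with one output polarity-inverted, and compares the RMS levels (tone frequency and sample count are arbitrary illustration choices).

```python
import math

# Demonstrates the destructive interference described above: summing an
# in-phase and an inverted-polarity (180-degree) speaker output cancels
# the signal entirely in the ideal equal-amplitude case.

def rms_sum(phase_offset_rad, n=1000):
    """RMS of two equal-amplitude unit tones summed with a phase offset."""
    total = 0.0
    for k in range(n):
        t = k / n
        s = math.sin(2 * math.pi * t) + math.sin(2 * math.pi * t + phase_offset_rad)
        total += s * s
    return math.sqrt(total / n)

print(f"in phase:  {rms_sum(0.0):.3f}")      # signals add
print(f"inverted:  {rms_sum(math.pi):.3f}")  # signals cancel
```

In a real room the cancellation is frequency- and position-dependent rather than total, which is why the audible symptom is a collapsed stereo image and reduced bass rather than pure silence.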

The same goes if you mixed-and-combined out-of-phase V3 Monos with in-phase other devices, whether from other manufacturers or even from Fosi Audio itself. The fix is pretty easy; connect the red speaker wire to the black speaker terminal of the affected V3 Mono instead, and vice versa, to externally reinvert the phase back to how it should be. But from my experience with these units, it’s not possible to discern if a particular device is wired correctly without disassembling it; this guy’s sticker-based methodology, for example, didn’t pan out for me:

As commenter @TheirryG01210 wrote in response to the above video, “A better way to figure out if phase is correct is to check that cables are cross-connected (left solder pads cable goes to the right banana socket and vice versa).”

That’s spot-on advice. Here, for example, is one of my functional units, which has the wires un-crossed and therefore operates with an inverted output. That said, this approach looks like how it should be wired, right? Hence my conjecture that this is inherently a PCB layout issue, with wire-swapping being the cheaper, easier workaround compared to the alternative: a costlier and otherwise more complicated board “turn”.

My photo also matches one of the two in this Audio Science Review discussion thread post:

The other picture in that post shows the wires crossed; it’s not clear to me whether this is something that the owner did post-purchase with a soldering iron or if Fosi Audio revamped units still in its inventory, after discovering the problem and prior to shipping them out:

Conceptually, it matches the from-factory crossed wiring of my other three functional devices, along with today’s teardown victim, although the wire colors are also swapped with my units:

But color doesn’t matter. A crossed-wires configuration is what’s key to a correct-phase output.

The next, more recently introduced issue involves gain-setting inconsistency. Look at the most recent version of the “stock” image for the product on Amazon’s website, for example:

And you’ll see that the two gain-switch options supported for the RCA input (the switch doesn’t affect the XLR input) are 19 dB and 25 dB. That said, the gain options shown in the online user manual are instead 25 dB and 31 dB, which match the original units, including all of mine:

Here’s the key excerpt from an email by Fosi Audio quoted in a relevant Audio Science Review post (bolded emphasis is mine):

We would like to confirm whether your V3 mono gain is the old version or the new version. Since V3mono does not have a volume adjustment knob. It has already obtained a large power output when it is turned on, so we have reduced the gain of 31db to 25db, and 25db to 19db in the new version, which can effectively ensure the stable output of V3mono, safe use and extend the service life.
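To put those gain figures in perspective, converting from dB to linear voltage ratios shows that each 6 dB reduction halves the voltage gain. A quick sketch:

```python
# Converts the RCA-input gain settings discussed above from dB to linear
# voltage ratios, illustrating that a 6 dB reduction halves the gain.

def db_to_ratio(db):
    """Voltage gain ratio for a gain expressed in dB."""
    return 10 ** (db / 20)

for db in (31, 25, 19):
    print(f"{db} dB -> voltage gain of {db_to_ratio(db):.1f}x")
```

So the old 31 dB setting amplified the input voltage roughly 35x, while the new 19 dB setting is closer to 9x, consistent with Fosi’s stated goal of reining in the output for stability and longevity.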

Loud “pop” sound

Which leads to my last, and the most concerning, issue. After a seemingly random duration of operation, but sometimes just the first use, judging from comments I’ve seen on Audio Science Review, Amazon, Fosi’s online store, and elsewhere, the amplifier emits a loud “pop” and the sound disappears, never to return.

The front panel light still glows, and you can still hear the “click” when the amp initially turns on or transitions out of standby in response to sensing an active input source (or when you transition from one input to another, for that matter), but as for the output…nothing but the sound(s) of silence. This very issue happened with one of the devices I purchased brand new, fortunately, within the return-for-full-refund period.

Several of the other V3 Monos I acquired “open box” off eBay also arrived already DOA. In one particularly mind (and amp)-blowing case, I bought a single-box two-device set. When I opened it up, one of the amps had a piece of blue tape stuck to the top with the word “good” scribbled on it. Yep, the other one was not “good”. 

What the eBay seller explained to me in the process of issuing a ship-back-for-full-refund is that when large retailers get a return, they sometimes just turn around and resell it discounted to eBay sellers like her, apparently without remembering to test it first (or, more cynically, maybe just not caring about its current condition).

A blown-output case study

Today’s victim (1,000+ words in) was another eBay-DOA example. In this case, the seller didn’t ask me to return it prior to issuing a refund, and it therefore became a teardown candidate, hopefully enabling me to discern just where the Achilles’ heel in this design is.

To Fosi Audio’s credit, by the way, the pace of complaints for this particular issue seems to have slowed down dramatically of late. When I first looked at the customer feedback on Amazon, etc., earlier this year, comments were overwhelmingly negative. Now, revisiting various feedback forums, I see the mix has notably shifted in the positive-percentage direction. That said, my cynical side wonders if Fosi and Amazon might just now be nuking negative posts, but hope springs eternal…

I’ll start with some overview shots of our patient, one of which you’ve already seen, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the V3 Mono, not including its bulbous suite of external power supply options, has dimensions of 105 x 35 x 142 mm and weighs 480 grams).

Remove the two side screws from the back panel:

And the front panel slides right out:

The “Aesthetic and Practical Dust-Proof Filter Screens” (I’m quoting from Fosi Audio’s website, though I concur that they both look cool and act cooling) also then slide right out if you wish:

Removing two more screws on the bottom:

Now allows for the extraction of the internal assembly (here again, you saw one photo already):

PCB extraction and examination

The front and back halves of the “Sturdy and Durable All-Aluminum Alloy Chassis” are identical (and an aside: pretty snazzy shots, eh?):

Returning to the PCB topside (with still-attached back panel), let’s take a closer look:

One thing I didn’t notice at first is that none of the components are PCB-silkscreened as to their type (R for resistor, C for capacitor, L for inductor, U for IC, and so on), much less their specific device-identifying number (R1, C3, L5, U2…). Along the left side, top to bottom, are:

  • The three-position switch for on, auto, and off operating modes
  • The power status LED
  • The two-position XLR-vs-RCA input selector switch, and
  • A nifty two-contact spring-loaded switch that’s depressed when the front panel is in place. I suspect, but didn’t test for myself, that it prevents amplifier operation whenever the front panel is removed.

Note, too, the four screw heads in between the two multi-position switches, along with the ribbon cable. Look closely and you’ll realize that the first three items mentioned are actually located on a separate mini-PCB, connected to the main one mechanically via the screws (which, as you’ll see shortly, are actually bolts) and electrically via the ribbon cable.

And in fact, the silkscreen marking on the mini-PCB says (among other things) “SW PCB” (SW meaning switch, I assume) while the main PCB silkscreen in the lower left corner says…drumroll…”MAIN PCB”.

Why Fosi Audio went this multi-PCB route is frankly a mystery to me. Until I noticed the labeled silkscreen markings (admittedly just now, as I was writing this section) I’d thought that perhaps the main board was common to multiple amplifier product proliferations, with the front panel switches, etc. differentiating between them. But given that both boards’ silkscreens also say “Fosi Audio V3 MONO” on them, I can now toss that theory out the window. Readers’ ideas are welcome in the comments!

In the middle of the photo are two 8-pin DIP socketed chips, op-amps in fact, Texas Instruments NE5532P dual low-noise operational amplifiers to be precise.

They’re socketed because, as Fosi Audio promotes on the product page and akin to the two Douk Audio amplifiers I showcased in my prior coverage, too, they’re intended to be user-swappable, analogous to the “tube rolling” done by vacuum tube-based audio equipment enthusiasts.

Numerous (Elna) electrolytic and surface-mount capacitors (along with other SMD passives) dot the landscape, which is dominated by two massive Nichicon 63V/2200μF electrolytic filtering capacitors (explicitly identified as such, along with the Elna ones, by visual and text shout-outs on the V3 Mono product page, believe it or not). And one other, smaller Texas Instruments 8-lead IC (soldered SOP this time) on the bottom toward the right bears mentioning. It’s marked as follows:

N5532
TI41M
A9GG

Its first-line mark similarity to the previously mentioned NE5532P is notable, albeit potentially also coincidental. That said, Google Image search results also imply that it’s indeed another dual low-noise op amp. And it’s not the last of them we’ll see. Speaking of which, let’s next look at the other half of the PCB topside:

There it was at the bottom; another socketed TI NE5532P! Straddling it on either side are Omron G6K-2P-Y relays. At the top are even more relays, this time with functional symbol marks on top to eliminate any identity confusion: another white-color one, this time a Zhejiang HKE HRS3FTH-S-DC24V-A, and below it a dark grey HCP2-S-DC24V-A from the same supplier.

Remember when I mentioned earlier that after one V3 Mono stopped outputting amplified audio, I could still hear relay clicks when I toggled its power and input-select switches? Voila, the click-sound sources.

Those coupling capacitors are another curious component call-out on the V3 Mono product page; they’re apparently sourced from German supplier WIMA. The latter two, on either side of the aforementioned PCB solder pads that end up at the speaker’s banana plug connectors, are grey here but yellow colored at Fosi Audio’s website, so…🤷‍♂️

To the left of the red coupling caps is a grey metal box with two slits on top and copper-color contents visible through them; hold that thought. And last but not least, along the right edge of the PCB are (top to bottom) the power-input connector, two hefty resistors, the XLR input, and the RCA input. The two-wire harness in the lower corner goes to the aforementioned gain switch.

Insufficient thermal protection?

Now for the other side:

That IC at far left was quite a challenge to identify. To the right of an “AB” company logo is the following three-line mark:

TNJB0089A
UMG992
2349

Google searches on the text, either line-by-line or its entirety, were fruitless (at least to me). However, I found a photo of a chip with a matching first-line mark here. About the only thing on that page that I could read was the words “AB137A SOP16”, but that was the clue I needed.

The AB137A is (more accurately, was) from the company Shenzhen Bluetrum Technology, which Internet Archive snapshots suggest changed its name to Shenzhen Zhongke Lanxun Technology at the beginning of this year. The bluetrum.com/product/ab137a.html product page no longer seems to exist, nor does the link from there to the datasheet at bluetrum.com/upload/file/202411/1732257601186423.pdf. But again, thanks to the Internet Archive (the last valid snapshot of the product page that seems to exist there is from last November) I’ve been able to discern the following:

  • CPU and Flexible IO
    High-performance 32-bit RISC-V processor Core with DSP instructions
    RISC-V typical speed: 125 MHz
    Program memory: internal 2 Mbit flash
    Internal 60 KB RAM for data and program
    Flexible GPIO pins with programmable pull-up and pull-down resistors
    Support GPIO wakeup or interrupt
  • Audio Interface
    High-performance stereo DAC with 95 dB SNR
    High-performance mono ADC with 90 dB SNR
    Support flexible audio EQ adjust
    MIC amplifier input
    Support Sample rate 8, 11.025, 12, 16, 22.05, 32, 44.1, and 48 kHz
    Four-channel Stereo Analog MUX
  • Package
    SOP16
  • Temperature
    Operating temperature: -40℃ to +85℃
    Storage temperature: -65℃ to +150℃

So, there you have it (at least I think)!

The other half of this side of the PCB is less exciting, unless you’re into blobs of solder (along with, let’s not forget, another glimpse of those hefty resistors), that is:

But it’s what’s in the middle of this side of the PCB, therefore common to both of those PCB pictures, that had me particularly intrigued; you too, I suspect. Remove the two screws whose heads are on the PCB’s other side:

Lift off the plate:

Clean the thermal paste off the top of the IC, and what comes into view is what you’ve probably already suspected: Texas Instruments’ TPA3255, the design’s Class D amplification nexus:

At this point in the write-up, I’m going to offer my conjecture on what happened with this device. The inside of the metal plate, acting as a heatsink, paste-mates with the TPA3255:

while the outside, also thermal paste-augmented, is intended to further transfer the heat to the bottom of the aluminum case via the two screws I removed prior to pulling the PCB out of it:

Key to my theory are the words and phrases “bottom” and “thermal paste”. First off, it’s a bit odd to me that the TPA3255, the design’s obvious primary heat-generation source, is on the bottom of the PCB, given that (duh) heat rises. The tendency would then be for it to “cook” not only itself but also circuitry above it, on the other side of the PCB, although the metal plate-as-heatsink should at least somewhat mitigate this issue or at least spread it out.

This leads to my other observation: there’s scant thermal paste on either side of the plate for heat-transfer purposes, off the IC and ultimately to the outside world, and what exists is pockmarked. I’m therefore guessing that the TPA3255 thermally destroyed itself, and with that, the music died.
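A back-of-envelope junction-temperature estimate supports that conjecture. Every number below is a hypothetical assumption for illustration (the dissipation figure, the ambient temperature, and especially the thermal resistances of the paste interfaces), not a measurement of this amplifier; the point is only how sharply a degraded paste interface raises junction temperature.

```python
# Back-of-envelope junction-temperature estimate supporting the thermal
# conjecture above. All thermal resistances and the dissipation figure
# are hypothetical assumptions, not measured values for this amplifier.

def junction_temp(t_ambient_c, power_w, thetas_c_per_w):
    """Ambient temperature plus the rise across each thermal interface."""
    return t_ambient_c + power_w * sum(thetas_c_per_w)

# Assumed path: die-to-case, case-to-plate paste, plate-to-chassis paste
good_paste = [1.0, 0.5, 0.5]   # degC/W, full paste coverage (assumed)
poor_paste = [1.0, 3.0, 3.0]   # degC/W, scant, pockmarked paste (assumed)

for label, path in (("good paste", good_paste), ("poor paste", poor_paste)):
    print(f"{label}: Tj ~= {junction_temp(35.0, 15.0, path):.0f} degC")
```

Under these assumed numbers, 15 W of dissipation lands the junction at a comfortable 65°C with good paste but around 140°C with a degraded interface, uncomfortably close to the roughly 150°C region where Class D controllers typically shut down or, with repeated excursions, degrade.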

Wrapping up

Before I forget, let’s detach that mini-PCB I mentioned earlier. Here are the backside nuts:

And the front-side bolt heads:

Disconnect the ribbon cable:

And you already know what comes next:

Not too exciting, but I’ve gotta be thorough, right?

At this point, it occurred to me that I hadn’t yet taken any main-PCB side shots. Front:

Left side:

The back:

The right side:

And after removing the two screws surrounding the XLR input:

I was able to lift the back panel away, exposing to view even more PCB circuitry:

In closing, remember that “grey box with two slits on top and copper-color contents visible through them” that I mentioned earlier? Had I looked closely enough at the V3 Mono product page before proceeding, I would have already realized what it was (although, in my slight defense, the photo is mis-captioned there):

Then again, I also could have identified it via the photo I included in my previous write-up:

Instead, I proceeded to use my flat-head screwdriver to rip it off the PCB in the process of attempting to more conservatively detach just its “lid”:

As I already suspected from the “copper-color contents visible through the two slits on top”, it’s a dual wirewound inductor:

from Sumida, offering “superior signal purity and noise reduction, elevating the amplifier’s sound performance,” per Fosi Audio’s website.

Crossing through 3,000 words, I’ll wrap up at this point and turn the keyboard over to you for your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post The Fosi Audio V3 Mono: A compelling power amp with a tendency to blow appeared first on EDN.

Edge AI powers the next wave of industrial intelligence

Mon, 11/17/2025 - 16:00

Artificial intelligence is moving out of the cloud and into the operations that create and deliver products to us every day. Across manufacturing lines, logistics centers, and production facilities, AI at the edge is transforming industrial operations, bringing intelligence directly to the source of data. As the industrial internet of things (IIoT) matures, edge-based AI is no longer an optional enhancement; it’s the foundation for the next generation of productivity, quality, and safety in industrial environments.

This shift is driven by the need for real-time, contextually aware intelligence—systems that can see, hear, and even “feel” their surroundings, analyze sensor data instantly, and make split-second decisions without relying on distant cloud servers. From predictive maintenance and automated inspection to security monitoring and logistics optimization, edge AI is redefining how machines think and act.

Why industrial AI belongs at the edge

Traditional industrial systems rely heavily on centralized processing. Data from machines, sensors, and cameras is transmitted to the cloud for analysis before insights are sent back to the factory floor. While effective in some cases, this model is increasingly impractical and inefficient for modern, latency-sensitive operations.

Processing at the edge addresses these limitations. Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created: within or around the machine, gateway, or local controller itself. This local processing offers three primary advantages:

  • Low latency and real-time decision-making: In production lines, milliseconds matter. Edge-based AI can detect anomalies or safety hazards and trigger corrective actions instantly without waiting for a network round-trip.
  • Enhanced security and privacy: Industrial environments often involve proprietary or sensitive operational data. Processing locally minimizes data exposure and vulnerability to network threats.
  • Reduced power and connectivity costs: By limiting cloud dependency, edge systems conserve bandwidth and energy, a crucial benefit in large, distributed deployments such as logistics hubs or complex manufacturing centers.

These benefits have sparked a wave of innovation in AI-native embedded systems, designed to deliver high performance, low power consumption, and robust environmental resilience—all within compact, cost-optimized footprints.

Edge-based AI is the foundation for the next generation of productivity, quality, and safety in industrial environments, delivering low latency, real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs. (Source: Adobe AI Generated)

Localized intelligence for industrial applications

Edge AI’s success in IIoT is largely based on contextual awareness, which can be defined as the ability to interpret local conditions and act intelligently based on situational data. This requires multimodal sensing and inference across vision, audio, and even haptic inputs. In manufacturing, for example:

  • Vision-based inspection systems equipped with local AI can detect surface defects or assembly misalignments in real time, reducing scrap rates and downtime.
  • Audio-based diagnostics can identify early signs of mechanical failure by recognizing subtle deviations in sound signatures.
  • Touch or vibration sensors help assess machine wear, contributing to predictive maintenance strategies that reduce unplanned outages.

In logistics and security, edge AI cameras provide real-time monitoring, object detection, and identity verification, enabling autonomous access control or safety compliance without constant cloud connectivity. A practical example of this approach is a smart license-plate-recognition system deployed in industrial zones: a compact unit that processes high-resolution imagery locally to grant or deny vehicle access in milliseconds.

In all of these scenarios, AI inference happens on-site, reducing latency and power consumption while maintaining operational autonomy even in network-constrained environments.

Low power, low latency, and local learning

Industrial environments are unforgiving. Devices must operate continuously, often in high-temperature or high-vibration conditions, while consuming minimal power. This has made energy-efficient AI accelerators and domain-specific system-on-chips (SoCs) critical to edge computing.

A good example of this trend is the early adoption of the Synaptics Astra SL2610 SoC platform by Grinn, which has already resulted in a production-ready system-on-module (SOM), Grinn AstraSOM-261x, and a single-board computer (SBC). By offering a compact, industrial-grade module with full software support, Grinn enables OEMs to accelerate the design of new edge AI devices and shorten time to market. This approach helps bridge the gap between advanced silicon capabilities and practical system deployment, ensuring that innovations can quickly translate into deployable industrial solutions.

The Grinn–Synaptics collaboration demonstrates how industrial AI systems can now run advanced vision, voice, and sensor fusion models within compact, thermally optimized modules.

These platforms combine:

  • Embedded quad-core Arm processors for general compute tasks
  • Dedicated neural processing units (NPUs) delivering trillions of operations per second for inference
  • Comprehensive I/O for camera, sensor, and audio input
  • Industrial-grade security

Equally important is support for custom small language models (SLMs) and on-device training capabilities. Industrial environments are unique. Each factory line, conveyor system, or inspection station may generate distinct datasets. Edge devices that can perform localized retraining or fine-tuning on new sensor patterns can adapt faster and maintain high accuracy without cloud retraining cycles.

The Grinn OneBox AI-enabled industrial SBC, designed for embedded edge AI applications, leverages a Grinn AstraSOM compute module and the Synaptics SL1680 processor. (Source: Grinn Global)

Emergence of compact multimodal platforms

The recent introduction of next-generation SoCs such as Synaptics’ SL2610 underscores the evolution of edge AI hardware. Built for embedded and industrial systems, these platforms offer integrated NPUs, vision digital-signal processors, and sensor fusion engines that allow devices to perceive multiple inputs simultaneously, such as camera feeds, audio signals, or even environmental readings.

Such capabilities enable richer human-machine interaction in industrial contexts. For instance, a line operator can use voice commands and gestures to control inspection equipment, while the system responds with real-time feedback through both visual indicators and audio prompts.

Because the processing happens on-device, latency is minimal, and the system remains responsive even if external networks are congested. Low-power design and adaptive performance scaling also make these platforms suitable for battery-powered or fanless industrial devices.

From the cloud to the floor: practical examples

Collaborations like the Grinn–Synaptics development have produced compact, power-efficient edge computing modules for industrial and smart city deployments. These modules integrate high-performance neural processing, customized AI implementations, and ruggedized packaging suitable for manufacturing and outdoor environments.

Deployed in use cases such as automated access control and vision-guided robotics, these systems demonstrate how localized AI can replace bulky servers and external GPUs. All inference, from image recognition to object tracking, is performed on a module the size of a matchbox, using only a few watts of power.

The results:

  • Reduced latency from hundreds of milliseconds to under 10 ms
  • Lower total system cost by eliminating cloud compute dependencies
  • Improved reliability in areas with limited connectivity or strict privacy requirements

The same architecture supports multimodal sensing, enabling combined visual, auditory, and contextual awareness—key for applications such as worker safety systems that must recognize both spoken alerts and visual cues in noisy and complex factory environments.

Toward self-learning, sustainable intelligence

The evolution of edge AI is about more than just performance; it’s about autonomy and adaptability. With support for custom, domain-specific SLMs, industrial systems can evolve through continual learning. For example, an inspection model might retrain locally as lighting conditions or material types change, maintaining precision without manual recalibration.

Moreover, the combination of low-power processing and localized AI aligns with growing sustainability goals in industrial operations. Reducing data transmission, cooling needs, and cloud dependencies contributes directly to lower carbon footprints and energy costs, critical as industrial AI deployments scale globally.

Edge AI as the engine of industrial transformation

The rise of AI at the edge marks a turning point for IIoT. By merging context-aware intelligence with efficient, scalable compute, organizations can unlock new levels of operational visibility, flexibility, and resilience.

Edge AI is no longer about supplementing the cloud; it’s about bringing intelligence where it’s most needed, empowering machines and operators alike to act faster, safer, and smarter.

From the shop floor to the supply chain, localized, multimodal, and energy-efficient AI systems are redefining the digital factory. With continued innovation from technology partnerships that blend high-performance silicon with real-world design expertise, the industrial world is moving toward a future where every device is an intelligent, self-aware contributor to production excellence.

The post Edge AI powers the next wave of industrial intelligence appeared first on EDN.

The ecosystem view around an embedded system development

Mon, 11/17/2025 - 06:46

Like in nature, development tools for embedded systems form “ecosystems.” Some ecosystems are very self-contained, with little overlap with others, while other ecosystems are very open and broad, with support for everything but the kitchen sink. Moreover, developers and engineers have strong opinions (to put it mildly) on this subject.

So, we developed a greenhouse that sustains multiple ecosystems; the greenhouse demo we built shows multiple microcontrollers (MCUs) and their associated ecosystems working together.

The greenhouse demo

The greenhouse demo is a simplified version of a greenhouse controller. The core premise of this implementation is to intelligently open/close the roof to allow rainwater into the greenhouse. This is implemented using a motorized canvas tarp mechanism. The canvas tarp was created from old promotional canvas tote bags and sewn into the required shape.

The mechanical guides and lead screw for the roof are repurposed from a 3D printer with a stepper motor drive. An evaluation board is used as a rain sensor. Finally, a user interface panel enables a manual override of the automatic (rain) controls.

Figure 1 The greenhouse demo is mounted on a tradeshow wedge. Source: Microchip

It’s implemented as four function blocks:

  1. A user interface, capacitive touch controller with the PIC32CM GC Curiosity Pro (EA36K74A) in VS Code
  2. A smart stepper motor controller reference design built on the AVR EB family of MCUs in MPLAB Code Configurator Melody
  3. A main application processor with SAM E54 on the Xplained Pro development kit (ATSAME54-XPRO), running Zephyr RTOS
  4. A liquid detector using the MTCH9010 evaluation kit

The greenhouse demo outlined in this article is based on a retractable roof developed by Microchip’s application engineering team in Romania. This reference design is implemented in a slightly different fashion from the greenhouse, with the smart stepper motor controller interfacing directly with the MTCH9010 evaluation board to control the roof position. This configuration is ideal for applications where the application processor does not need to be aware of the current state of the roof.

Figure 2 This retractable roof demo was developed by a design team in Romania. Source: Microchip

User interface controller

Since the control panel for this greenhouse normally would be in an area where water should be expected, it was important to take this into account when designing the user interface. Capacitive touch panels are attractive as they have no moving parts and can be sealed under a panel easily. However, capacitive touch can be vulnerable to false triggers from water.

To minimize these effects, an MCU with an enhanced peripheral touch controller (PTC) was used to contain the effects of any moisture present. Development of the capacitive touch interface was aided with MPLAB Harmony and the capacitive touch libraries, which greatly reduce the difficulty in developing touch applications.

The user interface for this demo is composed of a PIC32CM GC Curiosity Pro (EA36K74A) development kit connected to a QT7 Xplained Pro Extension (ATQT7-XPRO) kit to provide a (capacitive) slider and two touch buttons.

Figure 3 The QT7 Xplained extension kit comes with a self-capacitance slider and two self-capacitance buttons, alongside eight LEDs for button-state and slider-position feedback. Source: Microchip

The two buttons allow the user to fully open or close the tarp, while the slider enables partial open or closed configurations. When the user interface is idle for 30 seconds or more, the demo switches back to the MTCH9010 rain sensor to automatically determine whether the tarp should be opened or closed.
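The 30-second fallback behavior described above amounts to a small state machine. A minimal C sketch of that logic follows; the 1-Hz tick, type names, and function names are illustrative assumptions, not Microchip's actual firmware:

```c
#include <stdbool.h>
#include <stdint.h>

/* Control source: manual user input vs. automatic rain sensor. */
typedef enum { CTRL_MANUAL, CTRL_AUTO } ctrl_source_t;

#define IDLE_TIMEOUT_TICKS 30u  /* 30 s at an assumed 1 Hz tick */

typedef struct {
    ctrl_source_t source;
    uint32_t idle_ticks;        /* ticks since the last touch event */
} ui_state_t;

/* Call once per 1-s tick; 'touched' is true when a button or
 * slider event occurred during the last tick. */
static void ui_tick(ui_state_t *s, bool touched)
{
    if (touched) {
        s->source = CTRL_MANUAL; /* any touch takes manual control */
        s->idle_ticks = 0;
    } else if (s->source == CTRL_MANUAL &&
               ++s->idle_ticks >= IDLE_TIMEOUT_TICKS) {
        s->source = CTRL_AUTO;   /* fall back to the rain sensor */
    }
}
```

Whichever source is active then decides the commanded tarp position; the touch-sensing details themselves stay inside the PTC peripheral and library.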

Smart stepper motor controller

The smart stepper motor controller is a reference design that uses the AVR EB family of MCUs to generate the waveforms required for full-stepping, half-stepping, and microstepping of a stepper motor. Because the MCU generates the waveforms, the motor can operate independently, rather than requiring logic or interaction from the main application processor(s) elsewhere in the system. It also lets the controller respond directly to inputs such as limit switches, mechanical stops, and quadrature encoders, or monitor other local signals.

Figure 4 Smart stepper motor reference design uses core independent peripherals (CIPs) inside the MCUs to microstep a bipolar winding stepper motor. Source: Microchip

The MCU receives commands from the application processor and executes them to move the tarp to a specified location. One of the nice things about this being a “smart” stepper motor controller is that the functionality can be adjusted in software. For instance, if analog signals or limit switches are added, the firmware can be modified to account for these signals.

While the PCB attached to the motor is custom, this function block can be replicated with the multi-phase power board (EV35Z86A), the AVR EB Curiosity Nano adapter (EV88N31A) and the AVR EB Curiosity Nano (EV73J36A).

Application processor and other ecosystems

The application processor in this demo is a SAM E54 MCU that runs Zephyr real-time operating system (RTOS). One of the biggest advantages of Zephyr over other RTOSes and toolchains is the way that the application programming interface (API) is kept uniform with clean divisions between the vendor-specific code and the abstracted, higher-level APIs. This allows developers to write code that works across multiple MCUs with minimal headaches.

Zephyr also has robust networking support and an ever-expanding list of capabilities that make it a must-have for complex applications. Zephyr is open source (Apache 2.0 licensing) with a very active user base and support for multiple different programming tools such as—but not limited to—OpenOCD, Segger J-Link and gdb.

Beyond the ecosystems used directly in the greenhouse demo, there are several other options. Some of the more popular examples include IAR Embedded Workbench, Arm Keil, MikroE’s Necto Studio and SEGGER Embedded Studio. These tools are premium offerings with advanced features and high-quality support to match.

For instance, I recently had an issue with booting Zephyr on an MCU where I could not access the usual debuggers and printf was not an option. I used SEGGER Ozone with a J-Link+ to troubleshoot this complex issue. Ozone is a special debug environment that eschews the usual IDE tabs to provide the developer with more specialized windows and screens.

In my case, the issue occurred where the MCU would start up correctly from the debugger, but not from a cold start. After some troubleshooting and testing, I eventually determined one of the faults was a RAM initialization error in my code. I patched the issue with a tiny piece of startup assembly that ran before the main kernel started up. The snippet of assembly that I wrote is attached below for anyone interested.
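The author's snippet itself isn't reproduced here, but the underlying technique, initializing a RAM region before the kernel's startup code runs, can be illustrated at the C level. This is a sketch, not the original fix; linker-provided symbol names such as __bss_start__ vary by toolchain and are assumptions:

```c
#include <stdint.h>

/* Zero a RAM region word by word, a C-level stand-in for the startup
 * assembly described in the text. On a real target, 'start' and 'end'
 * would come from linker symbols (e.g., __bss_start__/__bss_end__,
 * names vary by toolchain), and this would run before main/kernel
 * entry so no uninitialized RAM is read. */
static void zero_region(uint32_t *start, uint32_t *end)
{
    while (start < end)
        *start++ = 0u;
}
```

This explains the cold-start-only symptom: a debugger session often leaves RAM in a "clean" state from a prior run, masking the missing initialization.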

The moral of the story is that development environments offer unique advantages. An example of this is IAR adding support for Zephyr to its IDE solution. In many ways, the choice of what ecosystem to develop in is up to personal preference.

There isn’t really a wrong answer, as long as the ecosystem does what you need to make your design work. The greenhouse demo embodies this by showing multiple ecosystems and toolchains working together in a single system.

Robert Perkel is an application engineer at Microchip Technology. In this role, he develops technical content such as application notes, contributed articles, and design videos. He is also responsible for analyzing use-cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech where he earned a Bachelor of Science degree in Computer Engineering.

Related Content

The post The ecosystem view around an embedded system development appeared first on EDN.

The role of motion sensors in the industrial market

Fri, 11/14/2025 - 16:00

The future of the industrial market is being established by groundbreaking technologies that promise to reveal unique potential and redefine what is possible. These innovations range from collaborative robots (cobots) and artificial intelligence to the internet of things, digital twins, and cloud computing.

Cobots are not just tools but partners, empowering human workers to achieve greater creativity and productivity together. AI is ushering industries into a new era of intelligence, where data-driven insights accelerate innovation and transform challenges into opportunities.

The IoT is weaving vast, interconnected machines and systems that enable seamless communication and real-time responsiveness like never before. Digital twins bring imagination to life by creating virtual environments where ideas can be tested, refined, and perfected before they touch reality. Cloud computing serves as the backbone of this revolution, offering limitless power and connectivity to drive brave visions forward.

Together, these technologies are inspiring a new industrial renaissance, where innovation, sustainability, and human initiative converge to build a smarter, more resilient world.

The role of sensors

Sensors are the silent leaders driving the industrial market’s transformation into a realm of intelligence and possibility. Serving as the “eyes and ears” of smart factories, these devices unlock the power of real-time data, enabling industries to look beyond the surface and anticipate the future. By continuously sensing pressure, temperature, position, vibration, and more, sensors keep equipment and processes under constant watch and bring machines to life, turning them into connected, responsive entities within the industrial IoT (IIoT).

This flow of information accelerates innovation, enables predictive maintenance, and enhances safety. Sensors do not just monitor; they usher in a new era where efficiency meets sustainability, where every process is optimized, and where industries embrace change with confidence. In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries.

Challenges for industrial motion sensing applications

Sensors in industrial environments face several significant challenges. They must operate continuously for years on battery power without failure. Additionally, it is crucial that they capture every critical event to ensure no incidents are missed. Sensors must provide accurate and precise tracking to manage processes effectively. Simultaneously, they need to be compact yet powerful, integrating multiple functions into a small device.

Most importantly, sensors must deliver reliable tracking and data collection in any environment—whether harsh, noisy, or complex—ensuring consistent performance regardless of external conditions. Overcoming these challenges is essential to making factories smarter and more efficient through connected technologies, such as the IIoT and MEMS motion sensors.

MEMS inertial sensors are essential devices that detect motion by measuring accelerations, vibrations, and angular rates, ensuring important events are never missed in an industrial environment. Customers need these motion sensors to work efficiently while saving power and to keep performing reliably even in tough conditions, such as high temperatures.

However, there are challenges to overcome. Sometimes sensors can become overwhelmed, causing them to miss important impact or vibration details. Using multiple sensors to cover different motion ranges can be complicated, and managing power consumption in an IIoT node is also a concern.

There is a tradeoff between accuracy and range: Sensors that measure small movements are very precise but can’t handle strong impacts, while those that detect strong impacts are less accurate. In industrial settings, sensors must be tough enough to handle harsh environments while still providing reliable and accurate data. Solving these challenges is key to making MEMS sensors more effective in many applications.
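The accuracy-versus-range tradeoff is visible in the raw-count arithmetic: for a fixed-width output word, widening the full-scale range coarsens each LSB. A hedged sketch, assuming a 16-bit two's-complement output (real parts specify exact mg/LSB sensitivities in their datasheets):

```c
#include <stdint.h>

/* Convert a signed 16-bit raw accelerometer sample to g for a given
 * full-scale range. Assumes a symmetric output where 32768 LSB would
 * correspond to the full-scale value; illustrative only. */
static double raw_to_g(int16_t raw, double full_scale_g)
{
    return (double)raw * full_scale_g / 32768.0;
}
```

With this scaling, one LSB is about 0.5 mg at ±16 g but roughly 7.8 mg at ±256 g, which is why covering both regimes has traditionally required separate sensors.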

How the new ST industrial IMU can help

Inertial measurement units (IMUs) typically integrate accelerometers to measure linear acceleration and gyroscopes to detect angular velocity. These devices often deliver space and cost savings while reducing design complexity.

One example is ST’s new ISM6HG256X intelligent IMU. This MEMS sensor is the industry’s first IMU for the industrial market to integrate high-g and low-g sensing into a single package with advanced features such as sensor fusion and edge processing.

The ISM6HG256X addresses key industrial market challenges by combining a gyroscope with an accelerometer built on a single mechanical structure, whose wide dynamic range captures both low-g vibrations (±16 g) and high-g shocks (±256 g); this effectively eliminates the need for multiple sensors and simplifies the system architecture. The compact device leverages embedded edge processing and adaptive self-configurability to optimize performance while significantly reducing power consumption, thereby extending battery life.

Engineered to withstand harsh industrial environments, the IMU reliably operates at temperatures up to 105°C, ensuring consistent accuracy and durability under demanding conditions. Supporting Industry 5.0 initiatives, the sensor’s advanced sensing architecture and edge processing capabilities enable smarter, more autonomous industrial systems that drive innovation.

Unlocking smarter tracking and safety, this integrated MEMS motion sensor is designed to meet the demanding needs of the industrial sector. It enables real-time asset tracking for logistics and shipping, providing up-to-the-minute information on location, status, and potential damage. It also enhances worker safety through wearable devices that detect falls and impacts, instantly triggering emergency alerts to protect personnel.

Additionally, it supports condition monitoring by accurately tracking vibration, shock, and precise motion of industrial equipment, helping to prevent downtime and costly failures. In factory automation, the solution detects unusual vibrations or impacts in robotic systems instantly, ensuring smooth and reliable operation. By combining tracking, monitoring, and protection into one component, industrial operations can achieve higher efficiency, safety, and reliability with streamlined system design.

The ISM6HG256X IMU sensor combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement. (Source: STMicroelectronics)

As the industrial market landscape evolves toward greater flexibility, sustainability, and human-centered innovation, industrial IMU solutions are aligned with the key drivers shaping the future of the industrial market. IMUs can enable precise motion tracking, reliable condition monitoring, and energy-efficient edge processing while supporting the decentralization of production and enhancing resilience and agility within supply chains.

Additionally, the integration of advanced sensing technologies contributes to sustainability goals by optimizing resource use and minimizing waste. As manufacturers increasingly adopt AI-driven collaboration and advanced technology integration, IMU solutions provide the critical data and reliability needed to drive innovation, customization, and continuous improvement across the industry.

The post The role of motion sensors in the industrial market appeared first on EDN.

Lightning and trees

Fri, 11/14/2025 - 15:00

We’ve looked at lightning issues before. Please see “Ground strikes and lightning protection of buried cables.”

This headline below was found online at the URL hyperlinked here.

Recent headline from the local paper. Source: ABC7NY

This ABC NY article describes how a teenage boy tried to take refuge from the rain in a thunderstorm by getting under the canopy of a tree. In that article, we find this quote: “The teen had no way of knowing that the tree would be hit by lightning.”

This quote, apparently the opinion of the article’s author, is absolutely incorrect. It is total and unforgivable rubbish.

Even when I was knee-high to Jiminy Cricket, I was told over and over and over by my parents NEVER to try to get away from rain by hiding under a tree. Any tree that you come across will have its leaves reaching way up into the air, and those wet leaves are a prime target for a lightning strike, as illustrated in this screenshot:

Conceptual image of lightning striking tree. Source: Stockvault

Somebody didn’t impart this basic safety lesson to this teenager. It is miraculous that this teenager survived the event. The above article cites second-degree burns, but a radio item that I heard about this incident also cites nerve damage and a great deal of lingering pain.

Recovery is expected.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Lightning and trees appeared first on EDN.

10BASE-T1S endpoints simplify zonal networks

Thu, 11/13/2025 - 19:55

Microchip’s LAN866x 10BASE-T1S endpoint devices use the Remote Control Protocol (RCP) to extend Ethernet connectivity in in-vehicle networks. The endpoints enable centralized control of edge nodes for data streaming and device management, while the 10BASE-T1S multidrop topology supports an all-Ethernet zonal architecture.

LAN866x endpoints serve as bridges that translate Ethernet packets directly to local interfaces for lighting control, audio transmission, and sensor or actuator management over the network. This approach eliminates node-specific software programming, simplifying system architecture and reducing both hardware and engineering costs.

The RCP-enabled endpoint devices join Microchip’s Single Pair Ethernet (SPE) line of transceivers, bridges, switches, and development tools. These components enable reliable, high-speed data transmission over a single twisted pair cable supporting 10BASE-T1S, 100BASE-T1, and 1000BASE-T1.

The LAN8660 control, LAN8661 lighting, and LAN8662 audio endpoints are available in limited sampling. For more information about Microchip’s automotive Ethernet products, including these endpoints, click here.

Microchip Technology 

The post 10BASE-T1S endpoints simplify zonal networks appeared first on EDN.

Development kit enables low-power presence detection

Thu, 11/13/2025 - 19:55

SPARK’s Presence Detection Kit (PDK), powered by the SR1120 LE-UWB transceiver, delivers low-power, robust sensing for connected devices. Its low-energy ultra-wideband (LE-UWB) technology helps designers overcome the high power consumption and interference challenges of Bluetooth, Wi-Fi, and conventional UWB.

LE-UWB supports unidirectional and bidirectional communication, ultra-low-power beaconing with configurable detection zones, and line-of-sight time-of-flight (ToF) measurement for precise proximity and distance sensing. SPARK reports that its LE-UWB technology consumes less than one-tenth the power of Bluetooth/BLE beaconing (30 µW at 4 Hz) and delivers more than 20× the power efficiency of standard UWB.
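For reference, the distance arithmetic behind two-way ToF ranging is straightforward. This sketch shows only the underlying math, not SPARK's API; the constant and function names are assumptions:

```c
/* Approximate propagation speed of an RF signal in air, m/s. */
#define SPEED_OF_LIGHT_M_PER_S 299702547.0

/* Two-way ranging: distance is half the round-trip flight time
 * times the propagation speed. tof_round_trip_s is assumed to
 * already exclude the responder's turnaround time, which real
 * ranging protocols must measure and subtract. */
static double tof_distance_m(double tof_round_trip_s)
{
    return SPEED_OF_LIGHT_M_PER_S * tof_round_trip_s / 2.0;
}
```

At these speeds, 1 m of distance corresponds to only about 6.7 ns of round-trip time, which is why UWB's fine timing resolution matters for precise proximity sensing.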

SPARK provides an energy-optimized firmware stack for presence detection, including APIs for beaconing, ranging, data transmission, and OTA firmware updates. Reference hardware kits, demo applications, and GUIs allow engineers to evaluate detection performance, adjust detection zones, and accelerate prototyping. The PDK hardware is selected to optimize performance, power, and cost, and integrates across a broad range of MCUs and software architectures.

Presence detection kits are available now. For details on board and kit configurations, contact NA_sales@sparkmicro.com.

SPARK Microsystems 

The post Development kit enables low-power presence detection appeared first on EDN.

High-density ATE supplies boost test capabilities

Thu, 11/13/2025 - 19:55

Keysight has expanded its power test portfolio with three families of ATE system power supplies spanning 1.5 kW to 12 kW. The RP5900 series of regenerative DC power supplies, EL4900 series of regenerative DC electronic loads, and DP5700 series of system DC power supplies deliver high density, bidirectional, and regenerative capabilities, paired with intelligent automation software for efficient design validation.

This range enables engineers to validate complex, multi-kilowatt devices with greater precision and repeatability while using less space and energy. Supplies deliver up to 6 kW in 1U or 12 kW in 2U, with full regenerative capability, offering two to three times more channels in the same footprint as previous systems.

Keysight’s automated power suite lets engineers run complex tests—such as long-duration cycling, state-of-charge battery emulation, and transient replication—consistently and efficiently. It includes removable SD memory for secure workflows between classified and open labs, software that complies with NIST SP800-171 SSDF standards, and regenerative operation that returns energy to the grid, reducing waste and supporting sustainability.

For more information about each of the three high-density power series, click here.

Keysight Technologies 

The post High-density ATE supplies boost test capabilities appeared first on EDN.

Partners bring centimeter-level GNSS to IoT

Thu, 11/13/2025 - 19:55

Quectel is bundling its Real-Time Kinematic (RTK)-capable GNSS modules and antennas with Swift Navigation’s Skylark RTK correction service. Together, the hardware and service enable centimeter-level positioning accuracy for mass-market IoT applications and streamline RTK adoption.

Partnering with Swift allows Quectel to deliver optimized solutions for specific applications, helping equipment manufacturers navigate the complexities of RTK adoption. The Quectel RTK Correction Solution supports a wide range of use cases, including robotics, automotive, micro-mobility, precision agriculture, surveying, and mining. Swift’s Skylark provides multi-constellation, multi-frequency RTK corrections with broad geographic coverage across North America, Europe, and Asia-Pacific.

The RTK global offering ensures consistent compatibility and performance across regions, supporting quad-band GNSS RTK modules such as the LG290P, LG580P, and LG680P, as well as the dual-band LC29H series. These modules maintain exceptional RTK accuracy even in challenging environments. Quectel complements its hardware with full-stack services, including engineering support, precision antenna provisioning, and tuning.

Quectel

Swift Navigation 

The post Partners bring centimeter-level GNSS to IoT appeared first on EDN.

Multiprotocol firmware streamlines LoRa IoT design

Thu, 11/13/2025 - 19:55

Semtech’s Unified Software Platform (USP) for its LoRa Plus transceivers enables multiprotocol IoT deployments on a single hardware platform. It manages LoRaWAN, Wireless M-Bus, Wi-SUN FSK, and proprietary protocols, eliminating the need for protocol-specific hardware variants.

LoRa Plus LR20xx transceivers integrate 4th-generation LoRa IP that supports both terrestrial and non-terrestrial networks across sub-GHz, 2.4-GHz ISM, and licensed S-bands. The LoRa USP provides a unified firmware ecosystem for multiprotocol operation on various MCU platforms through open-source environments such as Zephyr. It also offers backward-compatible build options for Gen 2 SX126x and Gen 3 LR11xx devices.

LoRa USP succeeds LoRa Basics Modem as Semtech’s multiprotocol firmware platform. Both platforms share the same set of APIs, ensuring a seamless transition to the USP version. USP supports both bare-metal and Zephyr OS implementations.

LoRa USP product page 

Semtech

The post Multiprotocol firmware streamlines LoRa IoT design appeared first on EDN.

Designer’s guide: PMICs for industrial applications

Thu, 11/13/2025 - 16:00

Power management integrated circuits (PMICs) are an essential component in the design of any power supply. Their main function is to integrate several complex features, such as switching and linear power regulators, electrical protection circuits, battery monitoring and charging circuits, energy-harvesting systems, and communication interfaces, into a single chip.

Compared with a solution based on discrete components, PMICs greatly simplify the development of the power stage, reducing the number of components required, accelerating validation and therefore the design’s time to market. In addition, PMICs qualified for specific applications, such as automotive or industrial, are commercially available.

In industrial and industrial IoT (IIoT) applications, PMICs address key power challenges such as high efficiency, robustness, scalability, and flexibility. The use of AI techniques is being investigated to improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Achieving high efficiency

Industrial and IIoT applications require multiple power lines with different voltage and current requirements. Logic processing components, such as microcontrollers (MCUs) and FPGAs, require very low voltages, while peripherals, such as GPIOs and communication interfaces, require voltages of 3.3 V, 5 V, or higher.

These requirements are now met by multichannel PMICs, which integrate switching buck, boost, or buck-boost regulators, as well as one or more linear regulators, typically of the low-dropout (LDO) type, and power switches, which are useful for motor control. Switching regulators offer very high efficiency but generate electromagnetic noise related to the charging and discharging of the inductor.

LDO regulators, which achieve high efficiency only when the output voltage differs slightly from the input voltage to the converter, are instead suitable for low-noise applications such as sensors and, more generally, where analog voltages with very low amplitude need to be managed.

Besides multiple power rails, industrial and IIoT applications require solutions with high efficiency. This requirement is essential for prolonging battery life, reducing heat dissipation, and saving space on the printed-circuit board (PCB) using fewer components.

To achieve high efficiency, one of the first parameters to consider is the quiescent current (IQ), which is the current that the PMIC draws when it is not supplying any load, while keeping the regulators and other internal functions active. A low IQ value reduces power losses and is essential for battery-powered applications, enabling longer battery operation.

PMICs are now commercially available that integrate regulators with very low IQ values, on the order of microamps or less. However, a low IQ value should not compromise transient response, another parameter to consider for efficiency. Transient response, or response time, indicates the time required by the PMIC to adapt to sudden load changes, such as when switching from no load to an active load. In general, depending on the specific application, it is advisable to find the right compromise between these two parameters.

Nordic Semiconductor’s nPM2100 (Figure 1) is an example of a low-power PMIC. Integrating an ultra-efficient boost regulator, the nPM2100 provides a very low IQ, addressing the needs of various battery-powered applications, including Bluetooth asset tracking, remote controls, and smart sensors.

The boost regulator can be powered from an input range of 0.7 to 3.4 V and provides an output voltage in the range of 1.8 V to 3.3 V, with a maximum output current of 150 mA. It also integrates an LDO/load switch that provides up to 50-mA output current with an output voltage in the range of 0.8 V to 3.0 V.

The nPM2100’s boost regulator offers an IQ of 150 nA and achieves up to 95% power-conversion efficiency at 50 mA and 90.5% efficiency at 10 µA. The device also has a 35-nA low-current ship mode that allows a product to be transported without removing the installed battery. Multiple options are available for waking the device from this low-power state.

An ultra-low-power wakeup timer is also available. This is suitable for timed wakeups, such as Bluetooth LE advertising performed by a sensor that remains in an idle state for most of the time. In this hibernate state, the maximum current absorbed by the device is 200 nA.

Figure 1: Nordic Semiconductor’s nPM2100 PMIC can be easily interfaced to low-power system-on-chips or MCUs, such as Nordic’s nRF52, nRF53, and nRF54 Series. (Source: Nordic Semiconductor)

Another relevant parameter that helps to increase efficiency is dynamic voltage and frequency scaling (DVFS).

When powering logic devices built with CMOS technology, such as common MCUs, processors, and FPGAs, a distinction can be made between static and dynamic power consumption. While the former is simply the product of the supply voltage and the current drawn in idle conditions, dynamic power is expressed by the following formula:

Pdynamic = C × VCC² × fSW

where C is the switched load capacitance, VCC is the voltage applied to the device, and fSW is the switching frequency. This formula shows that the dissipated power has a quadratic relationship with voltage and a linear relationship with frequency. The DVFS technique works by reducing these two electrical parameters, adapting them to the dynamic requirements of the load.

Consider now a sensor that transmits data sporadically and for short intervals, or an industrial application, such as a data center’s board running AI models. By reducing both voltage and frequency when they are not needed, DVFS can optimize power management, enabling significant improvements in energy efficiency.

NXP Semiconductors’ PCA9460 is a 13-channel PMIC specifically designed for low-power applications. Supporting the i.MX 8ULP ultra-low-power processor family, it provides four high-efficiency 1-A step-down regulators, four VLDOs, one SVVS LDO, and four 150-mΩ load switches, all enclosed in a 7 × 6-bump-array, 0.4-mm-pitch WSCSP42 package.

The four buck regulators offer an ultra-low IQ of 1.5 μA at low-power mode and 5.5 μA at normal mode, while the four LDOs achieve an IQ of 300 nA. Two buck regulators support smart DVFS, enabling the PMIC to always set the right voltage on the processors it is powering. This feature, enabled through specific pins of the PMIC, minimizes the overall power consumption and increases energy efficiency.

Energy harvesting

The latest generation of PMICs has introduced the possibility of obtaining energy from various sources such as light, heat, vibrations, and radio waves, opening up new scenarios for systems used in IIoT and industrial environments. This feature is particularly important in IIoT and wireless devices, where maintaining a continuous power source for long periods of time is a significant challenge.

Nexperia’s NEH71x0 low-power PMIC (Figure 2) is a full power management solution integrating advanced energy-harvesting features. Harvesting energy from ambient power sources, such as indoor and outdoor PV cells, kinetic (movement and vibrations), piezo, or a temperature gradient, this device allows designers to extend battery life or recharge batteries and supercapacitors.

With an input power range from 15 μW to 100 mW, the PMIC achieves an efficiency up to 95%, features an advanced maximum power-point tracking block that uses a proprietary algorithm to deliver the highest output to the storage element, and integrates an LDO/load switch with a configurable output voltage from 1.2 V to 3.6 V.

Reducing the bill of materials and PCB space, the NEH71x0 eliminates the need for an external inductor, offering a compact footprint in a 4 × 4-mm QFN28 package. Typical applications include remote controls, smart tags, asset trackers, industrial sensors, environmental monitors, tire pressure monitors, and any other IIoT application.

Figure 2: Nexperia’s NEH71x0 energy-harvesting PMIC can convert energy with an efficiency of up to 95%. (Source: Nexperia)

PMICs for AI and AI in PMICs

To meet the growing demand for power in the industrial sector and data centers, Microchip Technology Inc. has introduced the MCP16701, a PMIC specifically designed to power high-performance logic devices, such as Microchip’s PIC64GX microprocessors and PolarFire FPGAs. The device integrates eight 1.5-A buck converters that can be connected in parallel, four 300-mA LDOs, and a controller for driving external MOSFETs.

The MCP16701 offers a small footprint of 8 × 8 mm in a VQFN package (Figure 3), enabling a 48% reduction in PCB area and a 60% reduction in the number of components compared with a discrete solution. All converters, which can be connected in parallel to achieve a higher output current, share the same inductor.

A unique feature of this PMIC is its ability to dynamically adjust the output voltage on all converters in steps of 12.5 mV or 25 mV, with an accuracy of ±0.8% over the temperature range. This flexibility allows designers to precisely adjust the voltage supplied to loads, optimizing energy efficiency and system performance.

Figure 3: Microchip’s MCP16701 enables engineers to fine-tune power delivery, improving system efficiency and performance. (Source: Microchip Technology Inc.)

As in many areas of modern electronics, AI techniques are also being studied and introduced in the power management sector. This area of study is referred to as cognitive power management. PMICs, for example, can use machine-learning techniques to predict load evolution over time, adjusting the output voltage value in real time.

Tools such as PMIC.AI, developed by AnDAPT, use AI to optimize PMIC architecture and component selection, while Alif Semiconductor’s autonomous intelligent power management (aiPM) tool dynamically manages power based on AI workloads. These solutions enable voltage scaling, increasing system efficiency and extending battery life.

The post Designer’s guide: PMICs for industrial applications appeared first on EDN.

Basic design equations for three precision current sources

Thu, 11/13/2025 - 15:00

A frequently encountered category of analog system component is the precision current source. Many good designs are available, but concise and simple arithmetic for choosing the component values necessary to tailor them to specific applications isn’t always provided. I guess some designers feel such tedious details are just too trivially obvious to merit mentioning. But I sometimes don’t feel that way.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Here are some examples I think some folks might find useful. I hope they won’t feel too terribly obvious, trivial, or tedious.

The circuit in Figure 1 is versatile and capable of high performance.

Figure 1 A simple high-accuracy current source that can source current with better than 1% accuracy.

With suitable component choices, this circuit can source current with better than 1% accuracy, with Q1 drain currents ranging from < 1 mA to > 10 A, while working with power supply voltages (Vps) from < 5 V to > 100 V.

Here are some helpful hints for resistor values, resistor wattages, and safety zener D1. First note

  • Vps = power supply voltage
  • R1(W), Q1(W), and R2(W) = respective component power dissipation
  • Id = Q1 drain current in amps

Adequate heat sinking for Q1(W) is assumed. Another assumption is:

Vps > Q1 (Vgs ON voltage) + 1.24 + R1*100µA

The design equations are as follows:

  1. R1 = (Vps – 1.24)/1mA
  2. R1(W) = R1/1E6
  3. Q1(W) = (Vps – Vload – 1.24)*Id
  4. R2 = 1.24/Id
  5. R2(W) = 1.24 Id
  6. R2 precision 1% or better at the temperature produced by #5 heat dissipation
  7. D1 is needed only if Vps > 15V

Figure 2 substitutes an N-channel MOSFET for Figure 1’s Q1 and an anode-referenced 431 regulator chip in place of the cathode-referenced 4041 to produce a very similar current sink. Its design equations are identical.

Figure 2 A simple, high-accuracy current sink uses identical design math.

Okay, okay, I can almost hear the (very reasonable) objection that, for these simple circuits, the design math really was pretty much tedious, trivial, and obvious. 

So I’ll finish with a much less obvious and more creative example from frequent contributor Christopher Paul’s DI “Precision, voltage-compliant current source.”

Taking parts parameters from Christopher Paul’s Figure 3, we can define:

  1. Vs = chosen voltage across the R3R4 divider
  2. V5 = voltage across R5
  3. Id = chosen application-specific M1 drain current

Then:

  1. Vs = 5V
  2. V5 = 5V – 0.65V = 4.35V
  3. R5 = 4.35V/150µA ≈ 30kΩ
  4. I4 = Id – 290µA
  5. R3 = 1.24/I4
  6. R4 = (Vs – 1.24)/I4 = 3.76/I4
  7. R3(W) = 1.24 I4
  8. R4(W) = 3.76 I4
  9. M1(W) = Id(Vs – Vd)

For example, if Id = 50 mA and Vps = 15 V, then:

  •  I4 = 49.7 mA
  • R5 = 30 kΩ
  • R4 = 75.7 Ω
  • R3 = 25.2 Ω
  • R3(W) = 1.24 I4 ≈ 62 mW (use a 100-mW rated part)
  • R4(W) = 3.76 I4 ≈ 187 mW (use a 200-mW rated part)
  • M1(W) = 500 mW

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content

The post Basic design equations for three precision current sources appeared first on EDN.

How to limit TCP/IP RAM usage on STM32 microcontrollers

Thu, 11/13/2025 - 09:14

The TCP/IP functionality of a connected device uses dynamic RAM allocation because of the unpredictable nature of network behavior. For example, if a device serves a web dashboard, we cannot control how many clients might connect at the same time. Likewise, if a device communicates with a cloud server, we may not know in advance how large the exchanged messages will be.

Therefore, limiting the amount of RAM used by the TCP/IP stack improves the device’s security and reliability, ensuring it remains responsive and does not crash due to insufficient memory.

Microcontroller RAM overview

It’s common that on microcontrollers, available memory resides in several non-contiguous regions. Each of these regions can have different cache characteristics, performance levels, or power properties, and certain peripheral controllers may only support DMA operations to specific memory areas.

Let’s take the STM32H723ZG microcontroller as an example. Its datasheet, in section 3.3.2, defines the embedded SRAM regions: ITCM and DTCM RAM tightly coupled to the core, AXI SRAM in domain D1, SRAM1 and SRAM2 in domain D2, and SRAM4 in domain D3.

Here is an example linker script snippet for this microcontroller, generated by CubeMX:
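A representative MEMORY block is shown below; the origins and lengths are typical for the STM32H723ZG, but verify them against your own generated file:

```
MEMORY
{
  ITCMRAM (xrw) : ORIGIN = 0x00000000, LENGTH = 64K
  DTCMRAM (xrw) : ORIGIN = 0x20000000, LENGTH = 128K
  FLASH   (rx)  : ORIGIN = 0x08000000, LENGTH = 1024K
  RAM_D1  (xrw) : ORIGIN = 0x24000000, LENGTH = 320K
  RAM_D2  (xrw) : ORIGIN = 0x30000000, LENGTH = 32K
  RAM_D3  (xrw) : ORIGIN = 0x38000000, LENGTH = 16K
}
```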

Ethernet DMA memory

We can clearly see that RAM is split into several regions. The STM32H723ZG device includes a built-in Ethernet MAC controller that uses DMA for its operation. It’s important to note that the DMA controller is in domain D2, meaning it cannot directly access memory in domain D1. Therefore, the linker script and source code must ensure that Ethernet DMA data structures are placed in domain D2; for example, in RAM_D2.

To achieve this, first define a section in the linker script and place it in the RAM_D2 region:
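For example, an output section placed in RAM_D2 could look like this (the section name .EthBuffersSection is an assumption; match whatever name your project uses):

```
/* In the SECTIONS block of the linker script */
.EthBuffersSection (NOLOAD) :
{
  . = ALIGN(32);
  *(.EthBuffersSection)  /* Ethernet DMA descriptors and buffers */
  . = ALIGN(32);
} >RAM_D2
```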

Second, the Ethernet driver source code must put respective data into that section. It may look like this:
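A sketch using STM32 HAL types follows; the descriptor counts and section name track common CubeMX conventions but are assumptions here:

```c
/* In the Ethernet driver (e.g., ethernetif.c). The section name must
   match the one declared in the linker script. */
#include "stm32h7xx_hal.h"

__attribute__((section(".EthBuffersSection"), aligned(32)))
static ETH_DMADescTypeDef DMARxDscrTab[ETH_RX_DESC_CNT]; /* RX descriptors */

__attribute__((section(".EthBuffersSection"), aligned(32)))
static ETH_DMADescTypeDef DMATxDscrTab[ETH_TX_DESC_CNT]; /* TX descriptors */

__attribute__((section(".EthBuffersSection"), aligned(32)))
static uint8_t Rx_Buff[ETH_RX_DESC_CNT][ETH_MAX_PACKET_SIZE]; /* RX buffers */
```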

Heap memory

The next important part is the microcontroller’s heap memory. The standard C library provides two basic functions for dynamic memory allocation: malloc() and free().

Typically, ARM-based microcontroller SDKs ship with the ARM GCC compiler, which includes the Newlib C library. This library, like many others, has a concept of so-called “syscalls”: low-level routines that the user can override and that are called by the standard C functions. In our case, the malloc() and free() standard C routines call the _sbrk() syscall, which firmware code can override.

It’s typically done in the syscalls.c or sysmem.c file, and may look like this:
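Here is a simplified, self-contained sketch of such an override; to keep it compilable on its own, a static array stands in for the heap region that the real sysmem.c bounds with linker symbols such as _end and _estack:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for the heap region; real code uses linker symbols instead. */
static uint8_t heap_region[1024];

void *_sbrk(ptrdiff_t incr)
{
  static uint8_t *heap_end = heap_region;
  uint8_t *prev = heap_end;

  if (heap_end + incr > heap_region + sizeof(heap_region)) {
    errno = ENOMEM;   /* out of heap: report failure to malloc() */
    return (void *)-1;
  }
  heap_end += incr;   /* grow (or shrink) the program break */
  return prev;
}
```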

As we can see, _sbrk() operates on a single contiguous memory region.

That means such an implementation cannot span several RAM regions. There are more advanced implementations, like FreeRTOS’s heap_5.c, which can use multiple RAM regions and provides the pvPortMalloc() and pvPortFree() functions.

In any case, standard C functions malloc() and free() provide heap memory as a shared resource. If several subsystems in a device’s firmware use dynamic memory and their memory usage is not limited by code, any of them can potentially exhaust the available memory. This can leave the device in an out-of-memory state, which typically causes it to stop operating.

Therefore, the solution is to have every subsystem that uses dynamic memory allocation operate within a bounded memory pool. This approach protects the entire device from running out of memory.

Memory pools

The idea behind a memory pool is to split a single shared heap—with a single malloc and free—into multiple “heaps” or memory pools, each with its own malloc and free. The pseudo-code might look like this:
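A minimal sketch of that idea, using a simple bump allocator for brevity (a real pool would also support freeing individual blocks; all names are illustrative):

```c
#include <stddef.h>

/* Each subsystem gets its own pool so it cannot exhaust the shared heap. */
struct pool {
  unsigned char *mem; /* backing storage, reserved at startup */
  size_t size;        /* pool capacity in bytes */
  size_t used;        /* bytes handed out so far */
};

static void pool_init(struct pool *p, void *mem, size_t size) {
  p->mem = mem;
  p->size = size;
  p->used = 0;
}

/* Bump allocation: the matching pool_free() would be a no-op here. */
static void *pool_malloc(struct pool *p, size_t n) {
  n = (n + 7u) & ~(size_t)7u;             /* round up to 8-byte alignment */
  if (n > p->size - p->used) return NULL; /* this pool is full; others unaffected */
  void *ptr = p->mem + p->used;
  p->used += n;
  return ptr;
}
```

An out-of-memory condition in one pool then returns NULL to that subsystem only, instead of starving the rest of the firmware.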

The next step is to make each firmware subsystem use its own memory pool. This can be achieved by creating a separate memory pool for each subsystem and using the pool’s malloc and free functions instead of the standard ones.

In the case of a TCP/IP stack, this would require all parts of the networking code—driver, HTTP/MQTT library, TLS stack, and application code—to use a dedicated memory pool. This can be tedious to implement manually.

RTOS memory pool API

Some RTOSes provide a memory pool API. For example, Zephyr provides memory heaps:
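For example, a dedicated heap for the networking code could be declared with Zephyr’s k_heap API (the net_heap name and the 50-KB size are illustrative):

```c
#include <zephyr/kernel.h>

/* A dedicated 50-KB heap bounding the network subsystem. */
K_HEAP_DEFINE(net_heap, 50 * 1024);

void *net_alloc(size_t n) {
  /* K_NO_WAIT: fail immediately instead of blocking when the heap is full */
  return k_heap_alloc(&net_heap, n, K_NO_WAIT);
}

void net_free(void *p) {
  k_heap_free(&net_heap, p);
}
```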

The other example of an RTOS that provides memory pools is ThreadX:
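A comparable sketch using a ThreadX byte pool (again, the names and the 50-KB size are illustrative):

```c
#include "tx_api.h"

/* A byte pool bounding the network subsystem's allocations. */
static TX_BYTE_POOL net_pool;
static UCHAR net_pool_mem[50 * 1024];

void net_pool_init(void) {
  tx_byte_pool_create(&net_pool, "net pool", net_pool_mem,
                      sizeof(net_pool_mem));
}

void *net_alloc(ULONG n) {
  VOID *p = TX_NULL;
  if (tx_byte_allocate(&net_pool, &p, n, TX_NO_WAIT) != TX_SUCCESS) {
    return NULL;  /* pool exhausted; the rest of the system is unaffected */
  }
  return p;
}

void net_free(void *p) {
  tx_byte_release(p);
}
```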

Using external allocator

The other alternative is to use an external allocator. There are many implementations available. Here are some notable ones:

  • umm_malloc is specifically designed to work with the ARM7 embedded processor, but it should work on many other 32-bit processors, as well as 16- and 8-bit processors.
  • o1heap is a highly deterministic constant-complexity memory allocator designed for hard real-time high-integrity embedded systems. The name stands for O(1) heap.

Example: Mongoose and O1Heap

The Mongoose embedded TCP/IP stack makes it easy to limit its memory usage, because Mongoose uses its own functions, mg_calloc() and mg_free(), to allocate and release memory. The default implementation uses the C standard library functions calloc() and free(), but Mongoose allows users to override these functions with their own implementations.

We can preallocate memory for Mongoose at firmware startup (for example, 50 KB) and use the o1heap library to manage that preallocated block, implementing mg_calloc() and mg_free() on top of it. Here are the exact steps:

  1. Fetch o1heap.c and o1heap.h into your source tree
  2. Add o1heap.c to the list of your source files
  3. Preallocate a memory chunk at firmware startup
  4. Implement mg_calloc() and mg_free() using o1heap and the preallocated memory chunk
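A sketch of steps 3 and 4 follows; it assumes Mongoose is built with MG_ENABLE_CUSTOM_CALLOC set so the overrides take effect, and it uses the two-argument o1heapInit() from recent o1heap versions (check the signature against your copy):

```c
#include <string.h>
#include "o1heap.h"
#include "mongoose.h"

static O1HeapInstance *s_heap;
/* 50-KB arena reserved at startup; size is illustrative. */
static _Alignas(O1HEAP_ALIGNMENT) unsigned char s_arena[50 * 1024];

void net_heap_init(void) {  /* call once at firmware startup */
  s_heap = o1heapInit(s_arena, sizeof(s_arena));
}

void *mg_calloc(size_t count, size_t size) {
  void *p = o1heapAllocate(s_heap, count * size);
  if (p != NULL) memset(p, 0, count * size); /* calloc() semantics: zeroed */
  return p;
}

void mg_free(void *p) {
  if (p != NULL) o1heapFree(s_heap, p);
}
```

With this in place, the entire networking stack can never consume more than the arena it was given; an exhausted arena makes mg_calloc() return NULL, which Mongoose handles, while the rest of the firmware keeps running.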

You can see the full implementation procedure in the video linked at the end of this article.

Avoid memory exhaustion

This article provides information on the following design aspects:

  • Understand STM32’s complex RAM layout
  • Ensure Ethernet DMA buffers reside in accessible memory
  • Avoid memory exhaustion by using bounded memory pools
  • Integrate the o1heap allocator with Mongoose to enforce TCP/IP RAM limits

By isolating the network stack’s memory usage, you make your firmware more stable, deterministic, and secure, especially in real-time or resource-constrained systems.

If you would like to see a practical application of these principles, see the complete tutorial, including a video with a real-world example, which describes how RAM limiting is implemented in practice using the Mongoose embedded TCP/IP stack. This video tutorial provides a step-by-step guide on how to use Mongoose Wizard to restrict TCP/IP networking on a microcontroller to a preallocated memory pool.

As part of this tutorial, a real-time web dashboard is created to show memory usage in real time. The demo uses an STM32 Nucleo-F756ZG board with built-in Ethernet, but the same approach works seamlessly on other architectures too.

Sergey Lyubka is the co-founder and technical director of Cesanta Software Ltd. He is the author of the open-source Mongoose Embedded Web Server and Networking Library, which has been on the market since 2004 and has over 12K stars on GitHub. Sergey works on making embedded networking more accessible to all developers.

Related Content

The post How to limit TCP/IP RAM usage on STM32 microcontrollers appeared first on EDN.
