How to design a digital-controlled PFC, Part 1
Shifting from analog to digital control
An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:
- Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
- Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
- Force the input current to follow the input voltage with a current control loop, so that the electronic load appears to the AC line as a pure resistor.
Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.
Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.
Switching from analog control to digital control is not easy, however; you will face new challenges when continuous signals are represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip: you must write firmware to implement the control algorithms.
Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.
A digital-controlled PFC system
Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.
Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments
Q1 and Q2 are high-frequency switches. Based on the VAC polarity, Q1 and Q2 work alternately as the PFC boost switch or the synchronous switch.
During the positive AC half cycle (when the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signals for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.
During the negative AC half cycle (when the AC neutral is higher than the line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signals for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.
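A minimal C sketch of this polarity-based role swap is shown below. The set_duty() and set_gate() helpers are hypothetical placeholders for the device's PWM-compare and GPIO writes, not a specific vendor API:

```c
#include <stdbool.h>
#include <stdio.h>

enum { Q1, Q2, Q3, Q4 };

/* Hypothetical stand-ins for driver calls; a real design would write
   the PWM compare and GPIO registers of the target MCU instead. */
static void set_duty(int sw, float d)  { printf("Q%d duty = %.2f\n", sw + 1, d); }
static void set_gate(int sw, bool on)  { printf("Q%d %s\n", sw + 1, on ? "on" : "off"); }

/* Called every switching cycle with the loop duty D and the sensed VAC. */
static void update_pwm(float d, float vac)
{
    if (vac > 0.0f) {             /* positive half: line above neutral */
        set_duty(Q2, d);          /* Q2 is the boost switch */
        set_duty(Q1, 1.0f - d);   /* Q1 is the synchronous switch */
        set_gate(Q4, true);       /* Q4 on, Q3 off for the half cycle */
        set_gate(Q3, false);
    } else {                      /* negative half: roles swap */
        set_duty(Q1, d);
        set_duty(Q2, 1.0f - d);
        set_gate(Q3, true);
        set_gate(Q4, false);
    }
}

int main(void)
{
    update_pwm(0.4f, 120.0f);     /* positive half-cycle example */
    update_pwm(0.4f, -120.0f);    /* negative half-cycle example */
    return 0;
}
```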
Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:
- An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
- A firmware-based average current-mode controller.
- A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments
I’ll introduce these function blocks one by one.
The ADC
An ADC is a fundamental element of an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:
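ADC result = (Vin / 3.3 V) × 2^12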
Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:
Vin = (ADC result / 2^12) × 3.3 V
To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.
This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).
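As a quick numeric check of Equations 1 and 2, the short sketch below converts between volts and ADC codes, assuming a full-scale count of 2^12 = 4096:

```c
#include <stdio.h>
#include <stdint.h>

#define ADC_COUNTS 4096.0f   /* 2^12 for a 12-bit converter */
#define ADC_VREF   3.3f      /* ADC reference voltage */

/* Equation 1: analog input voltage to ADC code. */
static uint16_t volts_to_code(float vin)
{
    return (uint16_t)(vin / ADC_VREF * ADC_COUNTS);
}

/* Equation 2: ADC code back to the analog input voltage. */
static float code_to_volts(uint16_t code)
{
    return (float)code * ADC_VREF / ADC_COUNTS;
}

int main(void)
{
    uint16_t code = volts_to_code(1.65f);   /* mid-scale input */
    printf("code = %u, volts = %.3f\n", code, code_to_volts(code));
    return 0;
}
```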
Input AC voltage sensing
The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments
The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as large as possible. Hence, the voltage divider for VAC should follow Equation 3:
VAC divider ratio = 3.3 V / VAC_MAX
where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure.
Adding a small capacitor (C) with low equivalent series resistance (ESR) to the voltage divider removes potential high-frequency noise; place C as close as possible to the ADC pin.
Two ADCs measure the AC line and neutral voltages; subtracting the two readings in firmware yields the VAC signal, as sketched below.
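In code, the subtraction reduces to a signed difference of the two ADC readings scaled back to volts; K_VAC below is a hypothetical example constant set by the Figure 3 divider ratio and the ADC LSB size:

```c
#include <stdint.h>

/* Hypothetical volts-per-count scale: the inverse of the divider
   ratio times the ADC LSB (3.3 V / 4096). Example value only. */
#define K_VAC 0.1f

/* VAC is reconstructed as the line reading minus the neutral reading. */
static float vac_volts(uint16_t line_code, uint16_t neutral_code)
{
    return ((int32_t)line_code - (int32_t)neutral_code) * K_VAC;
}
```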
Output voltage sensing
Similarly, resistor dividers attenuate the output voltage, as shown in Figure 4, before it connects to an ADC pin. Again, adding a low-ESR capacitor C to the divider removes potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments
To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:
VOUT divider ratio = 3.3 V / VOUT_OVP
where VOUT_OVP is the output overvoltage protection threshold.
AC current sensing
In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments
The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.
Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments
Equation 5 expresses the amplification of the Hall-effect sensor output:
Vadc = G × VHall, where the gain G scales the sensor's output span to fill the ADC's 0-V-to-3.3-V range.
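Recovering the signed inductor current in firmware then amounts to removing the sensor's DC offset and applying the net gain; the offset code and scale factor below are hypothetical examples, not values from Figures 5 and 6:

```c
#include <stdint.h>

/* Hypothetical calibration: the ADC code for 0 A (the Figure 5 DC
   offset, here assumed mid-scale) and the net amps-per-count gain
   after the Figure 6 amplifier. Example values only. */
#define I_ZERO_CODE      2048
#define AMPS_PER_COUNT   0.01f

/* Signed inductor current from a raw ADC code. */
static float inductor_amps(uint16_t code)
{
    return ((int32_t)code - I_ZERO_CODE) * AMPS_PER_COUNT;
}
```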
As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.
Digital compensator
In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.
For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments
In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.
Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments
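A minimal C sketch of such a clamped PI compensator with anti-windup follows, matching the Figure 7 structure; the gains, limits, and names are placeholders, and the exact code in Figure 8 may differ:

```c
/* Clamped PI compensator with anti-windup (per Figure 7). */
typedef struct {
    float kp, ki;            /* proportional and integral gains */
    float integ;             /* integral accumulator */
    float out_min, out_max;  /* output clamp range */
} pi_ctl_t;

/* One control-loop iteration: r is the reference r(k), y the feedback y(k). */
static float pi_update(pi_ctl_t *c, float r, float y)
{
    float e = r - y;                              /* error */
    float u = c->kp * e + c->integ + c->ki * e;   /* trial output */

    if (u > c->out_max) {
        u = c->out_max;            /* clamp high; integrator frozen */
    } else if (u < c->out_min) {
        u = c->out_min;            /* clamp low; integrator frozen */
    } else {
        c->integ += c->ki * e;     /* commit integral only when unsaturated */
    }
    return u;
}
```

Freezing the integral path while the output is clamped is the anti-windup reset behavior described above: the integrator stops accumulating during saturation and recovers immediately once the output re-enters its valid range.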
For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].
S/Z domain conversion
If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:
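H(s) = Kc × (1 + s/ωz) / [s × (1 + s/ωp)], the generic type II form: an integrator plus one zero (ωz) and one high-frequency pole (ωp).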
Replace s with the bilinear transformation (Equation 7):
s = (2/Ts) × (z − 1)/(z + 1)
where Ts is the ADC sampling period.
Then H(s) is converted to H(z), as shown in Equation 8:
H(z) = (b0 + b1·z^-1 + b2·z^-2) / (1 + a1·z^-1 + a2·z^-2)
Rewrite Equation 8 as Equation 9:
u(n) = b0·e(n) + b1·e(n-1) + b2·e(n-2) − a1·u(n-1) − a2·u(n-2)
To implement Equation 9 in a digital controller, store the two previous control outputs, u(n-1) and u(n-2), and the two previous errors, e(n-1) and e(n-2). Then use the current error e(n) and Equation 9 to calculate the current control output, u(n).
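A minimal C sketch of Equation 9 follows; the coefficient signs assume H(z) is written with a denominator of 1 + a1·z^-1 + a2·z^-2, as in Equation 8:

```c
/* Equation 9 as a direct-form difference equation. */
typedef struct {
    float b0, b1, b2, a1, a2;   /* coefficients from Equation 8 */
    float e1, e2;               /* e(n-1), e(n-2) */
    float u1, u2;               /* u(n-1), u(n-2) */
} comp2_t;

static float comp2_update(comp2_t *c, float e)
{
    float u = c->b0 * e + c->b1 * c->e1 + c->b2 * c->e2
            - c->a1 * c->u1 - c->a2 * c->u2;

    c->e2 = c->e1;  c->e1 = e;   /* shift the error history */
    c->u2 = c->u1;  c->u1 = u;   /* shift the output history */
    return u;
}
```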
Digital PWM generation
A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.
Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments
Programming the PERIOD register of the PWM generator will determine the switching frequency. For up-count and down-count mode, Equation 10 calculates the PERIOD register value as:
PERIOD = fclk / fsw
where fclk is the counter clock frequency and fsw is the desired switching frequency.
For the up-down count mode, Equation 11 calculates the PERIOD register value as:
PERIOD = fclk / (2 × fsw)
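In code, Equations 10 and 11 reduce to integer divisions; note, as an assumption to verify against the device manual, that some PWM peripherals expect the result minus 1:

```c
#include <stdint.h>

/* Equation 10: up-count or down-count (single-slope) mode. */
static uint32_t period_single_slope(uint32_t fclk, uint32_t fsw)
{
    return fclk / fsw;
}

/* Equation 11: up-down (triangular) count mode. */
static uint32_t period_up_down(uint32_t fclk, uint32_t fsw)
{
    return fclk / (2u * fsw);
}
```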
Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments
Equation 12 shows that the COMP equals the current loop GI output multiplied by the switching period:
COMP = GI output × PERIOD
The higher the COMP value, the bigger the D.
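Treating the GI output as a duty cycle between 0 and 1, a sketch of Equation 12 is:

```c
#include <stdint.h>

/* Equation 12: COMP is the current-loop output (clamped here to a
   valid 0..1 duty) scaled by the PERIOD register value. */
static uint32_t comp_from_duty(float gi_out, uint32_t period)
{
    if (gi_out < 0.0f) gi_out = 0.0f;
    if (gi_out > 1.0f) gi_out = 1.0f;
    return (uint32_t)(gi_out * (float)period);
}
```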
To prevent shoot-through between the top switch and the bottom switch, adding a delay on the rising edge of PWMA and the rising edge of PWMB inserts dead time between PWMA and PWMB. This delay is programmable, which means that it’s possible to dynamically adjust the dead time to optimize performance.
Blocks in digital-controlled PFC
Now that you have learned about the blocks used in digital-controlled PFC, it’s time to close the control loop. In the next installment, I’ll discuss how to write firmware to implement an average current-mode controller.
Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Reference
- “C2000 Digital Control Library User’s Guide.” TI literature No. SPRUID3, January 2017.
Related Content
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
CEA-Leti launches multi-lateral program to accelerate AI with micro-LED data links
AlixLabs raises €15m in Series A funding round to accelerate APS beta testing
onsemi authorizes $6bn share repurchase program
Mojo Vision adds Dr Anthony Yu to advisory board
5V mini-buck to the rescue! Fixing cooked Eight Sleep Pod 4 hub
Optical combs yield extreme-accuracy gigahertz RF oscillator

It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Now, however, all-optical or mostly optical systems are being used to perform functions in the optical band where electronic components can’t fulfill the needs, even pushing electronic approaches out of the picture.
In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.
However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.
It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.
Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.
This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature
But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 1562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified and locked to the same miniature F-P cavity. A 6-nm broad-frequency comb with an approximately 20 GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous wave references. Two beat notes are amplified, filtered and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature
At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system takes a rack-sized chassis that fits in a single-height bay.
However, there has been significant progress on putting multiple functional blocks onto a single-chip substrate, so it wouldn’t be surprising to see a monolithic (or nearly so) device within a decade, or perhaps just a few years.
What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?
One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to a 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled (blank) squares are based on the optical frequency division (OFD) standalone microcomb approach: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii); 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature
There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).
In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- Is Optical Computing in Our Future?
- Use optical fiber as an isolated current sensor?
- Analog Optical Fiber Forges RF-Link Alternative
- Silicon yields phased-arrays for optics, not just RF
- Attosecond laser pulses drive petahertz optical transistor switching
LPF using OP07 IC
High-performance MCUs target industrial applications

STMicroelectronics raises the performance bar for embedded edge AI with its new STM32V8 high-performance microcontrollers (MCUs), aimed at demanding industrial applications such as factory automation, motor control, and robotics. It is the first MCU built on ST’s 18-nm fully depleted silicon-on-insulator (FD-SOI) process technology with embedded phase-change memory (PCM).
The STM32V8’s phase-change non-volatile memory (PCM) claims the smallest cell size on the market, enabling 4 MB of embedded non-volatile memory (NVM).
(Source: STMicroelectronics)
In addition, the STM32V8 is ST’s fastest STM32 MCU to date, designed for high reliability in harsh environments and for embedded and edge AI applications; it can handle complex applications while maintaining high energy efficiency. The STM32V8 achieves clock speeds of up to 800 MHz, thanks to the Arm Cortex-M85 core and the 18-nm FD-SOI process with embedded PCM. The FD-SOI technology delivers high energy efficiency and supports a maximum junction temperature of up to 140°C.
The MCU integrates dedicated accelerators, including graphics and crypto/hash engines, and comes with a large selection of IP, including 1-Gb Ethernet, digital interfaces (FD-CAN, octo/hexa xSPI, I2C, UART/USART, and USB), analog peripherals, and timers. It also features state-of-the-art security with the STM32 Trust framework and the latest cryptographic algorithms and lifecycle management standards. It targets PSA Certified Level 3 and SESIP certification to meet compliance with the upcoming Cyber-Resilience Act (CRA).
The STM32V8 has been selected for the SpaceX Starlink constellation, using it in a mini laser system that connects the satellites traveling at extremely high speeds in low Earth orbit (LEO), ST said. This is thanks in part to the 18-nm FD-SOI technology that provides a higher level of reliability and robustness.
The STM32V8 supports bare-metal or RTOS-based development. It is supported by ST’s development resources, including STM32Cube software development tools and turnkey hardware such as Discovery kits and Nucleo evaluation boards.
The STM32V8 is in early-stage access for selected customers. Key OEM availability will start in the first quarter of 2026, followed by broader availability.
FIR temperature sensor delivers high accuracy

Melexis claims the first automotive-grade surface-mount (SMD) far-infrared (FIR) temperature sensor designed for temperature monitoring of critical components in electric vehicle (EV) powertrain applications. These include inverters, motors, and heating, ventilation, and air conditioning (HVAC) systems.
(Source: Melexis)
The MLX90637 offers several advantages over negative temperature coefficient (NTC) thermistors that have traditionally been used in these systems, where speed and accuracy are critical, Melexis said.
These advantages include eliminating the need for manual labor associated with NTC solutions thanks to the SMD packaging, which supports automated PCB assembly and delivers cost savings. In addition, the FIR temperature sensor with non-contact measurement ensures intrinsic galvanic isolation that helps to enhance EV safety by separating high- and low-voltage circuits, while the inherent electromagnetic compatibility (EMC) eliminates typical noise challenges associated with NTC wires, the company said.
Key features include a 50° field of view, 0.02°C resolution, and fast response time, which suit applications such as inverter busbar monitoring where temperature must be carefully managed. Sleep current is less than 2.5 μA, and the ambient operating temperature range is -40°C to 125°C.
The MLX90637 also simplifies system integration with a 3.3-V supply, factory calibration (including post calibration), and an I2C interface for communication with a host microcontroller, including a software-definable I2C address via an external pin. The AEC-Q100-qualified sensor is housed in a 3 × 3-mm package.
GlobalFoundries acquires Advanced Micro Foundry to expand silicon photonics AI infrastructure portfolio
Filtronic completes multi-year project to develop plastic QFN packaging for GaN devices
Accuracy loss from PWM sub-Vsense regulator programming

I’ve recently published Design Ideas (DIs) showing circuits for linear PWM programming of standard bucking-type regulators in applications requiring an output span that can swing below the regulator’s sense voltage (Vsense or Vs). For example: “Simple PWM interface can program regulators for Vout < Vsense.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
Objections have been raised, however, that such circuits entail a significant loss of analog programming accuracy because they rely on adding a voltage term typically derived from an available voltage source (e.g., a logic rail), and that they should therefore be avoided.
The argument relies on the fact that such sources generally have accuracy and stability that are significantly worse (e.g., ±5%) than those of regulator internal references (e.g., ±1%).
But is this objection actually true, and if so, how serious is the problem? How much of an accuracy penalty is actually incurred? This DI addresses these questions.
Figure 1 shows a basic topology for sub-Vs regulator programming with current expressions as follows:
A = Dpwm × Vs/R1
B = (1 – Dpwm) × (Vl – Vs)/(R1 + R4)
where A is the primary programming current and B is the sub-Vs programming current, giving an output voltage:
Vout = R2(A + B) + Vs
Figure 1 Basic PWM regulator programming topology.
Inspection of the A and B current expressions shows that when the PWM duty factor (Dpwm) is set to full-scale 100% (Dpwm = 1), then B = 0. This is due to the (1 – Dpwm) term.
Therefore, there can be no error contribution from the logic rail Vl at full-scale.
At other Dpwm values, however, this happy circumstance no longer applies, and B becomes nonzero. Thus, Vl tolerance and noise degrade accuracy, at least to some extent. But, by how much?
The simplest way to address this crucial question is to evaluate a plausible example of Figure 1’s general topology. Figure 2 provides some concrete groundwork for that by adding example values.

Figure 2 Putting some meat on Figure 1’s bare bones, adding example values to work with.
Assuming perfect resistors, nominal R1 currents are then:
A = Dpwm × Vs/3300
B = (1 – Dpwm) × (Vl – Vs)/123300
Vout = R2(A + B) + Vs = 75000(A + B) + 1.25
Then, making the (highly pessimistic) assumption that reference errors stack up as the sum of absolute values:
Aerr = Dpwm × 1% × Vs/3300 = Dpwm × 3.8 µA
Berr = (1 – Dpwm) × (5% × 3.3 V + 1% × 1.25 V)/123300 = (1 – Dpwm) × 1.44 µA
Vout total error = 75000 × (Dpwm × 3.8 µA + (1 – Dpwm) × 1.44 µA) + 1% × Vs
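A short C program (a sketch, using Figure 2's values and the sum-of-absolute-values stacking above) tabulates this worst-case error expression across the Dpwm range:

```c
#include <stdio.h>

/* Worst-case Vout error versus Dpwm for Figure 2's values:
   Vs = 1.25 V, Vl = 3.3 V, R1 = 3.3k, R1 + R4 = 123.3k, R2 = 75k. */
int main(void)
{
    const double vs = 1.25, vl = 3.3;
    const double aerr = 0.01 * vs / 3300.0;                 /* ~3.8 uA  */
    const double berr = (0.05 * vl + 0.01 * vs) / 123300.0; /* ~1.44 uA */

    for (int i = 0; i <= 10; i++) {
        double d = i / 10.0;                                /* Dpwm */
        double verr = 75000.0 * (d * aerr + (1.0 - d) * berr) + 0.01 * vs;
        printf("Dpwm = %.1f  worst-case Vout error = %.3f V\n", d, verr);
    }
    return 0;
}
```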
The resulting Vout error plots are shown in Figure 3.

Figure 3 Vout error plots where the x-axis is Dpwm and y-axis is Vout error. Black line is Vout = Vs at Dpwm = 0 and red line is Vout = 0 at Dpwm = 0.
Conclusion: Error does increase in the lower range of Vout when the Vout < Vsense feature is incorporated, but any difference completely disappears at the top end. So, the choice turns on the utility of Vout < Vsense.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Simple PWM interface can program regulators for Vout < Vsense
- Three discretes suffice to interface PWM to switching regulators
- Revisited: Three discretes suffice to interface PWM to switching regulators
- PWM nonlinearity that software can’t fix
- Another PWM controls a switching voltage regulator
Wolfspeed launches 1200V silicon carbide six-pack power modules for E-mobility propulsion systems
University of Arkansas opens Multi-User Silicon Carbide Facility
Signal integrity and power integrity analysis in 3D IC design

The relentless pursuit of higher performance and greater functionality has propelled the semiconductor industry through several transformative eras. The most recent shift is from traditional monolithic SoCs to heterogeneous integrated advanced package ICs, including 3D integrated circuits (3D ICs). This emerging technology promises to help semiconductor companies sustain Moore’s Law.
However, these advancements bring increasingly complex challenges, particularly in power integrity (PI) and signal integrity (SI). Once secondary, SI/PI have become critical disciplines in modern semiconductor development. As data rates ascend into multiple gigabits per second and power requirements become more stringent, error margins shrink dramatically, making SI/PI expertise indispensable. The fundamental challenge lies in ensuring clean and reliable signal transmissions and stable power delivery across intricate systems.

Figure 1 The above diagram highlights the basic signal integrity (SI) issues. Source: Siemens EDA
This article explains the unique SI/PI challenges in 3D IC designs by contrasting them with traditional SoCs. We will then explore a progressive verification strategy to address these complexities, examine the roles and interdependencies of stakeholders in the 3D IC ecosystem, and illustrate these concepts through a real-world success story. Finally, we will discuss how these innovations drive the future of semiconductor design.
Traditional SI/PI versus 3D IC approaches
In traditional SoC components destined for a PCB system, SI and PI analysis typically validates individual components before system integration. This often treats SoCs, packages, and PCBs as distinct entities, allowing sequential analysis and optimization. For instance, component-level power demand analysis can be performed on the monolithic SoC and its package, while signal integrity analysis validates individual channels.
The design process is often split between separate packaging and PCB teams working in parallel. These teams eventually collaborate to manage design trade-offs such as allocating timing or voltage margins between the package and PCB to accommodate routing constraints. While effective for traditional designs, this compartmentalized approach is inadequate for the inherent complexities of 3D ICs.
A 3D IC’s architecture is not merely a collection of components but a highly condensed system of mini subsystems, characterized by the vertical stacking of multiple dies. Inter-die interfaces, through-silicon vias (TSVs), and microbumps create a dense, highly interactive electrical environment where power and signal integrity issues are deeply intertwined and can propagate across multiple layers.
The tight integration and proximity of the dies introduce novel coupling mechanisms and power delivery challenges that cannot be effectively addressed by sequential, isolated analyses. Therefore, unlike a traditional flow, 3D ICs demand holistic, parallel validation from the outset, with SI and PI analyses commencing early and encompassing all constituent parts concurrently.
Progressive verification
To navigate the intricate landscape of 3D IC design, a progressive verification strategy is paramount. This principle acknowledges that design information is sparse in early stages and becomes progressively detailed.
The core idea behind progressive verification is to initiate analysis as early as possible with available inputs, guiding the design onto the correct path and transforming the final verification step into confirmation rather than a discovery of fundamental issues. Different analysis requirements are addressed as details become available, starting with minimal inputs and gradually incorporating more specific data.

Figure 2 Here is a view of a progressive verification flow. Source: Siemens EDA
Let’s summarize the various analyses involved and their timing in the design flow.
Early architectural feasibility and pre-layout analysis
At the initial design phase, before detailed layout information is available, the focus is on architectural feasibility studies. This involves estimating power budgets and defining high-level interfaces. Even with rough inputs, early analysis can commence. For instance, pre-layout signal integrity analysis can model representative interconnect structures, such as an interposer bridge.
By defining an “envelope” of achievable performance based on preliminary dimensions, designers can establish realistic expectations and guidelines for subsequent layout stages. This proactive approach helps identify potential bottlenecks and ensures a robust electrical foundation.
Floorplanning and implementation-driven analysis
As the design progresses to floorplanning and initial implementation, guidelines from early analysis are translated into a physical layout. At this stage, more in-depth analyses become possible. This includes detailed power delivery network (PDN) analysis to verify power distribution across stacked dies and the substrate.
Signal path verification with actual component interconnections can also begin, enabling early identification and optimization of critical signal routes. This iterative process of layout and analysis enables continuous refinement, ensuring physical implementation aligns with electrical performance targets.
Detailed electrical analysis with vendor-specific IP
The final stage of progressive verification involves comprehensive electrical analysis utilizing actual vendor-specific intellectual property (IP) models. Given the nascent state of 3D IC die-to-die standards—for instance UCIe, BoW, and AIB, which are less mature than established protocols like DDR or PCIe—this detailed analysis is even more critical.
Designers perform in-depth S-parameter modeling of impedance networks, feeding these models with precise current values obtained from die designers and other stakeholders. This granular analysis provides full closure on the design’s electrical performance, ensuring all critical signal paths and power delivery mechanisms meet specifications under real-world operating conditions.
The 3D IC ecosystem
The complexity of 3D IC designs necessitates a highly collaborative environment involving diverse stakeholders, each with unique perspectives and challenges. Effective communication and early engagement among these teams are crucial for successful integration.
- System architects are responsible for the high-level floorplanning, determining the number of chiplets, baseband dies, and the communication channels required between them. Their challenge lies in optimizing the overall system architecture for performance, power, and area, while considering the physical constraints imposed by 3D integration.
- Die designers focus on individual die architectures and oversee I/O planning and internal power distribution. They must communicate their power requirements and I/O characteristics accurately to ensure compatibility within the stacked system. Their primary challenge is to optimize the die-level performance while adhering to system-level constraints and ensuring robust power and signal delivery across the interfaces.
- Layout teams are responsible for the physical implementation, encompassing die-level layout, substrate layout, and silicon interconnects like interposers and bridges. Often different layout teams may handle different aspects of the implementation, requiring meticulous coordination. Their challenges include managing extreme density, minimizing parasitic effects, and ensuring manufacturability across multiple layers.
- SI/PI and verification teams act as technical consultants, providing guidelines and feedback at every level. They advise system architects on bump-out strategies for die floorplans and work with die designers to optimize power and ground bump counts. Their role is to proactively identify and mitigate potential SI/PI issues throughout the design cycle, ensuring that the electrical performance targets are met.
- Mechanical and thermal teams ensure structural integrity and manage heat dissipation, respectively. Both are critical for the long-term reliability and performance of designs, as beyond electrical considerations, 3D ICs introduce significant mechanical and thermal challenges. For example, the close proximity of die can lead to localized hotspots and mechanical stresses due to differing coefficients of thermal expansion.
By employing a progressive verification methodology, these diverse stakeholders can engage in early and continuous communication, fostering a collaborative environment that makes it significantly easier to build a functional and reliable 3D IC design.
Chipletz’s proof of concept
The efficacy of a progressive verification strategy and collaborative ecosystem is best illustrated through real-world applications. Chipletz, a fabless substrate startup, exemplifies successful navigation of 3D IC design complexities in collaboration with an EDA partner. Chipletz is working closely with Siemens EDA for its Smart Substrate products, utilizing tools capable of supporting advanced 3D IC design requirements.

Figure 3 Smart Substrate uses cutting-edge chiplet integration technology that eliminates an interposer. Source: Siemens EDA
At the time, many industry-standard EDA tools were primarily tailored for traditional package and PCB architectures. Chipletz presented a formidable challenge: its designs featured massive floorplans with up to 50 million pin counts, demanding analysis tools with unprecedented capacity and layout tools capable of handling such intricate structures.
Siemens responded by engaging its R&D teams to enhance tool capacities and capabilities. This collaboration demonstrated not only the ability to handle these complex architectures but also to perform meaningful electrical analyses on such large designs. Initial efforts focused on fundamental aspects such as direct current (DC) IR drop analysis across the substrate and early PDN analysis.
Through these foundational steps, Siemens demonstrated its tools’ capabilities and, crucially, its commitment to working alongside Chipletz to overcome challenging roadblocks. This partnership enabled Chipletz to successfully tape out its initial demonstration vehicle, and it’s now progressing to the second revision of its design. This underscores the importance of adaptable EDA tools and strong collaboration in pushing the boundaries of 3D IC innovation.
Driving 3D IC innovation
3D ICs are unequivocally here to stay, with major semiconductor companies increasingly incorporating various forms of 3D packaging into their product roadmaps. This transition signifies a fundamental shift in how the industry approaches system design and integration. As the industry continues to embrace 3D IC integration as a key enabler for next-generation systems, the methodologies and collaborative approaches outlined in this article for SI and PI will only grow in importance.
The progressive verification strategy, coupled with close collaboration among diverse stakeholders, offers a robust framework for navigating the complex challenges inherent in 3D IC design. Companies and individuals who master these techniques will be exceptionally well-positioned to lead the next wave of semiconductor innovation, creating the high-performance, energy-efficient systems that will power our increasingly digital world.
Todd Burkholder is a senior editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.
John Caka is a signal and power integrity applications engineer with over a decade of experience in high-speed digital design, modeling, and simulation. He earned his B.S. in electrical engineering from the University of Utah in 2013 and an MBA from the Quantic School of Business and Technology in 2024.
Related Content
- Putting 3D IC to work for you
- Making your architecture ready for 3D IC
- The multiphysics challenges of 3D IC designs
- Mastering multi-physics effects in 3D IC design
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
- Automating FOWLP design: A comprehensive framework for next-generation integration
Norton amplifiers: Precision and power, the analog way we remember

The Norton amplifier topology brings back the essence of analog design by using clever circuit techniques to deliver strong performance with minimal components. It is not about a brand name—it’s about a timeless analog philosophy that continues to inspire engineers and hobbyists today. This approach shows why analog circuits remain powerful and relevant, even in our digital age.
In electronics, a Norton amplifier—also known as a current differencing amplifier (CDA)—is a specialized analog circuit that functions as a current-controlled voltage source. Its output voltage is directly proportional to the difference between two input currents, making it ideal for applications requiring precise current-mode signal processing.
Conceptually, it serves as the dual of an operational transconductance amplifier (OTA), offering a complementary approach to analog design and expanding the toolkit for engineers working with current-driven systems.
So, while most amplifier discussions orbit op-amps and voltage feedback, the Norton amplifier offers a subtler, current-mode alternative—elegant in its simplicity and quietly powerful in its departure from the norm. Let us go further.
Norton amplifier’s analog elegance
As shown in the LM2900 IC equivalent circuit below, the internal architecture is refreshingly straightforward. The most striking departure from a conventional op-amp—typically centered around a voltage-mode differential pair—lies in the input stage. Rather than employing the familiar long-tailed pair, this Norton amplifier features a current mirror followed by a common-emitter amplifier.

Figure 1 Equivalent circuit highlights the minimalist internal structure of the LM2900 Norton amplifier IC. Source: Texas Instruments
These devices have been around for decades, and they clearly continue to intrigue analog enthusiasts. Just recently, I picked up a batch of LM3900-HLF ICs from an online seller. The LM3900-HLF appears to be a Chinese-sourced variant of the classic LM3900—a quad Norton operational amplifier recognized for its current-differencing input and quietly unconventional topology. These low-cost quads are now widely used across analog systems, especially in medium-frequency and single-supply AC applications.

Figure 2 Pin connections of the LM3900-HLF support easy adoption in practical circuits. Source: HLF
In my view, the LM2900 and LM3900 series are more than just relics—they are reminders of a time when analog design embraced cleverness over conformity. Their current differencing architecture, once a quiet alternative to voltage-mode orthodoxy, still finds relevance in industrial signal chains where noise rejection, single-supply operation, and low-impedance interfacing matter.
You will not see these chips headlining new designs, but the principles they embody—robust, elegant, and quietly efficient—continue to shape sensor front-ends, motor drives, and telemetry systems. The ICs may have faded, but the technique endures, humming beneath the surface of modern infrastructure.
And, while it’s not as widely romanticized as the LM3900, the LM359 Norton amplifier remains a quietly powerful choice for analog enthusiasts who value speed with elegance. Purpose-built for video and fast analog signal processing, it stepped in with serious bandwidth and slewing muscle. As a dual high-speed Norton amplifier, it handles wideband signals with slew rates up to 60 V/μs and gain-bandwidth products reaching 400 MHz—a clear leap beyond its older cousins.
In industrial and instrumentation circles, LM359’s current-differencing input stage still commands respect for its low input bias, fast settling, and graceful handling of capacitive loads. Its legacy lives on in video distribution, pulse amplification, and high-speed analog comparators—especially in designs that prioritize speed and stability over rail-to-rail swing.
Wrapping up with a whiff of flux
There is not much more to say about Norton amplifiers for now, so we will wrap up this slightly off-the-beaten-path blog post here. As a parting gift, here is a practical LM3900-based circuit—just enough to satisfy those who find joy in the scent of solder smoke.

Figure 3 Bring this LM3900-based triangle/square waveform generator circuit to life and trace its quiet Norton-style elegance. Source: Author
Triangle waveforms are usually generated by an integrator, which receives first a positive DC input voltage, and then a negative DC input voltage. The LM3900 Norton amplifier facilitates this operation within systems powered by a single supply voltage, thanks to the current mirror present at its non-inverting (+) input. This feature enables triangle waveform generation without the need for a negative DC input.
In the above schematic diagram, amplifier IC1D functions as an integrator. It first operates with the current through R1 to generate a negative output voltage slope. When the IC1C amplifier—the Schmitt trigger—switches high, the current through R5 causes the output voltage to rise.
For optimal waveform symmetry, R1 should be set to twice the value of R5 (here R1 = 1 MΩ and R5 = 470 kΩ, which is close enough). Note that the Schmitt circuit also provides a square-wave output at the same frequency.
Feeling inspired? Fire up your breadboard, test the circuit, or share your own twist. Whether you are a seasoned tinkerer or just rediscovering the joy of analog, let this be your spark to keep exploring.
Finally, I hope this odd topic sparked some interest. If I have misunderstood anything—or if there is a better way to approach it—please feel free to chime in with corrections or suggestions in the comments. Exploring new ground always comes with the risk of missteps, and I welcome the chance to learn and improve.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Op amps: the 10,000-foot view
- Op-Amp Circuits, Configurations, and Schematics
- A generalized amplifier and the Miller-effect paradox
- New op amps address new—and old—design challenges
- Introduction to Operational Amplifier Applications, Op-amp Basics
6581 SID controlled by an Arduino
So I got this thing chirping, but I think the little battery-powered amp/speaker I’m using is faulty. Super fun though if you have a busted Commodore 64.
Oxford Instruments’ plasma processing equipment enabling Coherent to ramp up 6-inch InP fabs
Keypad