Feed aggregator
Ohio State University buys Aixtron CCS MOCVD system
Pulse-density modulation (PDM) audio explained in a quick primer

Pulse-density modulation (PDM) is a compact digital audio format used in devices like MEMS microphones and embedded systems. This compact primer eases you into the essentials of PDM audio.
Let’s begin by revisiting a ubiquitous PDM MEMS microphone module based on MP34DT01-M—an omnidirectional digital MEMS audio sensor that continues to serve as a reliable benchmark in embedded audio design.

Figure 1 A MEMS microphone mounted on a minuscule module detects sound and produces a 1-bit PDM signal. Source: Author
When properly implemented, PDM can digitally encode high-quality audio while remaining cost-effective and easy to integrate. As a result, PDM streams are now widely adopted as the standard data output format for MEMS microphones.
On paper, the anatomy of a PDM microphone boils down to a few essential building blocks:
- MEMS microphone element, typically a capacitive MEMS structure, unlike the electret capsules found in analog microphones.
- Analog preamplifier boosts the low-level signal from the MEMS element for further processing.
- PDM modulator converts the analog signal into a high-frequency, 1-bit pulse-density modulated stream, effectively acting as an integrated ADC.
- Digital interface logic handles timing, clock synchronization, and data output to the host system.
Next is the function block diagram of T3902, a digital MEMS microphone that integrates a microphone element, impedance converter amplifier, and fourth-order sigma-delta (Σ-Δ) modulator. Its PDM interface enables time multiplexing of two microphones on a single data line, synchronized by a shared clock.

Figure 2 Functional block diagram outlines the internal segments of the T3902 digital MEMS microphone. Source: TDK
The analog signal generated by the MEMS sensing element in a PDM microphone—sometimes referred to as a digital microphone—is first amplified by an internal analog preamplifier. This amplified signal is then sampled at a high rate and quantized by the PDM modulator, which combines the processes of quantization and noise shaping. The result is a single-bit output stream at the system’s sampling rate.
Noise shaping plays a critical role by pushing quantization noise out of the audible frequency range, concentrating it at higher frequencies where it can be more easily filtered out. This ensures relatively low noise within the audio band and higher noise outside it.
The microphone’s interface logic accepts a master clock signal from the host device—typically a microcontroller (MCU) or a digital signal processor (DSP)—and uses it to drive the sampling and bitstream transmission. The master clock determines both the sampling rate and the bit transmission rate on the data line.
Each 1-bit sample is asserted on the data line at either the rising or falling edge of the master clock. Most PDM microphones support stereo operation by using edge-based multiplexing: one microphone transmits data on the rising edge, while the other transmits on the falling edge.
During the opposite edge, the data output enters a high-impedance state, allowing both microphones to share a single data line. The PDM receiver is then responsible for demultiplexing the combined stream and separating the two channels.
As a side note, the aforesaid microphone module is hardwired to treat data as valid when the clock signal is low.
The magic behind 1-bit audio streams
Now, back in the driveway. PDM is a clever way to represent a sampled signal using just a stream of single bits. It relies on delta-sigma conversion, also known as sigma-delta, and it’s the core technology behind many oversampling ADCs and DACs.
At first glance, a one-bit stream seems hopelessly noisy. But here is the trick: by sampling at very high rates and applying noise-shaping techniques, most of that noise is pushed out of the audible range—above 20 kHz—where it no longer interferes with the listening experience. That is how PDM preserves audio fidelity despite its minimalist encoding.
There is a catch, though. You cannot properly dither a 1-bit stream, which means a small amount of distortion from quantization error is always present. Still, for many applications, the trade-off is worth it.
Diving into PDM conversion and reconstruction, we begin with the direct sampling of an analog signal at a high rate—typically several megahertz or more. This produces a pulse-density modulation stream, where the density of 1s and 0s reflects the amplitude of the original signal.

Figure 3 An example that renders a single cycle of a sine wave as a digital signal using pulse density modulation. Source: Author
Naturally, the encoding relies on 1-bit delta-sigma modulation: a process that uses a one-bit quantizer to output either a 1 or a 0 depending on the instantaneous amplitude. A 1 represents a signal driven fully high, while a 0 corresponds to fully low.
And, because the audio frequencies of interest are much lower than the PDM’s sampling rate, reconstruction is straightforward. Passing the PDM stream through a low-pass filter (LPF) effectively restores the analog waveform. This works because the delta-sigma modulator shapes quantization noise into higher frequencies, which the low-pass filter attenuates, preserving the desired low-frequency content.
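To make this concrete, here is a minimal sketch (not code from the article) of a first-order delta-sigma modulator that turns one cycle of a sine wave into a 1-bit PDM stream and then recovers it with a crude moving-average low-pass filter; the sample count and window size are illustrative.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N   4096   /* PDM samples covering one sine cycle (illustrative)    */
#define WIN 64     /* moving-average window used as a crude low-pass filter */

int main(void)
{
    static int bits[N];
    double integrator = 0.0;
    double feedback = -1.0;

    /* First-order delta-sigma modulation: integrate the error between the
       input and the previous 1-bit output, then quantize to a single bit. */
    for (int n = 0; n < N; n++) {
        double x = 0.8 * sin(2.0 * M_PI * n / N);  /* input in [-0.8, +0.8] */
        integrator += x - feedback;
        bits[n] = (integrator >= 0.0) ? 1 : 0;
        feedback = bits[n] ? 1.0 : -1.0;
    }

    /* Reconstruction: averaging the 1-bit stream approximates a low-pass
       filter; the local density of 1s tracks the original amplitude. */
    for (int n = 0; n + WIN <= N; n += WIN) {
        int ones = 0;
        for (int k = 0; k < WIN; k++)
            ones += bits[n + k];
        printf("%4d  %+.3f\n", n, 2.0 * (double)ones / WIN - 1.0);
    }
    return 0;
}
```

Plotting the second column reproduces the behavior shown in Figure 3: the denser the 1s, the higher the reconstructed amplitude.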
Inside digital audio: Formats at a glance
It goes without saying that in digital audio systems, PCM, I²S, PWM, and PDM each serve distinct roles tailored to specific needs. Pulse code modulation (PCM) remains the most widely used format for representing audio signals as discrete amplitude samples. Inter-IC Sound (I²S) excels in precise, low-latency audio data transmission and supports flexible stereo and multichannel configurations, making it a popular choice for inter-device communication.
Though not typically used for audio signal representation, pulse width modulation (PWM) plays a vital role in audio amplification—especially in Class D amplifiers—by encoding amplitude through duty cycle variation, enabling efficient speaker control with minimal power loss.
On a side note, you can convert a PCM signal to PDM by first increasing its sample rate (interpolation), then reducing its resolution to a single bit. Conversely, a PDM signal can be converted back to PCM by reducing its sampling rate (decimation) and increasing its word length. In both cases, the ratio of the PDM bit rate to the PCM sample rate is known as the oversampling ratio (OSR).
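For instance, a PDM stream clocked at 3.072 MHz paired with a 48-kHz PCM sample rate corresponds to an OSR of 3,072,000 / 48,000 = 64, a ratio commonly seen in MEMS microphone signal chains.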
Crisp audio for makers: PDM to power simplified
Cheerfully compact and maker-friendly PDM input Class D audio power amplifier ICs simplify the path from microphone to speaker. By accepting digital PDM signals directly—often from MEMS mics—they scale down both complexity and component count. Their efficient Class D architecture keeps the power draw low and heat minimal, which is ideal for battery-powered builds.
That is to say, audio ICs like MAX98358 require minimal external components, making prototyping a pleasure. With filterless Class D output and built-in features, they simplify audio design, freeing makers to focus on creativity rather than complexity.
Sidewalk: For those eager to experiment, ample example code is available online for SoCs like the ESP32-S3, which include a sigma-delta driver that produces a modulated output on a GPIO pin. With a passive or active low-pass filter, this output can then be shaped into a clean, usable analog signal.
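As a rough sketch only, the setup on an ESP32-class part might look like the following; the header, type, and function names are assumptions based on ESP-IDF v5's sigma-delta modulator driver (driver/sdm.h) and should be verified against your SDK version, and the GPIO number and rates are arbitrary.

```c
#include "driver/sdm.h"   /* ESP-IDF sigma-delta modulator driver (assumed, v5.x) */
#include "esp_err.h"

void sdm_output_demo(void)
{
    sdm_channel_handle_t chan = NULL;

    /* Channel configuration: output pin and pulse rate are illustrative. */
    sdm_config_t cfg = {
        .clk_src        = SDM_CLK_SRC_DEFAULT,
        .gpio_num       = 4,                /* arbitrary output GPIO       */
        .sample_rate_hz = 1 * 1000 * 1000,  /* ~1 MHz pulse-density output */
    };
    ESP_ERROR_CHECK(sdm_new_channel(&cfg, &chan));
    ESP_ERROR_CHECK(sdm_channel_enable(chan));

    /* Pulse density is a signed value (about -128..127); 0 gives ~50% density.
       Sweeping it slowly produces a waveform that an external RC or
       Sallen & Key low-pass filter turns into an analog voltage. */
    ESP_ERROR_CHECK(sdm_channel_set_pulse_density(chan, 64));
}
```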
Well, the blueprint below shows an active low-pass filter using the Sallen & Key topology, arguably the simplest active two-pole filter configuration you will find.

Figure 4 Circuit blueprint outlines a simple active low-pass filter. Source: Author
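For a unity-gain Sallen & Key low-pass stage, the cutoff frequency works out to fc = 1 / (2π·√(R1·R2·C1·C2)). As a hypothetical sizing example, R1 = R2 = 10 kΩ with C1 = C2 = 1 nF puts fc near 15.9 kHz, roughly at the top of the audio band, while the ratio of the two capacitors sets the damping.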
Echoes and endings
As usual, I feel there is so much more to cover, but let’s jump to a quick wrap-up.
Whether you are decoding microphone specs or sketching out a signal chain, understanding PDM is a quiet superpower. It is not just about 1-bit streams; it’s about how digital sound travels, transforms, and finds its voice in your design. If this primer helped demystify the basics, you are already one step closer to building smarter, cleaner audio systems.
Let’s keep listening, learning, and simplifying.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Fundamentals of USB Audio
- Audio design by graphical tool
- Hands-on review: Is a premium digital audio player worth the price?
- Understanding superior professional audio design: A block-by-block approach
- Edge AI Game Changer: Actions Technology Is Redefining the Future of Audio Chips
The post Pulse-density modulation (PDM) audio explained in a quick primer appeared first on EDN.
First PCB
Got my first PCB delivered from JLCPCB
eevBLAB 135 - SNEAKY Gmail is Training AI with YOUR Emails
MES meets the future

Industry 4.0 focuses on how automation and connectivity can transform the manufacturing landscape. Manufacturing execution systems (MES) with strong automation and connectivity capabilities thrived under the Industry 4.0 umbrella. With the recent expansion of AI usage through large language models (LLMs), the Model Context Protocol, agentic AI, and more, we are entering a new era in which MES and automation alone are no longer enough. Data produced on the shop floor can yield insights that lead to better decisions, and recurring patterns can be analyzed and turned into suggestions for overcoming issues.
As factories become smarter, more connected, and increasingly autonomous, the intersection of MES, digital twins, AI-enabled robotics, and other innovations will reshape how operations are designed and optimized. This convergence is not just a technological evolution but a strategic inflection point. MES, once seen as the transactional layer of production, is transforming into the intelligence core of digital manufacturing, orchestrating every aspect of the shop floor.
MES as the digital backbone of smart manufacturing
Traditionally, MES is the operational execution king: tracking production orders, managing work in progress, and ensuring compliance and traceability. But today’s factories demand more. Static, transactional systems no longer suffice when decisions are required in near-real time and production lines operate with little margin for error.
The modern MES is evolving into an intelligent orchestrator, connecting data from machines, people, and processes. It is not just about tracking what happened; it can explain why it happened and recommend what to do next.
Modern MES ecosystems will become the digital nervous system of the enterprise, combining physical and digital worlds and handling and contextualizing massive streams of shop-floor data. Advanced technologies such as digital twins, AI robotics, and LLMs can thrive by having the new MES capabilities as a foundation.
A data-centric MES delivers contextualized information critical for digital twins to operate, and together, they enable instant visibility of changes in production, equipment conditions, or environmental parameters, contributing to smarter factories. (Source: Critical Manufacturing)
Digital twins: the virtual mirror of the factory
A digital twin is more than a 3D model; it is a dynamic, data-driven representation of the real-world factory, continuously synchronized with live operational data. It enables users to simulate scenarios and test improvements before they ever touch the physical production line. It’s easy to understand how dependent on meaningful data these systems are.
Performing simulations of a complex system such as a production line is impossible when relying on poor or, even worse, unreliable data. This is where a data-driven MES comes to the rescue. MES sits at the crossroads of every operational transaction: It knows what is being produced, where, when, and by whom. It integrates human activities, machine telemetry, quality data, and performance metrics into one consistent operational narrative. A data-centric MES provides the abundance of contextualized information crucial for digital twins to operate.
Several key elements made it possible for the MES ecosystems to evolve beyond their transactional heritage into a data-centric architecture built for interoperability and analytics. These include:
- Unified/canonical data model: MES consolidates and contextualizes data from diverse systems (ERP, SCADA, quality, maintenance) into a single model, maintaining consistency and traceability. This common model ensures that the digital twin always reflects accurate, harmonized information.
- Event-driven data streaming: Real-time updates are critical. An event-driven MES architecture continuously streams data to the digital twin, enabling instant visibility of changes in production, equipment conditions, or environmental parameters.
- Edge and cloud integration: MES acts as the intelligent gateway between the edge (where data is generated) and the cloud (where digital twins and analytics reside). Edge nodes pre-process data for latency-sensitive scenarios, while MES ensures that only contextual, high-value data is passed to higher layers for simulation and visualization.
- API-first and semantic connectivity: Modern MES systems expose data through well-defined APIs and semantic frameworks, allowing digital twin tools to query MES data dynamically. This flexibility provides the capability to “ask questions,” such as machine utilization trends or product genealogy, and receive meaningful answers in a timely manner.
It is an established fact that automation is crucial for manufacturing optimization. However, AI is bringing automation to a new level. Robotics is no longer limited to executing predefined movements; now, capable robots may learn and adapt their behavior through data.
Traditional industrial robots operate within rigidly predefined boundaries. Their movements, cycles, and tolerances are programmed in advance, and deviations are handled manually. Robots can deliver precision, but they lack adaptability: A robot cannot determine why a deviation occurs or how to overcome it. Cameras, sensors, and built-in machine-learning models provide robots with capabilities to detect anomalies in early stages, interpret visual cues, provide recommendations, or even act autonomously. This represents a shift from reactive quality control to proactive process optimization.
But for that intelligence to drive improvement at scale, it must be based on operational context. And that’s precisely where MES comes in. As in the case of digital twins, AI-enabled robots are highly dependent on “good” data, i.e., operational context. A data-centric MES ecosystem provides the context and coordination that AI alone cannot. This functionality includes:
- Operational context: MES can provide information such as the product, batch, production order, process parameters, and their tolerances to the robot. All of this information provides the required context for better decisions, aligned with process definition and rules.
- Real-time feedback: Robots send performance data back to the MES, which validates it against known thresholds and logs the results for traceability and future use.
- Closed-loop control: MES can authorize adaptive changes (speed, temperature, or torque) based on recommendations inferred from past patterns while maintaining compliance.
- Human collaboration: Through MES dashboards and alerts, operators can monitor and oversee AI recommendations, combining human judgment with machine precision.
For this synergy to work, modern MES ecosystems must support:
- High-volume data ingestion from sensors and vision systems
- Edge analytics to pre-process robotic data close to the source
- API-based communication for real-time interaction between control systems and enterprise layers
- Centralized and contextualized data lakes storing both structured and unstructured contextualized information essential for AI model training
Every day, we see how incredibly fast technology evolves and how instantly its applications reshape entire industries. The wave of innovation fueled by AI, LLMs, and agentic systems is redefining the boundaries of manufacturing.
MES, digital twins, and robotics can be better interconnected, contributing to smarter factories. There is no crystal ball to predict where this transformation will lead, but one thing is undeniable: Data sits at the heart of it all—not just raw data but meaningful, contextualized, and structured information. On the shop floor, this kind of data is pure gold.
MES, by its very nature, occupies a privileged position: It is becoming the bridge between operations, intelligence, and strategy. Yet to capitalize on that position, the modern MES must evolve beyond its transactional roots to become a true, data-driven ecosystem: open, scalable, intelligent, and adaptive. It must interpret context, enable real-time decisions, augment human expertise, and serve as the foundation upon which digital twins simulate, AI algorithms learn, and autonomous systems act.
This is not about replacing people with technology. When an MES provides workers with AI-driven insights grounded in operational reality, and when it translates strategic intent into executable actions, it amplifies human judgment rather than diminishing it.
The convergence is here. Technology is maturing. The competitive pressure is mounting. Manufacturers now face a defining choice: Evolve the MES into the intelligent heart of their operations or risk obsolescence as smarter, more agile competitors pull ahead.
Those who make this leap, recognizing that the future belongs to factories where human ingenuity and AI work as a team, will not just modernize their operations; they will secure their place in the future of manufacturing.
The post MES meets the future appeared first on EDN.
KPI students are the winners of the Huawei Student Tech Challenge 2025!
During the annual team competition for students of technical specialties, the participants created MVPs for real business cases under the mentorship of Huawei experts.
Igor Sikorsky Kyiv Polytechnic Institute takes part in the Ukrainian week "Discover Ukraine: A Week of Knowledge and Culture" in the United Kingdom
🇺🇦🇬🇧 Vice-Rector for Research Serhii Stirenko and NN IEE director Oksana Vovk, as part of a delegation from Ukraine, visited universities in the United Kingdom to develop further cooperation in higher education and science (within the UK-Ukraine Twinning Initiative, joint projects under Horizon Europe, and other joint projects). The universities visited were Cardiff University, Birkbeck University of London, and University College London (UCL).
My workbench plus my interns.
Submitted by /u/robotlasagna
Always a work in progress
Submitted by /u/holysbit
How to design a digital-controlled PFC, Part 1
Shifting from analog to digital control
An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:
- Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
- Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
- Force the input current to follow the input voltage, using a current control loop, so that the electronic load appears to be a pure resistor.
Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.
Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.
Switching from analog control to digital control is not easy, however. You will face new challenges where continuous signals are represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip; you must write firmware to implement the control algorithms.
Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.
A digital-controlled PFC system
Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.
Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments
Q1 and Q2 are high-frequency switches. Based on VAC polarity, Q1 and Q2 work alternately as the PFC boost switch or the synchronous switch.
At a positive AC cycle (where the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signal for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.
At a negative AC cycle (where the AC neutral is higher than line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signal for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.
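This polarity-dependent role swap is straightforward to express in firmware. The following is a simplified, hypothetical C sketch (the helper functions are placeholders, not a specific vendor API) of how the duty cycle D from the control loop might be routed each half cycle:

```c
#include <stdio.h>

typedef enum { AC_POSITIVE, AC_NEGATIVE } ac_polarity_t;

/* Hypothetical helpers: a real design writes the MCU's PWM compare registers;
   here they just print so the sketch is self-contained. */
static void set_duty_q1(float d)  { printf("Q1 duty = %.3f\n", d); }
static void set_duty_q2(float d)  { printf("Q2 duty = %.3f\n", d); }
static void set_state_q3(int on)  { printf("Q3 %s\n", on ? "on" : "off"); }
static void set_state_q4(int on)  { printf("Q4 %s\n", on ? "on" : "off"); }

/* Route the control-loop duty cycle D to the high-frequency leg (Q1/Q2)
   and drive the line-frequency leg (Q3/Q4) according to VAC polarity. */
static void update_totem_pole_pwm(ac_polarity_t pol, float d)
{
    if (pol == AC_POSITIVE) {      /* AC line above neutral                */
        set_duty_q2(d);            /* Q2 is the boost switch               */
        set_duty_q1(1.0f - d);     /* Q1 is the synchronous switch         */
        set_state_q4(1);           /* Q4 stays on for the whole half cycle */
        set_state_q3(0);
    } else {                       /* AC neutral above line                */
        set_duty_q1(d);            /* roles swap: Q1 becomes boost switch  */
        set_duty_q2(1.0f - d);
        set_state_q3(1);
        set_state_q4(0);
    }
}

int main(void)
{
    update_totem_pole_pwm(AC_POSITIVE, 0.35f);
    update_totem_pole_pwm(AC_NEGATIVE, 0.35f);
    return 0;
}
```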
Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:
- An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
- A firmware-based average current-mode controller.
- A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments
I’ll introduce these function blocks one by one.
The ADC
An ADC is the fundamental element for an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:
Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:

To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.
This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).
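Although the equation images are not reproduced here, Equations 1 and 2 are the standard 12-bit scalings between volts and counts. A minimal sketch follows, assuming a 4095-count full scale against the 3.3-V reference (some converters scale by 4096 instead):

```c
#include <stdio.h>
#include <stdint.h>

#define ADC_FULL_SCALE 4095.0f   /* 12-bit converter; some devices scale by 4096 */
#define ADC_VREF       3.3f      /* reference voltage in volts                   */

/* Equation 1 (illustrative form): analog input -> ADC counts */
static uint16_t volts_to_counts(float vin)
{
    return (uint16_t)(vin / ADC_VREF * ADC_FULL_SCALE + 0.5f);
}

/* Equation 2 (illustrative form): ADC counts -> analog input */
static float counts_to_volts(uint16_t counts)
{
    return (float)counts * ADC_VREF / ADC_FULL_SCALE;
}

int main(void)
{
    uint16_t raw = volts_to_counts(1.65f);   /* mid-scale input */
    printf("1.65 V -> %u counts -> %.3f V\n", raw, counts_to_volts(raw));
    return 0;
}
```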
Input AC voltage sensing
The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments
The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as big as possible. Hence, the voltage divider for VAC should follow Equation 3:

where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure.
Adding a small capacitor (C) with low equivalent series resistance (ESR) in the voltage divider can remove any potential high-frequency noise; however, you should place C as close as possible to the ADC pin.
Two ADCs measure the AC line and neutral voltages; subtracting the two readings using firmware will obtain the VAC signal.
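As a worked example, assuming the divider ratio is chosen so that VAC_MAX maps to the 3.3-V ADC limit: a 264-VRMS maximum input has a peak of about 264 × √2 ≈ 373 V, so the divider must attenuate by roughly 373 / 3.3 ≈ 113:1. Hypothetical values of about 1 MΩ on the high side and 8.9 kΩ on the low side give approximately that ratio while keeping divider loading and dissipation low.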
Output voltage sensing
Similarly, resistor dividers will attenuate the output voltage, as shown in Figure 4, then connect to an ADC pin. Again, adding C with low ESR in the voltage divider removes any potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments
To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:

where VOUT_OVP is the output overvoltage protection threshold.
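For example, with a hypothetical overvoltage protection threshold of 450 V, the output divider ratio should be about 3.3 / 450 ≈ 1/136 so that the OVP level lands near the top of the ADC range.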
AC current sensing
In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments
The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.
Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments
Equation 5 expresses the amplification of the Hall-effect sensor output:
As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.
Digital compensator
In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.
For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments
In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.
Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments
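As a concrete reference, a minimal PI compensator with output clamping and anti-windup reset, in the spirit of Figure 7, might look like the sketch below; the structure, gains, and names are illustrative placeholders rather than the exact code of Figure 8.

```c
typedef struct {
    float kp, ki;            /* proportional and integral gains */
    float integ;             /* integrator state                */
    float out_min, out_max;  /* output clamp range              */
} pi_ctrl_t;

/* One execution per sampling instant: r(k) is the reference, y(k) the feedback. */
static float pi_ctrl_run(pi_ctrl_t *c, float r, float y)
{
    float err = r - y;

    /* Provisional integral update and unsaturated output. */
    float integ_new = c->integ + c->ki * err;
    float u = c->kp * err + integ_new;

    if (u > c->out_max) {
        u = c->out_max;          /* clamp the output high...                 */
    } else if (u < c->out_min) {
        u = c->out_min;          /* ...or low...                             */
    } else {
        c->integ = integ_new;    /* ...and only commit the integrator when
                                    the output is unsaturated, which lets it
                                    recover from saturation (anti-windup).   */
    }
    return u;
}
```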
For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].
S/Z domain conversion
If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:
Replace s with bilinear transformation (Equation 7):
s = (2 / Ts) × (1 − z⁻¹) / (1 + z⁻¹)
where Ts is the ADC sampling period.
Then H(s) is converted to H(z), as shown in Equation 8:

Rewrite Equation 8 as Equation 9:
u(n) = A1·u(n−1) + A2·u(n−2) + B0·e(n) + B1·e(n−1) + B2·e(n−2), where the coefficients follow from the component values in Equation 6 and the sampling period Ts.
To implement Equation 9 in a digital controller, store two previous control outputs, u(n−1) and u(n−2), and two previous errors, e(n−1) and e(n−2). Then use the current error e(n) and Equation 9 to calculate the current control output, u(n).
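A minimal C sketch of that difference equation is shown below; the coefficients are placeholders to be computed offline from the analog compensator and the sampling period, not values taken from the article.

```c
/* Second-order ("2p2z") compensator realized as a difference equation:
   u(n) = A1*u(n-1) + A2*u(n-2) + B0*e(n) + B1*e(n-1) + B2*e(n-2) */
typedef struct {
    float a1, a2;        /* output-history coefficients */
    float b0, b1, b2;    /* error-history coefficients  */
    float u1, u2;        /* u(n-1), u(n-2)              */
    float e1, e2;        /* e(n-1), e(n-2)              */
} comp_2p2z_t;

static float comp_2p2z_run(comp_2p2z_t *c, float e)
{
    float u = c->a1 * c->u1 + c->a2 * c->u2
            + c->b0 * e    + c->b1 * c->e1 + c->b2 * c->e2;

    /* Shift the histories for the next sampling instant. */
    c->u2 = c->u1;  c->u1 = u;
    c->e2 = c->e1;  c->e1 = e;
    return u;
}
```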
Digital PWM generation
A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.
Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments
Programming the PERIOD register of the PWM generator determines the switching frequency. For up-count and down-count mode, Equation 10 calculates the PERIOD register value as:

where fclk is the counter clock frequency and fsw is the desired switching frequency.
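As a quick numerical check, assuming Equation 10 reduces to PERIOD = fclk / fsw: with a 100-MHz counter clock and a 100-kHz switching frequency, the PERIOD register is programmed to 1,000.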
For the up-down count mode, Equation 11 calculates the PERIOD register value as:

Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments
Equation 12 shows that the COMP equals the current loop GI output multiplied by the switching period:
The higher the COMP value, the bigger the D.
To prevent shoot-through between the top switch and the bottom switch, adding a delay on the rising edge of PWMA and the rising edge of PWMB inserts dead time between PWMA and PWMB. This delay is programmable, which means that it’s possible to dynamically adjust the dead time to optimize performance.
Blocks in digital-controlled PFC
Now that you have learned about the blocks used in digital-controlled PFC, it’s time to close the control loop. In the next installment, I’ll discuss how to write firmware to implement an average current-mode controller.
Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Reference
- “C2000 Digital Control Library User’s Guide.” TI literature No. SPRUID3, January 2017.
Related Content
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
The post How to design a digital-controlled PFC, Part 1 appeared first on EDN.
CEA-Leti launches multi-lateral program to accelerate AI with micro-LED data links
AlixLabs raises €15m in Series A funding round to accelerate APS beta testing
onsemi authorizes $6bn share repurchase program
Mojo Vision adds Dr Anthony Yu to advisory board
5V mini-buck to the rescue! Fixing cooked Eight Sleep Pod 4 hub
Submitted by /u/Nerfarean
Optical combs yield extreme-accuracy gigahertz RF oscillator

It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Now, all-optical or mostly optical systems are being used to perform functions in the optical band where electronic components can’t fulfill the needs, even pushing electronic approaches out of the picture.
In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.
However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.
It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.
Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.
This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature
But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified and locked to the same miniature F-P cavity. A 6-nm broad-frequency comb with an approximately 20 GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous wave references. Two beat notes are amplified, filtered and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature
At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system takes a rack-sized chassis fitting in a single-height bay.
However, there has been significant progress on putting multiple functional blocks onto a single-chip substrate, so it wouldn’t be surprising to see a monolithic (or nearly so) device within a decade, or perhaps within just a few years.
What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?
One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled (blank) squares are based on the optical frequency division (OFD) standalone microcomb approach: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii) ; 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature
There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).
In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- Is Optical Computing in Our Future?
- Use optical fiber as an isolated current sensor?
- Analog Optical Fiber Forges RF-Link Alternative
- Silicon yields phased-arrays for optics, not just RF
- Attosecond laser pulses drive petahertz optical transistor switching
The post Optical combs yield extreme-accuracy gigahertz RF oscillator appeared first on EDN.
LPF using OP07 IC
Submitted by /u/SpecialistRare832
Students and postgraduate students of Igor Sikorsky Kyiv Polytechnic Institute are recipients of 🇺🇸 Simulmedia scholarships for young researchers!
Dave Morgan, founder and CEO of the American company Simulmedia, visited Kyiv Polytechnic. He met with the scholarship recipients: master's students and postgraduate students of the Faculty of Applied Mathematics (FPM) studying in the F1 Applied Mathematics specialty.
Igor Sikorsky Kyiv Polytechnic Institute is among the 1,000 most sustainable universities in the world according to QS Sustainability 2026!
According to the QS World University Rankings: Sustainability 2026, our university demonstrates significant progress and improves its standing across all levels of assessment:
KPI alumnus Taras Karpiuk has been awarded the title of Hero of Ukraine with the Order of the Gold Star (posthumously)
During a ceremony presenting state awards to Ukrainian soldiers, President of Ukraine Volodymyr Zelenskyy handed Ukraine's highest state award, the Gold Star of Taras Karpiuk, a graduate of the Faculty of Sociology and Law (FSP) of Igor Sikorsky Kyiv Polytechnic Institute, to his brother Yurii, a KPI alumnus of the APEPS department of TEF (now NN IATE) and an active volunteer soldier, and to his father Yurii Nesterenko, head of KPI's civil protection office.



