Feed aggregator

ams OSRAM launches FIREFLY SFH 4030B and SFH 4060B IREDs

Semiconductor today - Thu, 11/20/2025 - 17:39
ams OSRAM GmbH of Premstätten, Austria, and Munich, Germany, has launched the new-generation FIREFLY SFH 4030B and SFH 4060B infrared light-emitting diodes (IREDs), which are claimed to set new standards for infrared LEDs in augmented reality (AR) and virtual reality (VR) applications such as eye tracking in smart glasses and AR/VR headsets...

📰 Newspaper "Київський політехнік" No. 41-42 for 2025 (.pdf)

News - Thu, 11/20/2025 - 17:07
KP Information, Thu, 11/20/2025 - 17:07

Issue No. 41-42 of the "Київський політехнік" newspaper for 2025 has been published

Compute: Powering the transition from Industry 4.0 to 5.0

EDN Network - Thu, 11/20/2025 - 16:00

Industry 4.0 has transformed manufacturing, connecting machines, automating processes, and changing how factories think and operate. But its success has revealed a new constraint: compute. As automation, AI, and data-driven decision-making scale exponentially, the world’s factories are facing a compute challenge that extends far beyond performance. The next industrial era—Industry 5.0—will bring even more compute demand as it builds on the IoT to improve collaboration between humans and machines, industry, and the environment.

Progress in this next wave of industrial development is dependent on advances at the semiconductor level. Advances in chip design, materials science, and process innovation are essential. Alongside this, there needs to be a reimagining of how we power industrial intelligence, not just in terms of the processing capability but in how that capability is designed, sourced, and sustained.

Rethinking compute for a connected future

The exponential rise of data and compute has placed intense pressure on the chips that drive industrial automation. AI-enabled systems, predictive maintenance, and real-time digital twins all require compute to move closer to where data is created: at the edge. However, edge environments come with tight energy, size, and cooling constraints, creating a growing imbalance between compute demand and power availability.

AI and digital triplets, which build on traditional digital twin models by leveraging agentic AI to continuously learn from and analyze data in the field, have pushed processing requirements closer to where data is created. In edge computing, where processing takes place directly within sensing and measurement devices, that load can be intensive. This decentralization introduces new power and efficiency pressures on infrastructure that wasn't designed for such intensity.

The result is a growing tension between performance demands and the limits of semiconductor manufacturing. Businesses must think more broadly about energy consumption, heat management, power balance, and raw-materials sourcing. Sustainability can no longer be treated as an unwarranted cost or a compliance exercise; it is becoming a new indicator of competitiveness, where energy-efficient, low-emission compute enables manufacturers to meet growing data reliance without exceeding environmental limits.

Businesses must take these challenges seriously, as the demand for compute will only escalate with Industry 5.0. AI will become more embedded, and the data it relies on will grow in scale and sophistication.

If manufacturing designers dismiss these issues, they run the risk of bottlenecking their productivity with poor efficiency and sustainability. This means that when chip designers optimize for Industry 5.0 applications, they should consider responsibility, efficiency, and longevity alongside performance and cost. The challenge is no longer just “can we build faster systems?” It’s now “can we build systems that endure environmentally, economically, and geopolitically?”

Innovation starts at the material level

The semiconductor revolution of Industry 5.0 won’t be defined solely by faster chips but by the science and sustainability embedded in how those chips are made. For decades, semiconductor progress has been measured in nanometers; the next leap forward will be measured in materials. Advances in compounds such as silicon carbide and gallium nitride are improving chip performance and transforming how the industry approaches sustainability, supply chain resilience, and sovereignty.

Advances in chip design, materials science, and process innovation are essential in the next wave of industrial development. (Source: Adobe Stock)

These materials allow for higher power efficiency and longer lifespans, reducing energy consumption across industrial systems. Combined with cleaner fabrication techniques such as ambient temperature processing and hydrogen-based chemistries, they mark a significant step toward sustainable compute. The result is a new paradigm where sustainability no longer comes at an artificial premium but is an inherent feature of technological progress.

Process innovations, such as ambient temperature fabrication and green hydrogen, offer new ways to reduce environmental footprint while improving yield and reliability. Beyond the technology itself and material innovations, more focus should be placed on decentralization and alternative sources of raw materials. This will empower businesses and the countries they operate in to navigate geopolitical and supply chain challenges.

Collaboration is the new competitive edge

The compute challenge that Industry 5.0 presents isn't an isolated problem to solve. The demand and responsibility for change don't lie with a single company, government, or research body. It requires an ecosystem mindset, in which collaboration replaces competition in key areas of innovation and infrastructure.

Collaboration between semiconductor manufacturers, industrial original equipment manufacturers, policymakers, and researchers is essential to accelerate energy-efficient design and responsible sourcing. Interconnected and shared platforms within the semiconductor ecosystem de-risk technology investments and ensure that the gains in sustainability and resilience accrue to the entire industrial ecosystem, not just to individual players.

In the next era of industrial progress, the most competitive organizations will be those that collaborate, with shared innovation and progress as the goal.

Powering compute in the Industry 5.0 transition

The evolution from Industry 4.0 to Industry 5.0 is more than a technological upgrade; it represents a change in attitude around how digital transformation is approached in industrial settings. This new era will see new approaches to technological sustainability, sovereignty, and collaboration take their place alongside productivity and speed. Compute will be the central driver of this transition. Materials, processes, and partnerships will determine whether the industrial sector can grow without outpacing its own energy and sustainability limits.

Industry 5.0 presents a vision of industrialization that gives back more than it takes, amplifying both productivity and possibility. The transition is already underway. Now, businesses need to ensure innovation, efficiency, and resilience evolve together to power a truly sustainable era of compute.

The post Compute: Powering the transition from Industry 4.0 to 5.0 appeared first on EDN.

A holiday shopping guide for engineers: 2025 edition

EDN Network - Thu, 11/20/2025 - 15:00

As of this year, EDN has published my odes to holiday-excused consumerism for more than a half-decade straight (and intentionally ahead of Black Friday, if you hadn't already deduced), now nearing ten editions in total. Here are the 2019, 2020, 2021, 2022, 2023, and 2024 versions; I skipped a few years between 2014 and its successors.

As usual, I’ve included up-front links to prior-year versions of the Holiday Shopping Guide for Engineers because I’ve done my best here to not regurgitate any past recommendations; the stuff I’ve previously suggested largely remains valid, after all. That said, it gets increasingly difficult each year not to repeat myself! And as such, I’ve “thrown in the towel” this year, at least to some degree…you’ll find a few repeat categories this time, albeit with new product suggestions within them.

Without any further ado, and as usual, ordered solely in the order in which they initially came out of my cranium…

A Windows 11-compatible (or alternative O/S-based) computer

Microsoft’s general support for Windows 10 ended nearly a month ago (on October 14, to be exact) as I’m writing these words. For you Windows users out there, options exist for extending Windows 10 support updates (ESUs) for another year on consumer-licensed systems, both paid (spending $30 or redeeming 1,000 Microsoft Rewards points, with both ESU options covering up to 10 devices) and free (after syncing your PC settings).

If you’re an IT admin, the corporate license ESU program specifics are different; see here. And, as I covered in hands-on detail a few months back, (unsanctioned) options also exist for upgrading officially unsupported systems to Windows 11, although I don’t recommend relying on them for long-term use (assuming the hardware-hack attempt is successful at all, that is). As I wrote back in June:

The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.

You could also convert your existing PC over to run a different O/S, such as ChromeOS Flex (originally Neverware’s CloudReady, then acquired and now maintained by Google) or a Linux distro of your preference. For that matter, you could also just “get a Mac”. That said, any of these options will likely also compel conversions to new apps for the new O/S foundation. The aggregate learning curve from all these software transitions can end up being a “bridge too far”.

Instead, I’d suggest you just “bite the bullet” and buy a new PC for yourself and/or others for the holidays, before CPUs, DRAM, SSDs, and other building block components become even more supply-constrained and tariff-encumbered than they are now, and to ease the inevitable eventual transition to Windows 11.

Then donate your old hardware to charity for someone else to O/S-convert and extend its useful life. That’s what I’ll be doing, for example, with my wife’s Dell Inspiron 5570, which, as it turns out, wasn’t Windows 11-upgradeable after all.

Between now and next October, when the Windows 10 ESU runs out (unless the deadline gets extended again), we’ll replace it with the Dell 16 Plus (formerly Inspiron 16 Plus) in the above photo.

An AI-enhanced mobile device

The new Dell laptop I just mentioned, which we’d bought earlier this summer (ironically just prior to Microsoft’s unveiling of the free Windows 10 ESU option), is compatible with Microsoft’s Copilot+ specifications for AI-enhanced PCs by virtue of the system’s Intel Core Ultra 7 256V CPU with an integrated 47 TOPS NPU.

That said, although its support for local (vs conventional cloud) AI inference is nice from a future-proofing standpoint, there’s not much evidence of compelling on-client AI benefits at this early stage, save perhaps for low-latency voice interface capabilities (not to mention broader uninterrupted AI-based functionality when broadband goes down).

The current situation is very different when it comes to fully mobile devices. Yes, I know, laptops also have built-in batteries, but they often still spend much of their operating life AC-tethered, and anyway, their battery packs are much beefier than the ones in the smartphones and tablets I’m talking about here.

Local AI processing is not only faster than to-and-back-from-cloud roundtrip delays (particularly lengthy over cellular networks), but it also doesn’t gobble up precious limited-monthly-allocation data. Then there’s the locally stored-and-processed data enhanced privacy factor to consider, along with the oft-substantial power saving accrued by not needing to constantly leverage the mobile device’s Wi-Fi and cellular data subsystems.

You may indeed believe (as, full disclosure, I do) that AI features are of limited-at-best benefit at the moment, at least for the masses. But I think we can also agree that ongoing widespread-and-expanding and intense industry attention on AI will sooner or later cultivate compelling capabilities.

That's why I've showcased mobile devices' AI attributes in recent years' announcement coverage (such as that of Google's Pixel 10 series shown in the photo above), and why I recommend them, again from a future-proofing angle if nothing else, if you're (and/or yours are) due for a gadget upgrade this year. Meanwhile, I'll soldier on with my Pixel 7s.

Audio education resources

As regular readers likely already realize, audio has received particular showcase attention in my blog posts and teardowns this past year-plus (a trend which will admittedly also likely extend into at least next year). This provided, among other things, an opportunity for me to refresh and expand my intellectual understanding of the topic.

I kept coming across references to Bob Cordell, mentioning both his informative website and his classic tomes, Designing Audio Power Amplifiers (make sure you purchase the latest 2nd edition, published in 2019, whose front cover is shown above) and the newer Designing Audio Circuits and Systems, released just last year.

Fair warning: neither book is inexpensive, especially in hardback but even in paperback, and neither is available in a lower-priced Kindle version. That said, per both the reviews I've seen from others and my own impressions, they're well worth the investment.

Another worthwhile read, this time complete with plenty of humor scattered throughout, is Schiit Happened: The Story of the World’s Most Improbable Start-Up, in this case available in both inexpensive paperback and even more cost-effective Kindle formats. Written by Jason Stoddard and Mike Moffat, the founders of Schiit Audio, whom I’ve already mentioned several times this year, it’s also available for free on the Head-Fi Forum, where Jason has continued his writing. But c’mon, folks, drop $14.99 (or $4.99) to support a scrappy U.S. audio success story.

As far as audio-related magazines go, I first off highly recommend a subscription to audioXpress. Generalist electronics design publications like EDN are great, of course, but topic-focused coverage like that offered by audioXpress for audio design makes for an effective information companion.

On the other end of the product development chain, where gear is purchased and used by owners, there’s Stereophile, for which I’ve also been a faithful reader for more years than I care to remember. And as for the creation, capture, mastering, and duplication of the music played on those systems, I highly recommend subscriptions to Sound on Sound and, if your budget allows for a second publication, Recording. Consistently great stuff, all of it.

Finally, as an analogy to my earlier EDN-plus-audioXpress pairing, back in 2021 I recommended memberships to generalist ACM and/or IEEE professional societies. This time, I’ll supplement that suggestion with an audio-focused companion, the AES (Audio Engineering Society).

Back when I was a full-time press guy with EDN, I used to be able to snag complimentary admission to the twice-yearly AES conventions along with other organization events, which were always rich sources of information and networking connection cultivation.

To my dying day, I will always remember one particularly fascinating lecture, which correlated Ludwig van Beethoven's progressive hearing degradation and its (presenter-presumed) emotional and psychological effects to the evolution of the music styles that he composed over time. Then there were the folks from Fraunhofer whom I first met at an AES convention, kicking off a longstanding professional collaboration. And…

Audio gear

For a number of years, my Drop- (formerly Massdrop-) sourced combo of the Grace Design Standard DAC and Objective 2 Headphone Amp Desktop Edition afforded me a sonically enhanced alternative to my computer's built-in DAC and amp for listening to music over plugged-in headphones and powered speakers:

As I’ve “teased” in a recent writeup, however, I recently upgraded this unbalanced-connection setup to a four-component Schiit stack, complete with a snazzy aluminum-and-acrylic rack:

Why?

Part of the reason is that I wanted to sonically experience a tube-based headphone amp for myself, both in an absolute sense and relative to solid-state Schiit amplifiers also in my possession.

Part of it is that all these Schiit-sourced amps also integrate preamp outputs for alternative-listening connection to an external power amp-plus-passive speaker set:

Another part of the reason is that I’ve now got a hardware equalizer as an alternative to software EQ, the latter (obviously) only relevant for computer-sourced audio. And relatedly, part of it is that I’ve also now got a hardware-based input switcher, enabling me to listen to audio coming not only from my PC but also from another external source. What source, you might ask?

Why, one of the several turntables that I also acquired and more broadly pressed into service this past year, of course!

I've really enjoyed reconnecting with vinyl and accumulating an LP collection (although my wallet has admittedly taken a beating in the process), and I encourage you (and yours) to do the same. Stand by for a more detailed description of my expanded office audio setup, including its balanced "stack" counterpart, in a dedicated writeup to be published shortly.

For sonically enhancing the rest of the house, where a computer isn’t the primary audio source, companies such as Bluesound and WiiM sell various all-in-one audio streamers, both power amplifier-inclusive (for use with traditional passive speakers) and amp-less (for pairing with powered speakers or intermediary connection to a standalone external amp).

A Bluesound Node N130, for example, has long resided at the “man cave” half of my office:

And the class D amplifier inside the “Pro” version of the WiiM Amp, which I plan to press into service soon in my living room, even supports the PFFB feature I recently discussed:

(Apple-reminiscent Space Gray shown and self-owned; Dark Gray and Silver also available)

More developer hardware

Here’s the other area where, as I alluded to in the intro, I’m going to overlap a bit with a past-year Holiday Shopping Guide. Two years ago, I recommended some developer kits from both the Raspberry Pi Foundation and NVIDIA, including the latter’s then-$499 Jetson Orin Nano:

It's subsequently been "replaced", as well as notably price-reduced, by the Orin Nano Super Developer Kit at $249.

Why the quotes around “replaced”? That’s because, as good news for anyone who acted on my earlier recommendation, the hardware’s exactly the same as before: “Super” is solely reflective of an enhanced software suite delivering claimed generative AI performance gains of up to 1.7x, and freely available to existing Jetson Orin Nano owners.

More recently, last month, NVIDIA unveiled the diminutive $3,999 DGX Spark:

with compelling potential, both per company claims and initial hands-on experiences:

As a new class of computer, DGX Spark delivers a petaflop of AI performance and 128GB of unified memory in a compact desktop form factor, giving developers the power to run inference on AI models with up to 200 billion parameters and fine-tune models of up to 70 billion parameters locally. In addition, DGX Spark lets developers create AI agents and run advanced software stacks locally.

albeit along with, it should also be noted, an irregular development history and some troubling early reviews. The system was initially referred to as Project DIGITS when unveiled publicly at the January 2025 CES. Its application processor, originally referred to as the N1X, is now renamed the GB10. Co-developed by NVIDIA (who contributed the Grace Blackwell GPU subsystem) and MediaTek (who supplied the multi-core CPU cluster and reportedly also handled full SoC integration duties), it was originally intended for—and may eventually still show up in—Arm-based Windows PCs.

But repeated development hurdles have (reportedly) delayed the actualization of both SoC and system shipment aspirations, and lingering functional bugs preclude Windows compatibility (therefore explaining the DGX Spark’s Linux O/S foundation).

More generally, just a few days ago as I write these words, MAKE Magazine’s latest issue showed up in my mailbox, containing the most recent iteration of the publication’s yearly “Guide to Boards” insert. Check it out for more hardware ideas for your upcoming projects.

A smart ring

Regular readers have likely also noticed my recent series of writeups on smart rings, comprising both an initial overview and subsequent reviews based on fingers-on evaluations.

As I write these words in mid-November, Ultrahuman’s products have been pulled from the U.S. market due to patent-infringement rulings, although they’re still available elsewhere in the world. RingConn conversely concluded a last-minute licensing agreement, enabling ongoing sales of its devices worldwide, including in the United States.

And as for the instigator of the patent infringement actions, market leader Oura, my review of the company’s Gen3 smart ring will appear at EDN shortly after you read these words, with my eval of the latest-generation Ring 4 (shown above) to follow next month.

Smart rings’ Li-ion batteries, like those of any device with fully integrated cells, won’t last forever, so you need to go into your experience with one of them eyes-open to the reality that it’ll ultimately be disposable (or, in my case, transform into a teardown project).

That said, the technology is sufficiently mature at this point that I feel comfortable recommending them to the masses. They provide useful health insights, even though they tend to notably overstate step counts for those who use computer keyboards a lot. And unlike a smart watch or other wrist-based fitness tracker, you don’t need to worry (so much, at least) about color- and style-coordinating a smart ring with the rest of your outfit ensemble.

(Not yet a) pair of smart glasses

Conversely, alas, I still can’t yet recommend smart glasses to anyone but early adopters (like me; see above). Meta’s latest announced device suite, along with various products from numerous (and a growing list of) competitors, suggests that this product category is still relatively immature, therefore dynamic in its evolutionary nature. I’d hate to suggest something for you to buy for others that’ll be obsolete in short order. For power users like you, on the other hand…

Happy holidays!

And with that, having just passed through 2,500 words, I’ll close here. Upside: plenty of additional presents-to-others-and/or-self ideas are now littering the cutting-room floor, so I’ve already got no shortage of topics for next year’s edition! Until then, sound off in the comments, and happy holidays!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A holiday shopping guide for engineers: 2025 edition appeared first on EDN.

Ohio State University buys Aixtron CCS MOCVD system

Semiconductor today - Thu, 11/20/2025 - 14:16
The Ohio State University (OSU) has purchased a Close Coupled Showerhead system for metal-organic chemical vapor deposition (CCS MOCVD) from Aixtron SE of Herzogenrath, near Aachen, Germany. The tool will be used for epitaxy of gallium oxide (Ga2O3) and aluminum gallium oxide (AlGaO) for materials and device development on 100mm substrates...

Pulse-density modulation (PDM) audio explained in a quick primer

EDN Network - Thu, 11/20/2025 - 09:57

Pulse-density modulation (PDM) is a compact digital audio format used in devices like MEMS microphones and embedded systems. This compact primer eases you into the essentials of PDM audio.

Let's begin by revisiting a ubiquitous PDM MEMS microphone module based on the MP34DT01-M, an omnidirectional digital MEMS audio sensor that continues to serve as a reliable benchmark in embedded audio design.

Figure 1 A MEMS microphone mounted on a minuscule module detects sound and produces a 1-bit PDM signal. Source: Author

When properly implemented, PDM can digitally encode high-quality audio while remaining cost-effective and easy to integrate. As a result, PDM streams are now widely adopted as the standard data output format for MEMS microphones.

On paper, the anatomy of a PDM microphone boils down to a few essential building blocks:

  • MEMS microphone element: typically a capacitive MEMS structure, unlike the electret capsules found in analog microphones.
  • Analog preamplifier: boosts the low-level signal from the MEMS element for further processing.
  • PDM modulator: converts the analog signal into a high-frequency, 1-bit pulse-density modulated stream, effectively acting as an integrated ADC.
  • Digital interface logic: handles timing, clock synchronization, and data output to the host system.

Next is the function block diagram of T3902, a digital MEMS microphone that integrates a microphone element, impedance converter amplifier, and fourth-order sigma-delta (Σ-Δ) modulator. Its PDM interface enables time multiplexing of two microphones on a single data line, synchronized by a shared clock.

Figure 2 Functional block diagram outlines the internal segments of the T3902 digital MEMS microphone. Source: TDK

The analog signal generated by the MEMS sensing element in a PDM microphone—sometimes referred to as a digital microphone—is first amplified by an internal analog preamplifier. This amplified signal is then sampled at a high rate and quantized by the PDM modulator, which combines the processes of quantization and noise shaping. The result is a single-bit output stream at the system’s sampling rate.

Noise shaping plays a critical role by pushing quantization noise out of the audible frequency range, concentrating it at higher frequencies where it can be more easily filtered out. This ensures relatively low noise within the audio band and higher noise outside it.

The microphone’s interface logic accepts a master clock signal from the host device—typically a microcontroller (MCU) or a digital signal processor (DSP)—and uses it to drive the sampling and bitstream transmission. The master clock determines both the sampling rate and the bit transmission rate on the data line.

Each 1-bit sample is asserted on the data line at either the rising or falling edge of the master clock. Most PDM microphones support stereo operation by using edge-based multiplexing: one microphone transmits data on the rising edge, while the other transmits on the falling edge.

During the opposite edge, the data output enters a high-impedance state, allowing both microphones to share a single data line. The PDM receiver is then responsible for demultiplexing the combined stream and separating the two channels.
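Assuming the edge-multiplexing scheme just described, the receiver's demultiplexing step reduces to splitting the captured stream into rising-edge and falling-edge bits. A minimal Python sketch (the sample data is made up purely for illustration):

```python
def demux_stereo_pdm(interleaved):
    """Split an interleaved stereo PDM capture into two channels.

    Bits captured on the rising clock edge (even indices here) belong
    to one microphone; bits captured on the falling edge (odd indices)
    belong to the other.
    """
    left = interleaved[0::2]   # rising-edge bits
    right = interleaved[1::2]  # falling-edge bits
    return left, right

# Tiny illustrative capture: left and right bits alternate per edge.
stream = [1, 0, 1, 1, 0, 0, 1, 1]
left, right = demux_stereo_pdm(stream)
print(left)   # [1, 1, 0, 1]
print(right)  # [0, 1, 0, 1]
```

In real hardware the "even/odd index" mapping is established by which edge each microphone's select pin assigns it to; the slicing above simply mirrors that convention in software.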

As a side note, the aforesaid microphone module is hardwired to treat data as valid when the clock signal is low.

The magic behind 1-bit audio streams

Now, back to basics. PDM is a clever way to represent a sampled signal using just a stream of single bits. It relies on delta-sigma conversion, also known as sigma-delta, and it's the core technology behind many oversampling ADCs and DACs.

At first glance, a one-bit stream seems hopelessly noisy. But here is the trick: by sampling at very high rates and applying noise-shaping techniques, most of that noise is pushed out of the audible range—above 20 kHz—where it no longer interferes with the listening experience. That is how PDM preserves audio fidelity despite its minimalist encoding.

There is a catch, though. You cannot properly dither a 1-bit stream, which means a small amount of distortion from quantization error is always present. Still, for many applications, the trade-off is worth it.

Diving into PDM conversion and reconstruction, we begin with the direct sampling of an analog signal at a high rate—typically several megahertz or more. This produces a pulse-density modulation stream, where the density of 1s and 0s reflects the amplitude of the original signal.

Figure 3 An example that renders a single cycle of a sine wave as a digital signal using pulse density modulation. Source: Author

Naturally, the encoding relies on 1-bit delta-sigma modulation: a process that uses a one-bit quantizer to output either a 1 or a 0 depending on the instantaneous amplitude. A 1 represents a signal driven fully high, while a 0 corresponds to fully low.

And, because the audio frequencies of interest are much lower than the PDM’s sampling rate, reconstruction is straightforward. Passing the PDM stream through a low-pass filter (LPF) effectively restores the analog waveform. This works because the delta-sigma modulator shapes quantization noise into higher frequencies, which the low-pass filter attenuates, preserving the desired low-frequency content.

Inside digital audio: Formats at a glance

It goes without saying that in digital audio systems, PCM, I²S, PWM, and PDM each serve distinct roles tailored to specific needs. Pulse code modulation (PCM) remains the most widely used format for representing audio signals as discrete amplitude samples. Inter-IC Sound (I²S) excels in precise, low-latency audio data transmission and supports flexible stereo and multichannel configurations, making it a popular choice for inter-device communication.

Though not typically used for audio signal representation, pulse width modulation (PWM) plays a vital role in audio amplification—especially in Class D amplifiers—by encoding amplitude through duty cycle variation, enabling efficient speaker control with minimal power loss.

On a side note, you can convert a PCM signal to PDM by first increasing its sample rate (interpolation), then reducing its resolution to a single bit. Conversely, a PDM signal can be converted back to PCM by reducing its sampling rate (decimation) and increasing its word length. In both cases, the ratio of the PDM bit rate to the PCM sample rate is known as the oversampling ratio (OSR).
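The decimation direction can be sketched with naive block averaging; production designs use CIC or FIR decimation filters instead, and the 3.072 MHz/48 kHz clock figures below are typical values assumed for illustration, not numbers from the text:

```python
def pdm_to_pcm(bits, osr):
    """Naive PDM -> PCM decimation: average each block of `osr` bits.

    Block averaging is the simplest stand-in for the CIC/FIR
    decimation filters used in real designs.
    """
    pcm = []
    for i in range(0, len(bits) - osr + 1, osr):
        block = bits[i:i + osr]
        pcm.append(2 * sum(block) / osr - 1)  # map back into [-1, 1]
    return pcm

# Oversampling ratio for an assumed 3.072 MHz PDM clock and 48 kHz PCM rate:
osr = 3_072_000 // 48_000
print(osr)  # 64

# 64 bits that are 75% ones decimate to one PCM sample of 0.5.
print(pdm_to_pcm([1, 1, 1, 0] * 16, osr))  # [0.5]
```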

Crisp audio for makers: PDM to power simplified

Cheerfully compact and maker-friendly PDM input Class D audio power amplifier ICs simplify the path from microphone to speaker. By accepting digital PDM signals directly—often from MEMS mics—they scale down both complexity and component count. Their efficient Class D architecture keeps the power draw low and heat minimal, which is ideal for battery-powered builds.

That is to say, audio ICs like MAX98358 require minimal external components, making prototyping a pleasure. With filterless Class D output and built-in features, they simplify audio design, freeing makers to focus on creativity rather than complexity.

Side note: for those eager to experiment, ample example code is available online for SoCs like the ESP32-S3, which can use a sigma-delta driver to produce a modulated output on a GPIO pin. With a passive or active low-pass filter, this output can then be shaped into a clean, usable analog signal.

Well, the blueprint below shows an active low-pass filter using the Sallen & Key topology, arguably the simplest active two-pole filter configuration you will find.

Figure 4 Circuit blueprint outlines a simple active low-pass filter. Source: Author

Echoes and endings

As usual, I feel there is so much more to cover, but let’s jump to a quick wrap-up.

Whether you are decoding microphone specs or sketching out a signal chain, understanding PDM is a quiet superpower. It is not just about 1-bit streams; it’s about how digital sound travels, transforms, and finds its voice in your design. If this primer helped demystify the basics, you are already one step closer to building smarter, cleaner audio systems.

Let’s keep listening, learning, and simplifying.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Pulse-density modulation (PDM) audio explained in a quick primer appeared first on EDN.

First PCB

Reddit:Electronics - Thu, 11/20/2025 - 03:24
First PCB

Got my first PCB delivered from JLCPCB

submitted by /u/movelikepro

MES meets the future

EDN Network - Wed, 11/19/2025 - 23:08
A data-centric MES.

Industry 4.0 focused on how automation and connectivity could transform the manufacturing landscape. Manufacturing execution systems (MES) with strong automation and connectivity capabilities thrived under the Industry 4.0 umbrella. With the recent expansion of AI usage through large language models (LLMs), Model Context Protocol, agentic AI, etc., we are entering a new era where MES and automation are no longer enough. Data produced on the shop floor can provide insights and lead to better decisions, and patterns can be analyzed and used as suggestions to overcome issues.

As factories become smarter, more connected, and increasingly autonomous, the intersection of MES, digital twins, AI-enabled robotics, and other innovations will reshape how operations are designed and optimized. This convergence is not just a technological evolution but a strategic inflection point. MES, once seen as the transactional layer of production, is transforming into the intelligence core of digital manufacturing, orchestrating every aspect of the shop floor.

MES as the digital backbone of smart manufacturing

Traditionally, MES has been the operational execution layer: tracking production orders, managing work in progress, and ensuring compliance and traceability. But today’s factories demand more. Static, transactional systems no longer suffice when decisions are required in near-real time and production lines operate with little margin for error.

The modern MES is evolving and assuming a role as an intelligent orchestrator, connecting data from machines, people, and processes. It is not just about tracking what happened; it can explain why it happened and provide recommendations on what to do next.

Modern MES ecosystems will become the digital nervous system of the enterprise, combining physical and digital worlds and handling and contextualizing massive streams of shop-floor data. Advanced technologies such as digital twins, AI robotics, and LLMs can thrive by having the new MES capabilities as a foundation.

A data-centric MES delivers contextualized information critical for digital twins to operate, and together, they enable instant visibility of changes in production, equipment conditions, or environmental parameters, contributing to smarter factories. (Source: Critical Manufacturing)

Digital twins: the virtual mirror of the factory

A digital twin is more than a 3D model; it is a dynamic, data-driven representation of the real-world factory, continuously synchronized with live operational data. It enables users to simulate scenarios and test improvements before they ever touch the physical production line. It’s easy to understand how dependent on meaningful data these systems are.

Simulating a system as complex as a production line is impossible when relying on poor or, even worse, unreliable data. This is where a data-driven MES comes to the rescue. MES sits at the crossroads of every operational transaction: It knows what’s being produced, where, when, and by whom. It integrates human activities, machine telemetry, quality data, and performance metrics into one consistent operational narrative. A data-centric MES provides exactly the abundance of contextualized information that digital twins need to operate.

Several key elements have made it possible for MES ecosystems to evolve beyond their transactional heritage into a data-centric architecture built for interoperability and analytics. These include:

  • Unified/canonical data model: MES consolidates and contextualizes data from diverse systems (ERP, SCADA, quality, maintenance) into a single model, maintaining consistency and traceability. This common model ensures that the digital twin always reflects accurate, harmonized information.
  • Event-driven data streaming: Real-time updates are critical. An event-driven MES architecture continuously streams data to the digital twin, enabling instant visibility of changes in production, equipment conditions, or environmental parameters.
  • Edge and cloud integration: MES acts as the intelligent gateway between the edge (where data is generated) and the cloud (where digital twins and analytics reside). Edge nodes pre-process data for latency-sensitive scenarios, while MES ensures that only contextual, high-value data is passed to higher layers for simulation and visualization.
  • API-first and semantic connectivity: Modern MES systems expose data through well-defined APIs and semantic frameworks, allowing digital twin tools to query MES data dynamically. This flexibility provides the capability to “ask questions,” such as machine utilization trends or product genealogy, and receive meaningful answers in a timely manner.
Robotics: from automation to autonomous optimization

It is an established fact that automation is crucial for manufacturing optimization. However, AI is bringing automation to a new level. Robotics is no longer limited to executing predefined movements; now, capable robots may learn and adapt their behavior through data.

Traditional industrial robots operate within rigidly predefined boundaries. Their movements, cycles, and tolerances are programmed in advance, and deviations are handled manually. Robots can deliver precision, but they lack adaptability: A robot cannot determine why a deviation occurs or how to overcome it. Cameras, sensors, and built-in machine-learning models provide robots with capabilities to detect anomalies in early stages, interpret visual cues, provide recommendations, or even act autonomously. This represents a shift from reactive quality control to proactive process optimization.

But for that intelligence to drive improvement at scale, it must be based on operational context. And that’s precisely where MES comes in. As in the case of digital twins, AI-enabled robots are highly dependent on “good” data, i.e., operational context. A data-centric MES ecosystem provides the context and coordination that AI alone cannot. This functionality includes:

  • Operational context: MES can provide information such as the product, batch, production order, process parameters, and their tolerances to the robot. All of this information provides the required context for better decisions, aligned with process definition and rules.
  • Real-time feedback: Robots send performance data back to the MES, which validates it against known thresholds and logs the results for traceability and future use.
  • Closed-loop control: MES can authorize adaptive changes (speed, temperature, or torque) based on recommendations inferred from past patterns while maintaining compliance.
  • Human collaboration: Through MES dashboards and alerts, operators can monitor and oversee AI recommendations, combining human judgment with machine precision.

For this synergy to work, modern MES ecosystems must support:

  • High-volume data ingestion from sensors and vision systems
  • Edge analytics to pre-process robotic data close to the source
  • API-based communication for real-time interaction between control systems and enterprise layers
  • Centralized and contextualized data lakes storing both structured and unstructured contextualized information essential for AI model training
MES at the center of innovation

Every day, we see how incredibly fast technology evolves and how instantly its applications reshape entire industries. The wave of innovation fueled by AI, LLMs, and agentic systems is redefining the boundaries of manufacturing.

MES, digital twins, and robotics can be better interconnected, contributing to smarter factories. There is no crystal ball to predict where this transformation will lead, but one thing is undeniable: Data sits at the heart of it all—not just raw data but meaningful, contextualized, and structured information. On the shop floor, this kind of data is pure gold.

MES, by its very nature, occupies a privileged position: It is becoming the bridge between operations, intelligence, and strategy. Yet to capitalize on that position, the modern MES must evolve beyond its transactional roots to become a true, data-driven ecosystem: open, scalable, intelligent, and adaptive. It must interpret context, enable real-time decisions, augment human expertise, and serve as the foundation upon which digital twins simulate, AI algorithms learn, and autonomous systems act.

This is not about replacing people with technology. When an MES provides workers with AI-driven insights grounded in operational reality, and when it translates strategic intent into executable actions, it amplifies human judgment rather than diminishing it.

The convergence is here. Technology is maturing. The competitive pressure is mounting. Manufacturers now face a defining choice: Evolve the MES into the intelligent heart of their operations or risk obsolescence as smarter, more agile competitors pull ahead.

Those who make this leap, recognizing that the future belongs to factories where human ingenuity and AI work as a team, will not just modernize their operations; they will secure their place in the future of manufacturing.

The post MES meets the future appeared first on EDN.

KPI students are the winners of the Huawei Student Tech Challenge 2025!

Новини - Wed, 11/19/2025 - 22:44
KPI students are the winners of the Huawei Student Tech Challenge 2025!
Image
kpi Wed, 11/19/2025 - 22:44
Text

During the annual team competition among students of technical specialties, participants created MVP products for real business cases under the mentorship of Huawei experts.

Participation of Igor Sikorsky Kyiv Polytechnic Institute in the Ukrainian Week "Discover Ukraine: a Week of Knowledge and Culture" in the United Kingdom

Новини - Wed, 11/19/2025 - 22:40
Participation of Igor Sikorsky Kyiv Polytechnic Institute in the Ukrainian Week "Discover Ukraine: a Week of Knowledge and Culture" in the United Kingdom
Image
kpi Wed, 11/19/2025 - 22:40
Text

🇺🇦🇬🇧 Vice-Rector for Research Serhii Stirenko and Oksana Vovk, director of the Institute of Energy Saving and Energy Management, visited universities in the United Kingdom as part of a Ukrainian delegation to develop further cooperation in higher education and science (within the UK-Ukraine Twinning Initiative, joint Horizon Europe projects, and more). The delegation visited Cardiff University, Birkbeck University of London, and University College London (UCL).

How to design a digital-controlled PFC, Part 1

EDN Network - Wed, 11/19/2025 - 15:00
Shifting from analog to digital control

An AC/DC power supply with input power greater than 75 W requires power factor correction (PFC) to:

  • Take the universal AC input (90 V to 264 V) and rectify that input to a DC voltage.
  • Maintain the output voltage at a constant level (usually 400 V) with a voltage control loop.
  • Force the input current to follow the input voltage such that the electronic load appears to be a pure resistor with a current control loop.

Designing an analog-controlled PFC is relatively easy because the voltage and current control loops are already built into the controller, making it almost plug-and-play. The power-supply industry is currently transitioning from analog control to digital control, especially in high-performance power-supply design. In fact, nearly all newly designed power supplies in data centers use digital control.

Compared to analog control, digital-controlled PFC provides lower total harmonic distortion (THD), a better power factor, and higher efficiency, along with integrated housekeeping functions.

Switching from analog to digital control is not easy, however; you will face new challenges as continuous signals are represented in a discrete format. And unlike an analog controller, the MCU used in digital control is essentially a “blank” chip; you must write firmware to implement the control algorithms.

Writing the correct firmware can be a headache for someone who has never done this before. To help you learn digital control, in this article series, I’ll provide a step-by-step guide on how to design a digital-controlled PFC, using totem-pole bridgeless PFC as a design example to illustrate the advantages of digital control.

A digital-controlled PFC system 

Among all PFC topologies, totem-pole bridgeless PFC provides the best efficiency. Figure 1 shows a typical totem-pole bridgeless PFC structure.

Figure 1 Totem-pole bridgeless PFC where Q1 and Q2 are high-frequency switches and will work as either a PFC boost switch or synchronous switch based on the VAC polarity. Source: Texas Instruments

Q1 and Q2 are high-frequency switches. Based on VAC polarity, Q1 and Q2 work alternately as the PFC boost switch or the synchronous switch.

At a positive AC cycle (where the AC line is higher than neutral), Q2 is the boost switch, while Q1 works as a synchronous switch. The pulse-width modulation (PWM) signal for Q1 and Q2 are complementary: Q2 is controlled by D (the duty cycle from the control loop), while Q1 is controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle.

At a negative AC cycle (where the AC neutral is higher than line), the functionality of Q1 and Q2 swaps: Q1 becomes the boost switch, while Q2 works as a synchronous switch. The PWM signal for Q1 and Q2 are still complementary, but D now controls Q1 and 1-D controls Q2. Q3 remains on and Q4 remains off for the whole negative AC half cycle.

Figure 2 shows a typical digital-controlled PFC system block diagram with three major function blocks:

  • An ADC to sense the VAC voltage, VOUT voltage, and inductor current for conversion into digital signals.
  • A firmware-based average current-mode controller.
  • A digital PWM generator.

Figure 2 Block diagram of a typical digital-controlled PFC system with three major function blocks. Source: Texas Instruments

I’ll introduce these function blocks one by one.

The ADC

An ADC is the fundamental element for an MCU; it senses an analog input signal and converts it to a digital signal. For a 12-bit ADC with a 3.3-V reference, Equation 1 expresses the ADC result for a given input signal Vin as:

Conversely, based on a given ADC conversion result, Equation 2 expresses the corresponding analog input signal as:

To obtain an accurate measurement, the ADC sampling rate must follow the Nyquist theorem, which states that a continuous analog signal can be perfectly reconstructed from its samples if the signal is sampled at a rate greater than twice its highest frequency component.

This minimum sampling rate, known as the Nyquist rate, prevents aliasing, a phenomenon where higher frequencies appear as lower frequencies after sampling, thus losing information about the original signal. For this reason, the ADC sampling rate is set at a much higher rate (tens of kilohertz) than the AC frequency (50 or 60 Hz).
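The conversion math above can be sketched in C, assuming an ideal, linear 12-bit converter; the exact transfer function and rounding behavior are MCU-specific, so check the device datasheet:

```c
#include <stdint.h>

/* Ideal 12-bit ADC with a 3.3-V reference: the code produced for a
 * given input voltage, and the voltage recovered from a given code. */
#define ADC_VREF 3.3
#define ADC_FULL 4095.0   /* 2^12 - 1 */

uint16_t adc_code_from_volts(double vin)
{
    if (vin <= 0.0) return 0;                   /* clamp below range */
    if (vin >= ADC_VREF) return (uint16_t)ADC_FULL; /* clamp above range */
    return (uint16_t)(vin / ADC_VREF * ADC_FULL + 0.5); /* round to nearest */
}

double adc_volts_from_code(uint16_t code)
{
    return (double)code / ADC_FULL * ADC_VREF;
}
```

A mid-scale input of 1.65 V maps to code 2048, and code 4095 maps back to the 3.3-V full-scale reference.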

Input AC voltage sensing

The AC input is high voltage; it cannot connect to the ADC pin directly. You must use a voltage divider, as shown in Figure 3, to reduce the AC input magnitude.

Figure 3 Input voltage sensing that allows you to connect the high AC input voltage to the ADC pin. Source: Texas Instruments

The input signal to the ADC pin should be within the measurement range of the ADC (0 V to 3.3 V). But to obtain a better signal-to-noise ratio, the input signal should be as big as possible. Hence, the voltage divider for VAC should follow Equation 3:

where VAC_MAX is the peak value of the maximum VAC voltage that you want to measure.

Adding a small capacitor (C) with low equivalent series resistance (ESR) in the voltage divider can remove any potential high-frequency noise; however, you should place C as close as possible to the ADC pin.

Two ADCs measure the AC line and neutral voltages; subtracting the two readings in firmware yields the VAC signal.
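That firmware subtraction might look like the sketch below; the divider ratio and function names are illustrative placeholders, not taken from any specific design:

```c
#include <stdint.h>

/* Reconstruct the instantaneous AC input voltage from the line and
 * neutral ADC codes. Each channel is assumed to use the same resistor
 * divider ratio k = R2/(R1 + R2). */
double vac_from_adc(uint16_t line_code, uint16_t neutral_code,
                    double divider_ratio)
{
    const double vref = 3.3, full_scale = 4095.0;
    double v_line    = (double)line_code    / full_scale * vref;
    double v_neutral = (double)neutral_code / full_scale * vref;
    /* Undo the divider to recover the actual AC-side voltages,
       then subtract to get the signed VAC waveform. */
    return (v_line - v_neutral) / divider_ratio;
}
```

With a divider ratio sized for a 400-V peak (about 3.3/400), equal line and neutral codes give 0 V, and a positive code difference gives a positive instantaneous VAC.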

Output voltage sensing

Similarly, resistor dividers will attenuate the output voltage, as shown in Figure 4, then connect to an ADC pin. Again, adding C with low ESR in the voltage divider removes any potential high-frequency noise, with C placed as close as possible to the ADC pin.

Figure 4 Resistor divider for output voltage sensing, where C removes any potential high-frequency noise. Source: Texas Instruments

To fully harness the ADC measurement range, the voltage divider for VOUT should follow Equation 4:

where VOUT_OVP is the output overvoltage protection threshold.

AC current sensing

In a totem-pole bridgeless PFC, the inductor current is bidirectional, requiring a bidirectional current sensor such as a Hall-effect sensor. With a Hall-effect sensor, if the sensed current is a sine wave, then the output of the Hall-effect sensor is a sine wave with a DC offset, as shown in Figure 5.

Figure 5 The bidirectional Hall-effect current sensor output is a sine wave with a DC offset when the input is a sine wave. Source: Texas Instruments

The Hall-effect sensor you use may have an output range that is less than what the ADC can measure. Scaling the Hall-effect sensor output to match the ADC measurement range using the circuit shown in Figure 6 will fully harness the ADC measurement range.

Figure 6 Hall-effect sensor output amplifier used to scale the Hall-effect sensor output to match the ADC measurement range. Source: Texas Instruments

Equation 5 expresses the amplification of the Hall-effect sensor output:
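After the amplifier, the firmware's job is simply to remove the mid-scale DC offset and divide by the end-to-end gain. A minimal sketch, with illustrative values; the actual offset and volts-per-amp figure come from the sensor datasheet and the amplifier gain used:

```c
/* Recover the bidirectional inductor current from the amplified
 * Hall-effect sensor reading: subtract the mid-scale DC offset,
 * then divide by the end-to-end sensitivity in volts per amp. */
double hall_current_amps(double v_adc, double v_offset, double volts_per_amp)
{
    return (v_adc - v_offset) / volts_per_amp;
}
```

For example, with a 1.65-V offset and 0.1-V/A end-to-end sensitivity, a 2.65-V reading corresponds to 10 A, and a reading at the offset corresponds to zero current.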

Firmware-based average current-mode controller

As I mentioned earlier, because the digital controller MCU is a blank chip, you must write firmware to mimic the PFC control algorithm used in the analog controller. This includes voltage loop implementation, current reference generation, current loop implementation, and system protection. I’ll go over these implementations in Part 2 of this article series.

Digital compensator

In Figure 7, GV and GI are compensators for the voltage loop and current loop. One difference between analog control and digital control is that in analog control, the compensator is usually implemented through an operational amplifier, whereas digital control uses a firmware-based proportional-integral-derivative (PID) compensator.

For PFC, its small-signal model is a first-order system; therefore, a proportional-integral (PI) compensator is enough to obtain good bandwidth and phase margin. Figure 7 shows a typical digital PI controller structure.

Figure 7 A digital PI compensator where r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. Source: Texas Instruments

In Figure 7, r(k) is the reference, y(k) is the feedback signal, and Kp and Ki are gains for the proportional and integral, respectively. The compensator output, u(k), clamps to a specific range. The compensator also contains an anti-windup reset logic that allows the integral path to recover from saturation.

Figure 8 shows a C code implementation example for this digital PI compensator.

Figure 8 C code example for a digital PI compensator. Source: Texas Instruments

For other digital compensators such as PID, nonlinear PID, and first-, second-, and third-order compensators, see reference [1].

S/Z domain conversion

If you have an analog compensator that works well, and you want to use the same compensator in digital-controlled PFC, you can convert it through S/Z domain conversion. Assume that you have a type II compensator, as shown in Equation 6:

Replace s with bilinear transformation (Equation 7):

where Ts is the ADC sampling period.

Then H(s) is converted to H(z), as shown in Equation 8:

Rewrite Equation 8 as Equation 9:

To implement Equation 9 in a digital controller, store the two previous control outputs, u(n-1) and u(n-2), and the two previous errors, e(n-1) and e(n-2). Then use the current error e(n) and Equation 9 to calculate the current control output, u(n).
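The resulting difference equation has the standard two-pole, two-zero form. A minimal C sketch of its bookkeeping; the coefficient values are application-specific and follow from the bilinear transform in Equations 7 and 8:

```c
/* Two-pole, two-zero compensator in difference-equation form:
 *   u[n] = a1*u[n-1] + a2*u[n-2] + b0*e[n] + b1*e[n-1] + b2*e[n-2]
 * The struct holds the coefficients and the two-sample histories. */
typedef struct {
    double a1, a2, b0, b1, b2;  /* compensator coefficients */
    double u1, u2;              /* previous two outputs */
    double e1, e2;              /* previous two errors  */
} comp_2p2z_t;

double comp_2p2z_step(comp_2p2z_t *c, double e)
{
    double u = c->a1 * c->u1 + c->a2 * c->u2
             + c->b0 * e + c->b1 * c->e1 + c->b2 * c->e2;
    /* shift histories for the next sample */
    c->u2 = c->u1;  c->u1 = u;
    c->e2 = c->e1;  c->e1 = e;
    return u;
}
```

In a real loop, the output would also be clamped and anti-windup logic applied, as described for the PI compensator above.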

Digital PWM generation

A digital controller generates a PWM signal much like an analog controller, with the exception that a clock counter generates the RAMP signal; therefore, the PWM signal has limited resolution. The RAMP counter is configurable as up count, down count, or up-down count.

Figure 9 shows the generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation.

Figure 9 Generated RAMP waveforms corresponding to trailing-edge modulation, rising-edge modulation, and triangular modulation. Source: Texas Instruments

Programming the PERIOD register of the PWM generator determines the switching frequency. For up-count and down-count modes, Equation 10 calculates the PERIOD register value as:

where fclk is the counter clock frequency and fsw is the desired switching frequency.

For the up-down count mode, Equation 11 calculates the PERIOD register value as:

Figure 10 shows an example of using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC.

Figure 10 Using trailing-edge modulation to generate two complementary PWM waveforms for totem-pole bridgeless PFC. Source: Texas Instruments

Equation 12 shows that the COMP equals the current loop GI output multiplied by the switching period:

The higher the COMP value, the bigger the D.

To prevent shoot-through between the top switch and the bottom switch, adding a delay on the rising edge of PWMA and the rising edge of PWMB inserts dead time between PWMA and PWMB. This delay is programmable, which means that it’s possible to dynamically adjust the dead time to optimize performance.

Blocks in digital-controlled PFC

Now that you have learned about the blocks used in digital-controlled PFC, it’s time to close the control loop. In the next installment, I’ll discuss how to write firmware to implement an average current-mode controller.

Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.

Reference

  1. “C2000™ Digital Control Library User’s Guide,” TI literature No. SPRUID3, January 2017.

Related Content

The post How to design a digital-controlled PFC, Part 1 appeared first on EDN.

CEA-Leti launches multi-lateral program to accelerate AI with micro-LED data links

Semiconductor today - Wed, 11/19/2025 - 12:18
At the SEMICON Europa 2025 event in Munich, Germany (18–21 November), micro/nanotechnology R&D center CEA-Leti of Grenoble, France has launched a three-year, multi-lateral program on micro-LED technology for ultra-fast data transfer, with a particular focus on accelerating artificial intelligence (AI) growth. The lab-to-fab initiative draws on the institute’s expertise in micro-LED process technology. Beginning in January, it aims to engage manufacturers of micro-LEDs, optical fibers, photodiodes and interconnects, as well as chipmakers, system integrators, and hyperscalers...

AlixLabs raises €15m in Series A funding round to accelerate APS beta testing

Semiconductor today - Wed, 11/19/2025 - 12:02
Sweden-based AlixLabs AB (which was spun off from Lund University in 2019) has closed a €15m (~SEK165m) Series A funding round led by long-term investors Navigare Ventures, Industrifonden, and FORWARD.one, and joined by Sweden-based STOAF as well as Global Brain (an independent Japanese venture capital firm that manages strategic funds and invests in semiconductor startups), further strengthening AlixLabs’ international reach and industry partnerships...

onsemi authorizes $6bn share repurchase program

Semiconductor today - Wed, 11/19/2025 - 11:53
Intelligent power and sensing technology firm onsemi of Scottsdale, AZ, USA says that its board of directors has authorized a new share repurchase program of up to $6bn over the next three years, launching on 1 January 2026 after the previous $3bn authorization expires on 31 December. Under the prior authorization, onsemi has repurchased $2.1bn of its common stock over the last three years, in particular spending about 100% of the company’s free cash flow in 2025 for share repurchase...

Mojo Vision adds Dr Anthony Yu to advisory board

Semiconductor today - Wed, 11/19/2025 - 11:43
Mojo Vision Inc of Cupertino, CA, USA — which is developing and commercializing micro-LED display technology for consumer, enterprise and government applications — has appointed Dr Anthony Yu to its advisory board. The firm is applying its micro-LED technology to the development of high-speed optical interconnects for AI infrastructure. The addition of Yu to the board brings decades of silicon photonics leadership experience to support the firm’s product strategy and go-to-market execution...

Optical combs yield extreme-accuracy gigahertz RF oscillator

EDN Network - Wed, 11/19/2025 - 09:44

It may seem at times that there is a divide between the optical/photonic domain and the RF one, with the terahertz zone between them as a demarcation. If you need to make a transition between the photonic and RF worlds, you use electro-optical devices such as LEDs and photodetectors of various types. Increasingly, all-optical or mostly optical systems are performing functions in the optical band where electronic components can’t fulfill the needs, even pushing electronic approaches out of the picture.

In recent years, this divide has also been bridged by newer, advanced technologies such as integrated photonics where optical functions such as lasers, waveguides, tunable elements, filters, and splitters are fabricated on an optically friendly substrate such as lithium niobate (LiNbO3). There are even on-chip integrated transceivers and interconnects such as the ones being developed by Ayar Labs. The capabilities of some of these single- or stacked-chip electro-optical devices are very impressive.

However, there is another way in which electronics and optics are working together with a synergistic outcome. The optical frequency comb (OFC), also called optical comb, was originally developed about 25 years ago—for which John Hall and Theodor Hänsch received the 2005 Nobel Prize in Physics—to count the cycles from optical atomic clocks and for precision laser-based spectroscopy.

It has since found many other uses, of course, as it offers outstanding phase stability at optical frequencies for tuning or as a local oscillator (LO). Some of the diverse applications include X-ray and attosecond pulse generation, trace gas sensing in the oil and gas industry, tests of fundamental physics with atomic clocks, long-range optical links, calibration of atomic spectrographs, precision time/frequency transfer over fiber and through free space, and precision ranging.

Use of optical components is not limited to the optical-only domain. In the last few years, researchers have devised ways to use the incredible precision of the OFC to generate highly stable RF carriers in the 10-GHz range. Phase jitter in the optical signal is actually reduced as part of the down-conversion process, so the RF local oscillator has better performance than its source comb.

This is not an intuitive down-conversion scheme (Figure 1).

Figure 1 Two semiconductor lasers are injection-locked to chip-based spiral resonators. The optical modes of the spiral resonators are aligned, using temperature control, to the modes of the high-finesse Fabry-Perot (F-P) cavity for Pound–Drever–Hall (PDH) locking (a). A microcomb is generated in a coupled dual-ring resonator and is heterodyned with the two stabilized lasers. The beat notes are mixed to produce an intermediate frequency, fIF, which is phase-locked by feedback to the current supply of the microcomb seed laser (b). A modified uni-traveling carrier (MUTC) photodetector chip is used to convert the microcomb’s optical output to a 20-GHz microwave signal; a MUTC photodetector has response to hundreds of GHz (c). Source: Nature

But this simplified schematic diagram does not reveal the true complexity and sophistication of the approach, which is illustrated in Figure 2.

Figure 2 Two distributed-feedback (DFB) lasers at 1557.3 and 562.5 nm are self-injection-locked (SIL) to Si3N4 spiral resonators, amplified and locked to the same miniature F-P cavity. A 6-nm broad-frequency comb with an approximately 20 GHz repetition rate is generated in a coupled-ring resonator. The microcomb is seeded by an integrated DFB laser, which is self-injection-locked to the coupled-ring microresonator. The frequency comb passes through a notch filter to suppress the central line and is then amplified to 60 mW total optical power. The frequency comb is split to beat with each of the PDH-locked SIL continuous wave references. Two beat notes are amplified, filtered and then mixed to produce fIF, which is phase-locked to a reference frequency. The feedback for microcomb stabilization is provided to the current supply of the microcomb seed laser. Lastly, part of the generated microcomb is detected in an MUTC detector to extract the low-noise 20-GHz RF signal. Source: Nature

At present, this is not implemented as a single-chip device or even as a system with just a few discrete optical components; many of the needed precision functions are only available on individual substrates. A complete high-performance system takes a rack-sized chassis fitting in a single-height bay.

However, there has been significant progress on putting multiple functional blocks onto a single-chip substrate, so it wouldn’t be surprising to see a monolithic (or nearly so) device within a decade or perhaps just a few years.

What sort of performance can such a system deliver? There are lots of numbers and perspectives to consider, and testing these systems—at these levels of performance—to assess their capabilities is as much of a challenge as fabricating them. It’s the metrology dilemma: how do you test a precision device? And how do you validate the testing arrangement itself?

One test result indicates that for a 10-GHz carrier, the phase noise is −102 dBc/Hz at 100 Hz offset and decreases to −141 dBc/Hz at 10 kHz offset. Another characterization compares this performance to that of other available techniques (Figure 3).

Figure 3 The platforms are all scaled to 10-GHz carrier and categorized based on the integration capability of the microcomb generator and the reference laser source, excluding the interconnecting optical/electrical parts. Filled (blank) squares are based on the optical frequency division (OFD) standalone microcomb approach: 22-GHz silica microcomb (i); 5-GHz Si3N4 microcomb (ii); 10.8-GHz Si3N4 microcomb (iii); 22-GHz microcomb (iv); MgF2 microcomb (v); 100-GHz Si3N4 microcomb (vi); 22-GHz fiber-stabilized SiO2 microcomb (vii); MgF2 microcomb (viii); 14-GHz MgF2 microcomb pumped by an ultrastable laser (ix); and 14-GHz microcomb-based transfer oscillator (x). Source: Nature

There are many good online resources available that explain in detail the use of optical combs for RF-carrier generation. Among these are “Photonic chip-based low-noise microwave oscillator” (Nature); “Compact and ultrastable photonic microwave oscillator” (Optics Letters via ResearchGate); and “Photonic Microwave Sources Divide Noise and Shift Paradigms” (Photonics Spectra).

In some ways, it seems there’s a “frenemy” relationship between today’s advanced photonics and the conventional world of RF-based signal processing. But as has usually been the case, the best technology will win out, and it will borrow from and collaborate with others. Photonics and electronics each have their unique attributes and bring something to the party, while their integrated pairing will undoubtedly enable functions we can’t fully envision—at least not yet.

Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.

Related Content

The post Optical combs yield extreme-accuracy gigahertz RF oscillator appeared first on EDN.

Pages

Subscribe to Кафедра Електронної Інженерії aggregator