A “free” ADC

Despite the increasing availability and declining cost of on-chip analog peripherals, the humble PWM DAC retains its appeal as a “free” DAC that can repurpose an uncommitted DIO pin and counter/timer module, add a simple low pass ripple filter, and become an (albeit imperfect but still useful) analog output.
Okay, but what about the other end of the analog/digital/analog signal chain? How close can we come to an (albeit imperfect but still useful) zero-cost ADC? Figure 1, with its two transistors, four resistors, and one capacitor, is my “free” (< $0.50 in singles) ADC.
Figure 1 Circuit of a “free” (approximately) ADC.
Here’s how it works.
Tri-stateable I/O pin DIO1, when programmed for high impedance, allows the top end of C1 to charge through R1 and acquire input voltage Vin, as shown in the ACQUIRE phase of Figure 2.
Figure 2 Acquire, convert, and calibrate phases of “free” ADC.
The minimum duration of the DIO1 = high-Z acquisition phase is determined by N (the desired number of bits of conversion precision) and the R1C1 time-constant.
Minimum acquisition interval = R1C1 × ln(2^N)
For example, for the RC values shown and N = 8, the minimum interval is ~1.5 ms; for N = 12, it would be ~2 ms. While C1 is charging, Q1’s forward-biased, saturated base provides a low-impedance (~1 Ω) path to ground with an offset (Vq1b) of ~650 mV. The acquisition phase ends when DIO1 is reprogrammed as a 0 output. This drives the top end of C1 to ground and Vq1b negative, turning Q1 off. With Q1 off, DIO2 goes to 1, which is used to enable a microcontroller counter/timer peripheral to begin counting clock cycles (e.g., at 1 MHz) and thus measure the duration of Q1 = OFF.
Q1 stays off (and counting therefore continues) until C1’s negative charge dissipates and Vq1b returns to ~650 mV. The time that elapses (and therefore the number of cycles counted) while this happens is directly proportional to Vin and inversely proportional to current source Q2’s collector current.
C1 recharge interval = C1 × Vin / Q2ic
Q2ic = (5 V - Vq3e) / R3 × Q2alpha ≈ 430 µA
So that…
Counting interval = 51 µs/V and conversion count = 51 × Fclk(MHz) × Vin,
approximately!
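For a quick sanity check of these numbers, here is a short Python sketch. The R1 and C1 values are back-calculated assumptions chosen to reproduce the ~1.5 ms and ~51 µs/V figures quoted above; the actual values are those shown in Figure 1.
import math

# Assumed component values, back-calculated from the figures quoted in the text
R1 = 12.3e3      # ohms (assumption)
C1 = 22e-9       # farads (assumption)
Q2_IC = 430e-6   # amperes, Q2 collector current from the text
F_CLK = 1e6      # hertz, counter/timer clock

def min_acquisition_time(n_bits):
    # Minimum acquisition interval = R1 * C1 * ln(2^N)
    return R1 * C1 * math.log(2 ** n_bits)

def counting_interval(vin):
    # C1 recharge interval = C1 * Vin / Q2ic
    return C1 * vin / Q2_IC

print(min_acquisition_time(8))         # ~1.5e-3 s for N = 8
print(min_acquisition_time(12))        # ~2.25e-3 s for N = 12
print(counting_interval(1.0))          # ~51e-6 s per volt
print(counting_interval(5.0) * F_CLK)  # ~256 counts at 5 V full scale, 1 MHz clock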
R2 is provided to avoid Q2 saturation, and R4 serves as Q1’s collector load and DIO2’s pullup. Combining acquisition (1.5 ms) and conversion (256 µs for 8 bits with a 1 MHz clock) times predicts a maximum conversion rate of ~560 samples/s. Okay so far.
But how to cope with that “approximately” thing? It covers a multitude of “free” circuitry limitations, including the tempcos of inexpensive resistors and capacitors and of transistor bias voltages and current gains, so simply ignoring it won’t do.
Fortunately, as suggested by the right side of Figure 2, this “free” ADC incorporates a self-calibration feature.
To self-calibrate, DIO1 is programmed as an output, set to 1 to charge C1, then set to 0 to generate a counting interval and a calibration count, Ncal. Subsequent conversion results are then scaled as…
Vin = 5V * conversion_count / Ncal
…which corrects for most of the error sources listed above. But unfortunately, not quite all.
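In code form (a trivial continuation of the hypothetical sketch above), the calibrated result is just a ratio:
def vin_from_count(count, ncal, vref=5.0):
    # Vin = Vref * conversion_count / Ncal, where Ncal is the count obtained
    # with C1 charged to the 5 V rail during the calibration phase
    return vref * count / ncal

print(vin_from_count(128, 256))  # 2.5 V, for example, if Ncal = 256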
One that remains is an uncorrected zero offset due to the minimum Vq1b excursion needed to turn Q1 off and generate a non-zero counting interval. The least Vin required to do so is approximately 10 mV, which is 0.01/5 = 1/500, or roughly ½ LSB of an 8-bit conversion with a 5 V full scale.
Which leaves just one obvious source of potential inaccuracy: the 5V supply. Logic supplies are not the best choice for an analog reference and the accuracy of this “free” ADC will ultimately depend on how good the one used actually turns out to be.
Of course, the classic PWM DAC suffers from exactly the same logic-supply-limitation malady, but this hasn’t negated its utility or popularity.
Which sort of takes the topic of “free” analog peripherals back to where it began. Albeit imperfect—but still useful?
Stephen Woodward’s relationship with EDN’s DI column goes back quite a ways. In all, a total of 64 submissions have been accepted since his first contribution was published in 1974.
Related Content
- Three paths to a free DAC
- The Shannon decoder: A (much) faster alternative to the PWM DAC
- Fast PWM DAC has no ripple
- Cancel PWM DAC ripple with analog subtraction but no inverter
Aspiring European Nvidia aims DPUs at embedded AI

The “Technological Maturation and Demonstration of Embedded Artificial Intelligence Solutions” call for projects under the “France Relance 2030 – Future Investments” initiative has selected the IP-CUBE project led by Kalray for accelerating AI and edge computing designs. The IP-CUBE project aims to establish the foundations of a French semiconductor ecosystem for edge computing and embedded AI designs.
These AI solutions, deployed in embedded systems as well as local data centers, also known as edge data centers, aim to process data closer to where it’s generated. For that, embedded AI and edge computing designs require new types of processors and semiconductor technologies to process and accelerate AI algorithms and address new technological challenges relating to high performance, low power consumption, low latency, and robust security.
The €36.7 million IP-CUBE project is led by Kalray, and its Dolomites data processing unit (DPU) processor is at the heart of this design initiative. DPU is a new type of low-power, high-performance programmable processor capable of processing data on the fly while catering to multiple applications in parallel. Other participants in the IP-CUBE project include network-on-chip IP supplier Arteris, security IP supplier Secure-IC, and low-power RISC-V component supplier Thales.
“In the current geopolitical context, the semiconductor industry has become essential, both in terms of production tools and technological know-how for designing processors,” said Eric Baissus, CEO of Kalray. “France and Europe need production plants, but they also need companies capable of designing the processors that will be manufactured in these plants.”
Kalray claims it’s the only company in France and Europe to offer DPUs. Its DPU processors and acceleration cards are based on the company’s massively parallel processor array (MPPA) architecture. The French supplier of DPU processors is also part of other collaborative projects such as the European Processor Initiative (EPI).
Arteris, another participant in the IP-CUBE project, has recently licensed its high-speed network-on-chip (NoC) interface IP to Axelera AI, the Eindhoven, Netherlands-based supplier of AI solutions at the edge. The Dutch company’s Metis AI processing unit (AIPU) is equipped with four homogeneous AI cores built for complete neural network inference acceleration. Each AI core is self-sufficient and can execute all standard neural network layers without external interactions.
The four cores—integrated into an SoC—encompass a RISC-V controller, PCIe interface, LPDDR4X controller, and a security arrangement connected via a high-speed NoC interface. Here, Arteris FlexWay interconnect IP uses area-optimized interconnect components to address a smaller class of SoC.
The above developments highlight some of the AI-related advancements in Europe and a broad realization that a strategic and open semiconductor ecosystem should be built around AI applications. Here, smaller SoC designs targeted at edge computing and embedded AI will be an important part of this technology undertaking.
Related Content
- SoC Interconnect: Don’t DIY!
- What is the future for Network-on-Chip?
- Multiprocessing #5: Dataplane Processor Units
- Nvidia DPU brings hardware-based zero trust security
- Nvidia Presents the DPU, a New Type of Data Center Processor
The 2023 Google I/O: It’s all about AI, don’t cha know

As longstanding readers may already recall, I regularly cover the yearly Apple Worldwide Developers Conference, with the 2023 version scheduled for next month, June 5-9 to be exact. Stay tuned for this year’s iteration of my ongoing event analysis! Beginning this year, EDN is also kicking off a planned yearly coverage cadence from yours truly for Google’s developer conference, called Google I/O (or is it parent company Alphabet’s? I’ll use the more recognizable “Google” lingo going forward in this writeup). Why, might you ask? Well:
- Google’s Linux-derived Android and ChromeOS operating systems are best known for their implementations, respectively, in the company’s and partners’ smartphones and tablets, and in netbooks (i.e., Chromebooks) and nettops (Chromeboxes). But the OSs’ open-source foundations also render them applicable elsewhere. This aspiration is also the case for the Linux-abandoning but still open-source Fuchsia O/S sibling (successor?).
- Although Google’s been developing coprocessor ICs ever since the Pixel 2 smartphone generation’s Visual Core, with added neural network processing capabilities in the Pixel 4’s Neural Core, the company significantly upped its game beginning with the Pixel 6 generation with full-featured Tensor SoCs, supplanting the application processors from Qualcomm used in prior Pixel phone generations. And beginning in 2016, Google has also developed and productized multiple generations of Tensor Processing Units (TPUs) useful in accelerating deep learning inference and (later also) training functions, initially for the “cloud” and more recently expanding to network edge nodes.
- Speaking of deep learning and other AI operations, they unsurprisingly were a regularly repeated topic at Wednesday morning’s keynote and, more generally, throughout the multi-day event. Google has long internally developed various AI technologies and products based on them—the company invented the transformer (the “T” in “GPT”) deep learning model technique now commonly used in natural language processing, for example—but productizing those research projects gained further “code red” urgency when Microsoft, in investment partnership with OpenAI, added AI-based enhancements to its Bing search service, which competes with Google’s core business. AI promises, as I’ve written before, to revolutionize how applications and the functions they’re based on are developed, implemented and updated. So, Google’s ongoing work in this area should be of interest even if your company isn’t one of Google’s partners or customers.
AI everywhere
Let’s focus on that last bullet first in diving into the details of what the company rolled out this week. AI is a category rife with buzzwords and hype, which a planned future post by me will attempt to dissect and describe in more detail. For purposes of this piece, acting among other things as a preamble, I’ll try to keep things simple. The way I look at AI is by splitting up the entire process into four main steps:
- Input
- Analysis and identification
- Appropriate-response discernment, and
- Output
Take, for example, a partially-to-fully autonomous car in forward motion, in front of which another vehicle, a person or some other object has just seemingly appeared:
- Visible light image sensors, radar, LiDAR, IR and/or other sensing technologies detect the object’s presence and discern details such as its size, shape, distance, speed (and acceleration-or-deceleration trend) and path of travel.
- All of this “fused” sensor-generated data is passed on to a processing subsystem, which determines what the object is, including whether it’s a “false positive” (glare or dirt on a camera lens, for example, or fog or other environmental effects).
- That same or a subsequent processing subsystem further down the “chain” then determines what the appropriate response, if any, should be.
- Possible outputs of the analysis and response algorithms, beyond “nothing”, are actions such as automated takeover of acceleration, braking and steering to prevent a collision, and visual, audible, vibration and other alerts for the vehicle driver and other occupants.
Much media attention of late is focused on large language models (LLMs), whether text-only or audible in conjunction with speech-to-text (voice input) and text-to-speech (output) conversion steps. This attention is understandable, as language is an already-familiar means by which we interact with each other, and therefore is also a natural method of interacting with an AI system.
Note, however, that LLMs represent only steps 1 and 4 of my intentionally oversimplified process. While you can use them as a natural-language I/O scheme for a search engine, as Microsoft has done with OpenAI’s ChatGPT in Bing, or as Google is now beta-testing, you can also use an LLM input in combination with generative AI to create a synthesized still image, video clip, music track (such as MusicLM, which Google announced this week) or even code snippet (Google’s just-announced Codey and Studio Bot, for example), whose output paths include data files, displays and speakers.
This brief-but-spectacular discernment will, I hope, help you sort out the flurry of AI-based and enhanced technology and product announcements that Google made this week. One of the highlights was version 2 of PaLM (Pathways Language Model), the latest version of the company’s core LLM, which has seemingly superseded its BERT predecessor. When Microsoft announced its OpenAI partnership and ChatGPT-based products at the beginning of this year, it didn’t immediately reveal that they were already running on the latest GPT-4-based version of ChatGPT; OpenAI’s GPT-4 unveil came more than a month later.
Similarly, although Google announced its Bard AI-based chatbot back in early February, it waited until this week (in conjunction with revealing service enhancements and the end of the prior public-access waitlist) to reveal that Bard was PaLM 2-based. And like Microsoft, Google is adding LLM- and more general AI-based enhancements to its Workspace office suite, branding them as Duet. Bigger picture, there’s the Labs, where Google will, going forward, be rolling out various AI-based “experiments” for the public to try before they “go gold” (or are eventually canned), including the aforementioned search enhancements.
A new mainstream smartphone
Roughly a half-year after launching each new high-end Pixel smartphone offering, Google unveils a more cost-effective and somewhat feature-reduced mainstream “a” derivative. The company’s followed this pattern ever since 2019’s Pixel 3a, and “a” Pixel phones have been my “daily drivers” ever since. The Pixel 7a is the latest-and-greatest, coming in at $500, roughly $100 lower-priced than the Pixel 7, and normally I’d be planning on transitioning to it once my Pixel 4a 5G times out and falls off the supported-device list later this year…but Ars Technica also makes compelling ongoing arguments for the Pixel 6a (which I also own and planned on using as a backup), which continues to be sold and whose price has been cut by $100 to $350. Now that Google’s using its own Tensor SoCs, as I mentioned earlier, the company promises security updates for five years, and the Pixel 6a was launched only a year ago. The biggest arguments in favor of the Pixel 7 line, ironically, are that its cellular radio subsystem is seemingly less buggy than with Pixel 6 precursors, and that its fingerprint-unlock scanning also seems more reliable.
A tablet revisit
I was quite enamored with my Google-branded, ASUS-developed and Android-based Nexus 7 tablet of a half-decade-plus back, and apparently I wasn’t the only one. Its multiple successors, including the ChromeOS-based Pixel Slate, didn’t replicate its success, but Google’s trying again to recapture its past glory with the new Pixel Tablet. It’s based on the same Tensor G2 SoC that powers the entire Pixel 7 line, including the just-introduced 7a mentioned previously in this piece, and Google curiously seems to be positioning it as (among other things) a successor to its Home (now Nest) Hub products, along with an optional $129.99 docking station (complete with speakers and charging capabilities). The screen size (11”) is heftier than I’d prefer bedside but spot-on elsewhere in the home. And at $500, it’s priced competitively with Apple iPad alternatives. If at first you don’t succeed, try, try again? We shall see if this time’s the charm.
Google’s reveal of its first foldable smartphone, the aptly named Pixel Fold, is bittersweet on a personal level. A bit more than a year ago, I told you about my experiences with Microsoft’s also-Android-based first-generation Surface Duo, for which initial reviews were quite abysmal but which improved greatly thanks to software evolutions.
Unfortunately, things returned to “bad” (if not “worse”) shortly thereafter. There have been no significant software-experience enhancements since the Android 12L update, only Android security patches in rough cadence with their releases by Google. To date, specifically, both Surface Duo generations have yet to receive otherwise-mainstream Android 13; ironically, this week Google rolled out the second public beta of Android 14. And even the security patches are increasingly getting delayed; March’s didn’t show up until month end, and April’s didn’t arrive until just a couple of days ago (May, mind you), after Google released the May security updates! The dual-screen Surface Duo 3 was reportedly canceled in January, and more generally, rumor has it that the team within Microsoft has been gutted and essentially disbanded.
With that as a backdrop, what do I think of a Samsung-reminiscent foldable with a $1,800 (starting) price tag? Google probably won’t sell many of them at that price, but the company has arguably got deep enough pockets that it doesn’t need to do so at least for this initial go-around. You had to know, after all, that when Google announced it was developing a widescreen variant of its showcase Android O/S, it wasn’t doing so just out of the goodness of its own heart for its licensees: it had product plans of its own. Specifics include the same Tensor G2 SoC as that found on the Pixel 7 smartphone line and the Pixel Tablet, a 7.6” (unfolded) 1840 x 2208-pixel OLED display, and 12 GBytes of system DRAM along with both 256 GByte and 512 GByte flash memory storage options. Microsoft’s Surface Duo misfires aside, I remain bullish on the foldable form factor (and remain amused that I am, given my historical fondness for small-screen smartphones), and once again I’m seemingly not alone.
But wait, there’s more
I’ve hit what I think are the highlights, but there’s plenty more that came out of Shoreline Amphitheater this week; Googlers themselves even came up with a list of 100 things they announced. I’ll briefly touch on just one more: the way-cool (IMHO) Project Starline hologram-based virtual conferencing booth system, announced two years ago, has now been significantly slimmed down and otherwise simplified.
With that, I’ll close here in order to avoid crossing the 2,000-word threshold which would undoubtedly ensure that my colleague and friend Aalyia would never speak to me again (just kidding…I think…). What else caught your eyes and ears at Google I/O this year? Let me know in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Apple’s latest product pronouncements: No launch events, just news announcements
- Google’s Pixel Buds Pro earbuds dissected: Noise is (finally) actively detected
- A tech look back at 2022: We can’t go back (and why would we want to?)
- 2023: A technology forecast for the year ahead
HDR image sensors aim to make cars safer

Hyperlux image sensors from onsemi deliver a 150-dB high dynamic range (HDR) with LED flicker mitigation (LFM) across the full automotive temperature range. Targeting advanced driver assistance systems (ADAS), Hyperlux sensors are also expected to provide a smooth transition to Level 2+ driving automation, which requires the driver to only take over when alerted by the technology.
Hyperlux CMOS digital image sensors feature a 2.1-µm pixel size and serve both sensing and viewing camera applications. The first two devices in the Hyperlux family are the AR0823AT and AR0341AT. The AR0823AT is an 8.3-Mpixel, 1/1.8-in. sensor, while the AR0341AT is a 3-Mpixel sensor in a 1/3.6-in. format.
The simultaneous HDR and LFM capabilities of these devices enable them to capture high-quality images under extreme lighting conditions without sacrificing low-light sensitivity. LFM also ensures that pulsed light sources do not appear to flicker, avoiding flicker-induced machine vision issues. Further, onsemi claims that the automotive image sensors consume up to 30% less power and occupy up to a 28% smaller footprint than competing devices.
The AR0823AT and AR0341AT are now sampling to early access customers. Learn more about Hyperlux technology here.
5G module elevates AIoT terminal connectivity

Fibocom’s SC151 5G smart module enhances AI-based applications with premium 5G NR and Wi-Fi 6E connectivity and high-performance processing. The module is powered by a Qualcomm QCM4490 octa-core processor with a 3GPP Release 16-compliant 5G NR sub-6 GHz modem offering global carrier support.
The SC151 can be used in a wide range of 5G AIoT scenarios, including industrial handhelds, point-of-sale devices, body-worn cameras, and push to talk over cellular (PoC) systems. This smart module extends 5G connectivity by supporting downlink 4×4 MIMO, uplink 2×2 MIMO, and roaming under both 5G SA and NSA network architectures, allowing backward compatibility with 4G/3G bands. It also enables 2.4-GHz/5-GHz WLAN and Wi-Fi 6E communications, plus dual band simultaneous (DBS) operation to increase overall capacity and performance.
Along with a rich set of interfaces, the SC151 leverages multi-constellation GNSS to improve position accuracy in mobile scenarios and simplify product design. The module is equipped to run the Android 13 operating system and subsequent OS upgrades.
Engineering samples of the SC151 5G smart module will be available starting in Q2 2023. A datasheet was not available at the time of this announcement.
SiC gate driver extends EV driving range

A 20-A isolated gate driver from TI, the UCC5880-Q1, enables powertrain engineers to build more efficient traction inverters and maximize EV driving range. SPI programmability allows the device to drive nearly any SiC MOSFET or IGBT, while integrated monitoring and protection features reduce design complexity.
To improve system efficiency and increase the driving range of electric vehicles, designers can use the UCC5880-Q1 to vary the gate-drive strength in real time. This can be done in steps between 20 A and 5 A and can reduce SiC switching power losses by up to 2%. As a result, drivers get up to 7 more miles of range per battery charge, which is equivalent to over 1000 additional miles per year for someone who charges their vehicle three times per week.
On-chip diagnostics of the UCC5880-Q1 include built-in self-test (BIST) for protection comparators, gate threshold voltage measurement for power device health monitoring, and fault alarm and warning outputs. The part also packs an active Miller clamp and a 10-bit ADC for monitoring purposes.
The UCC5880-Q1 comes in a 10.5×7.5-mm, 32-pin SSOP. Preproduction quantities of the automotive-grade, ISO26262-compliant driver are available now, only on TI’s website, with prices starting at $5.90 in lots of 1000 units. A UCC5880-Q1 evaluation module is available for $249.
Soft-switching controller SoCs allow remote updating

Pre-Flex soft-switching motor/inverter controller SoCs from Pre-Switch are now reprogrammable to permit updating products during development or in the field. These AI-based chips have also been outfitted with an embedded digital oscilloscope for transistor-level analysis.
Pre-Flex ICs contain all the AI algorithms required for soft switching across all operating voltages, load conditions, and temperatures. Adaptations are made on a cycle-by-cycle basis to minimize losses and maximize efficiency. Aimed at EVs and other e-mobility applications requiring high inverter efficiency levels across a wide load range, Pre-Switch technology achieves efficiency of 99.57% peak and 98.5% at 5% load—both measured at a switching frequency of 100 kHz. The result is an increased EV range of 5% to 12%.
The Pre-Flex IC’s embedded digital oscilloscope, Deep View, gives users 12 channels and a sample rate of 160 MSPS to analyze switching timing. Traces can be recorded and exported to analyze system performance. If there are any issues, Deep View enables developers and even remote program managers at Pre-Switch to understand why, so that actions can be taken.
“We are continually developing the AI and infrastructure that enables us to deploy true soft-switching and hence achieve such outstanding efficiency performance,” said Bruce Renouard, Pre-Switch CEO. “By incorporating a remote boot code on the chip, we can update the AI at any point in the EV or other product’s lifetime, ensuring that performance is always optimal.”
To enable engineers to employ Pre-Flex AI technology, Pre-Switch offers the CleanWave inverter reference system and the PDS-2 development system with Deep View.
650-V SiC diodes boost efficiency

Fifth-generation GeneSiC silicon carbide diodes from Navitas boast low forward voltage and ‘low built-in voltage biasing’ (‘low knee’) for high efficiency across all loads. GExxMPS06x series diodes are intended for demanding data center, industrial motor-drive, solar, and consumer applications ranging from 300 W to 3000 W.
The merged-PiN Schottky (MPS) design of the GeneSiC devices combines the best features of both PiN and Schottky diode structures. This design produces a forward voltage drop of just 1.3 V, high surge current capability, and minimized temperature-independent switching losses. Proprietary thin-chip technology further reduces forward voltage and improves thermal dissipation for cooler operation.
According to Navitas, the GExxMPS06x series of MPS diodes provides forward current ratings from 4 A to 24 A. The devices will be available in low-profile surface-mount QFN packages for the first time. Additional packaging options include D2-PAK, TO-220, and TO-247. With a common-cathode configuration, the TO-247-3 package affords flexibility for high power density and bill-of-material reduction in interleaved PFC topologies.
GeneSiC parts are now available to qualified customers. Please contact sicsales@navitassemi.com for more information.
Artificial intelligence for wireless networks
The AI revolution is here. With the release of AI applications like ChatGPT, we are seeing firsthand the power and potential of deep neural networks (DNNs) and machine learning (ML). While ChatGPT is a language model that is trained to generate human-like text, the same underlying concept will change all aspects of technology during the next decade. For example, one of the biggest appeals of AI is its ability to optimize complex scenarios with large amounts of data. Wireless systems have also been growing in complexity for the last decade and struggle to process the vast amount of data that is produced, making them an ideal candidate for AI and ML.
AI in 5G Networks
As 5G matures, AI and ML are already being introduced for study by the 3rd Generation Partnership Project (3GPP), the standardization body that maintains cellular specifications. Applications of AI under consideration are primarily in the air interface, including network energy saving, load balancing, and mobility optimization. Potential use cases in the air interface are so numerous that a small subset has been selected for study in the upcoming 3GPP Release 18, including channel state information (CSI) feedback, beam management, and positioning. It is important to note that 3GPP is not developing AI / ML models. Rather it seeks to create common frameworks and evaluation methods for AI/ML models being added into different air interface functions [1].
Outside of 3GPP and the air interface, the O-RAN ALLIANCE is exploring how AI/ML can be used to improve network orchestration. For example, the O-RAN ALLIANCE has a unique feature in its architecture called the RAN Intelligent Controller (RIC) that is designed to host AI/ML optimization applications (Figure 1). The RIC can host xApps, which run in near-real-time, and rApps, which run in non-real-time. AI-based xApps for improving spectral and energy efficiency, and rApps for network orchestration, already exist today. More xApps/rApps and applications using AI/ML in the RIC will become available as the O-RAN ecosystem grows and matures.
Figure 1 An illustration of the O-RAN network architecture with the RIC that hosts AI/ML optimization applications that can either run in near-real-time or non-real-time. Source: Keysight
AI-native 6G Networks
6G is in its infancy, but it is already clear that AI/ML will be a fundamental part of all aspects of future wireless systems. On the network side, the term “AI native” is used widely in the industry despite not being officially defined. One way of looking at these AI-native networks is to extrapolate from the diagram above (Figure 1) based on current trends of virtualization and disaggregation of the Radio Access Network (RAN). Each block of the network is likely to contain AI/ML models that will vary from vendor to vendor and application to application (Figure 2).
Figure 2 A basic diagram of the O-RAN 6G network where each block in that system incorporates AI models that might vary based on vendor or application. Source: Keysight
AI-native networks can also mean networks that were built to natively run AI/ML models. Consider the design flow below (Figure 3). In traditional 5G networks, the air interface is made up of different processing blocks, each designed by humans. In 5G Advanced, each block will leverage ML to optimize a specific function. In 6G, it’s possible that AI will design the entire air interface using DNNs.
Figure 3 A design flow showing the progression from AI infused to AI native networks [2]. Source: Keysight
AI / ML power consumption optimization based on real-time operating conditions
Building on the idea that AI/ML can be used to improve network management orchestration, 6G looks to leverage AI/ML to solve optimization challenges. For example, AI could be used to optimize the power consumption of the network by turning on and off components based on real-time operating conditions. Today, xApps and rApps accomplish this at a base station level by turning on and off power-hungry components like power amplifiers when they are not in use. However, the ability of AI to quickly solve challenging compute problems and analyze large amounts of data opens the possibility of optimizing our networks at a larger, city-wide, or national scale. Entire base stations could be turned off during low use and cells could be reconfigured to service real-time demand in an energy optimized way using the least possible resources. It is not possible to reconfigure base stations and city-wide networks in this way today—it takes days or weeks to reconfigure and test any changes made to network configurations. Though, advances in different AI techniques remain promising and are top of mind for infrastructure providers.
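As a purely illustrative toy, and not an O-RAN API or a trained model, the sketch below shows the kind of decision such an energy-saving application has to make; in a real deployment an AI/ML model would replace the hard-coded thresholds with learned policies and a far richer view of the network.
# Toy rule-based stand-in for an energy-saving rApp: pick lightly loaded cells
# that can be put to sleep, provided the remaining cells can absorb their traffic.
def cells_to_sleep(loads, sleep_threshold=0.1, max_load=0.8):
    # loads: dict of cell_id -> normalized load (0..1)
    candidates = [c for c, l in loads.items() if l < sleep_threshold]
    active = {c: l for c, l in loads.items() if c not in candidates}
    spare = sum(max_load - l for l in active.values())
    offload = sum(loads[c] for c in candidates)
    return candidates if active and offload <= spare else []

print(cells_to_sleep({"cell_a": 0.05, "cell_b": 0.4, "cell_c": 0.6}))  # ['cell_a']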
Reshaping next-generation cellular networks with the AI revolution
Wireless networks will not wait for 6G to start leveraging the power of AI. Active research is happening across the entire ecosystem to develop new models and integrate them into the wireless systems of both today and tomorrow. However, these models are still new and need to be evaluated for rigor and reliability. Properly training AI models on diverse data sets, quantifying their improvement over traditional techniques, and defining new test methodology for AI enabled modules are critical steps that must be taken as this new tech is adopted. As AI models and testing best practices mature, there is no doubt that AI will revolutionize wireless communications in the next 5-10 years.
Sarah LaSelva is the director of 6G marketing at Keysight Technologies and has a background in microwave and millimeter wave technology. She has over a decade of experience in test and measurement concentrating on wireless communications, both studying and promoting the latest wireless technologies.
Related Content
- Keysight’s technology predictions for 2023—company-wide insights
- The aspects of 6G that will matter to wireless design engineers
- When less is more: Introducing 5G RedCap
- Creating more energy-efficient mobile networks with O-RAN
- Sub-terahertz research in 6G wireless: Where should you start?
Sources
- X. Lin, “An Overview of 5G Advanced Evolution in 3GPP Release 18,” in IEEE Communications Standards Magazine, vol. 6, no. 3, pp. 77-83, September 2022, doi: 10.1109/MCOMSTD.0001.2200001.
- J. Hoydis, F. A. Aoudia, A. Valcarce and H. Viswanathan, “Toward a 6G AI-Native Air Interface,” in IEEE Communications Magazine, vol. 59, no. 5, pp. 76-81, May 2021, doi: 10.1109/MCOM.001.2001187.
Low-cost driver for thermoelectric coolers (TECs)

Thermoelectric coolers (TECs), also called Peltier coolers, are used for stabilizing optical benches, filters, lenses, laser diodes, photodiodes, and other parts in electro-optical systems. TECs are often integrated into off-the-shelf parts like pump laser diode modules [1].
Typical TECs used in these modules have two connections; the maximum voltage across them is around ±2 to 3 V and the maximum current through the TEC is around ±1.5 to 3 A. Depending upon the direction of the current, heat is moved from one side of the TEC to the other. The absolute value of the current determines the cooling or heating power.
The impedance of the TEC looks roughly like a resistor with a value of 1-2 Ω. In order to build a temperature controller, one needs a temperature sensor (typically an NTC), a temperature controller (either analog or digital), and a two-quadrant current (or voltage) driver that can deliver the aforementioned currents of up to ±3 A at voltages of up to ±3 V. The output of the temperature controller steers the input of the driver stage. (It is also possible, and makes sense at least for small thermal loads, to operate the TEC steadily in cooling mode only and to use a heater resistor for heating against the cold side [2].) This driver stage is the subject of this article.
Some ten years ago, a TEC driver was built, at least in principle, from two buck converters working in an H-bridge mode. The TEC was connected between the half-bridges, so that the voltage difference between the half-bridge outputs determined the direction of the TEC current. Examples for this are the LTC1923 [3] and the DRV592 [4].
These older TEC drivers used two storage inductors, as shown in Figure 1. Inductors of this kind are relatively pricey, lossy, heavy and they consume quite a lot of board space, so it made sense to look for a solution that allowed one to save a storage inductor.
Figure 1 Classical TEC power stage with two buck converters in an H-bridge arrangement, using two power inductors.
One of these solutions comprises an H-bridge, where one side operates in a pure switch mode and the other side in a linear mode. Only the switch mode side requires an inductor. Some parts that use this principle are the MP8833 [5] and the ADN8834 [6].
Another possibility is to use a buck-boost converter. Instead of connecting the TEC to the output of the converter, the TEC is connected between input and output. Depending on the magnitude of the output voltage, the difference between the input voltage and output voltage can be positive or negative, and the current through the TEC flows in one direction or the other.
Examples of suitable buck-boost converters are the TPS63020 (Application Report SLVA677 [7]) or the TPS63070 (PMP9796 Test Report TIDUCA8 [8]). Here, too, only one storage inductor is required.
The following circuit also requires only one storage inductor and does not require any special components whatsoever. In this circuit, a commercially available adjustable step-down converter produces an adjustable output voltage by intervening in the feedback network.
The direction of current flow in the TEC is controlled by a downstream H-bridge, in the middle of which the TEC is located: when the top left switch and the bottom right switch are on and the other two switches of the H-bridge are off, then the current flows from left to right (or vice versa from right to left when the other diagonal pair of switches is on or off).
But there is a problem: The buck converters that are common today—which can be found in practically all products and are manufactured in massive quantities by countless companies—have minimum output voltages of around 0.6 to 1.25 V. The more modern types usually have the lower minimum output voltages.
To regulate a TEC precisely, however, you need voltages down to 0 V. There are two different approaches to bridge this gap between the minimum output voltage of the buck converter and 0 V:
- Set the minimum output voltage of the buck converter and use the H-bridge as a switch to set an average voltage on the TEC between 0 V and the minimum buck converter voltage via pulse width modulation (PWM) with adjustable duty cycle. (Contrary to some statements that you can read about TECs over and over again, PWM operation does not harm a TEC as long as you operate it within its specifications. In our case, the TEC is only operated with PWM in the lower current range anyway. It remains important that the PWM frequency is sufficiently high so that the TEC itself does not experience any significant thermal expansion or contraction during the on and off times of the pulses and thus becomes mechanically fatigued.) The low-pass effect of the heat capacity of the temperature-controlled object ensures that the temperature remains smooth. If larger TEC voltages are requested, the PWM operation of the half-bridge is switched off and the half-bridge again functions as a simple pole-changing circuit.
- One can also set the minimum output voltage of the step-down converter and use a low-dropout linear regulator (LDO), i.e., a linear regulator with a low minimum voltage difference between input and output, to bridge the gap down to 0 V. The TEC is thus supplied with an unpulsed DC voltage in all operating states. For higher output voltages, the LDO is set to minimum dropout and causes only minimal losses. (Both approaches are illustrated in the short sketch after this list.)
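Here is a minimal Python sketch of how a controller might translate a requested TEC voltage into drive settings for the two approaches. It is purely illustrative; the 1 V minimum converter output voltage U1(min) is the same example value used in the calculations below, not a property of any particular part.
def drive_settings(v_tec_target, u1_min=1.0):
    # Returns (converter output U1, H-bridge PWM duty, LDO drop, polarity)
    v = abs(v_tec_target)
    if v <= u1_min:
        u1 = u1_min
        duty = v / u1_min      # Version 1: PWM the H-bridge at U1(min)
        ldo_drop = u1_min - v  # Version 2: drop the difference across the LDO
    else:
        u1, duty, ldo_drop = v, 1.0, 0.0  # converter output tracks the target directly
    polarity = 1 if v_tec_target >= 0 else -1
    return u1, duty, ldo_drop, polarity

print(drive_settings(0.5))   # (1.0, 0.5, 0.5, 1)
print(drive_settings(-2.0))  # (2.0, 1.0, 0.0, -1)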
Version 1
The first version is extremely simple and requires no additional power components (Figure 2). The existing H-bridge transistors are also used for PWM operation. If the components had ideal properties, then this variant would have an efficiency of 100% due to the pure switching operation of the components.
Figure 2 Version 1 of the circuit described in the text, using only one power inductor and PWM control of the H-bridge.
Because the PWM frequencies at the bridge do not have to be particularly high—at least compared to the switching frequency of the buck converter—you do not need high drive powers at the gates of these transistors either. PWM frequencies in the kHz range should be sufficient for most applications! (However, very small thermal masses such as laser diodes could be influenced (e.g., a laser diode could be frequency modulated) by small temperature fluctuations up to the kHz range. Of course, whether this is relevant depends on the application.)
One could object that the pulsed currents cause high interference voltages on the input voltage and require extra buffer capacitors or filters there. However, a small calculation shows that the situation is relatively harmless: with a minimum step-down converter output voltage of, for example, 1 V and a TEC impedance of 1 Ω, peak currents of 1000 mA flow through the TEC. At the 12-V input of the step-down converter, however, the current is only 83.3 mA (ideally, 1 V/12 V × 1000 mA = 83.3 mA)!
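The same ideal power-balance estimate in a couple of lines of Python, for reference (assuming a lossless converter, as the text does):
def ideal_input_current(v_out, i_out, v_in=12.0):
    # Ideal (lossless) buck converter: Vin * Iin = Vout * Iout
    return v_out * i_out / v_in

print(ideal_input_current(1.0, 1.0))  # ~0.083 A, i.e., 83.3 mA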
However, in very sensitive applications, for example when stabilizing laser diodes or photodiodes, a pulsed current of a few hundred milliamps can induce interference in the photocurrents or in the laser currents, despite the twisted-pair cable to the TEC. In these sensitive applications it is much better to drive the TEC with a DC voltage or DC current over the entire operating range. (Whether a TEC should be operated with a constant current or a constant voltage is always a point of discussion. In electro-optical applications, one tends to deal with small temperature differences between the hot and cold sides, which means that the Seebeck effect is small, and the series resistance dominates. In these applications, the difference between current and voltage control is therefore negligibly small.)
Version 2
In the second version, an LDO is used in addition to the H-bridge (Figure 3). This LDO is inserted between the output of the buck converter and the supply voltage of the H-bridge and provides an appropriate voltage drop to allow the TEC to be DC powered over its entire operating range.
Figure 3 Version 2 of the circuit described in the text, using only one power inductor and an LDO for bridging the gap at low voltages.
If the TEC voltage is to be higher than the minimum output voltage U1(min) of the buck converter, then the LDO is set to minimum voltage drop and the TEC voltage is set via the control voltage Vctrl1. The sign of the TEC voltage is determined by the H-bridge and is set via a (digital) control input.
If, on the other hand, the TEC should have a lower voltage than U1(min), the LDO comes into action and generates the necessary voltage drop between U1 and U2 up to U1(min). In contrast to version 1, this voltage drop will no longer be lossless because of the LDO.
However, as the following example calculation shows, the actual losses are quite small. So, which operating state produces the greatest loss in the LDO? That’s easy to answer, because the greatest loss in a source driving a resistive load occurs when the source resistance is equal to the load resistance. With a minimum buck regulator output voltage of, for example, 1 V and a TEC impedance of 1 Ω, the greatest loss occurs when the LDO and the TEC each drop 0.5 V.
In this case, a TEC current of 0.5V/1 Ω = 0.5 A flows, which of course also flows through the LDO. The power loss in the LDO is then 0.5 V x 0.5 A = 250 mW. This is a value that is easy to master.
In practice, however, step-down converters with smaller minimum output voltages will be used. For instance, with a converter that has a minimum output voltage of 0.8 V, the maximum power loss in the LDO with a 1 Ω TEC is only 0.4 V x 0.4 V / 1 Ω = 160 mW, and with a converter with U1(min) = 0.6 V it is only 0.3 V x 0.3 V / 1 Ω = 90 mW.
In total, however, the maximum power dissipation of the circuit is not determined by the LDO, but by the buck converter: If you use a good buck converter with 90% efficiency at an output current of, say, 3 A, then a 1 Ω TEC is a power load of 3A x 3A x 1 Ω = 9 W and the power loss in the step-down converter is 9 W x (1/0.9 – 1) = 1 W which is converted into heat. At a TEC voltage of 3.3 V, the LDO is in short-circuit mode and ideally does not produce a voltage drop. If you assume a practical Rds(on) value of the LDO MOSFET of 20 mΩ, for example, then the losses are 3A x 3A x 20 mΩ = 180 mW and thus contribute relatively little to the total losses.
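These loss estimates are easy to reproduce; the short Python sketch below uses the same example numbers as the text (the 1 Ω TEC, 90% buck efficiency, and 20 mΩ LDO Rds(on) are the text’s assumptions, not measured values).
def ldo_worst_case_loss(u1_min, r_tec):
    # Greatest LDO dissipation: LDO and TEC each drop half of U1(min)
    i = (u1_min / 2) / r_tec
    return (u1_min / 2) * i

def buck_loss(i_tec, r_tec, efficiency=0.9):
    # Loss in the buck converter for a given TEC load current
    p_out = i_tec ** 2 * r_tec
    return p_out * (1 / efficiency - 1)

def ldo_on_state_loss(i_tec, rds_on=0.020):
    # Conduction loss of the fully turned-on LDO MOSFET
    return i_tec ** 2 * rds_on

print(ldo_worst_case_loss(1.0, 1.0))  # 0.25 W, as in the text
print(ldo_worst_case_loss(0.6, 1.0))  # 0.09 W
print(buck_loss(3.0, 1.0))            # ~1 W
print(ldo_on_state_loss(3.0))         # 0.18 W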
The measured curve of the power loss of a test circuit is shown in Figure 4.
Figure 4 Comparison of the power losses caused by the buck converter and the total power losses of the whole circuit of Version 2 at different load currents.
The blue curve shows the power dissipation of the entire circuit and the red curve shows the power dissipation of the buck converter alone. Because of the resistive load, the overall shapes of the curves are roughly parabolic.
What is striking is the small difference between the two curves. The blue curve is not far above the red curve, which means that the circuit parts (LDO and H-bridge) connected to the buck converter cause relatively small losses. The relative difference is only slightly larger in the lower left part: There you can see a second, smaller parabolic piece superimposed, whose apex is at 0.3 V and which points downwards. This parabola stems from the previously described power loss of the LDO, which comes into action in the range below 0.6 V.
The schematic of the test circuit on which these measurements were made is shown in Figure 5.
Figure 5 Schematic of the Version 2 test circuit that was used for measuring the loss curves in Figure 4.
In this example circuit, an attempt was made to keep the number of components as small as possible. For this reason, a four-pack (quad) of MOSFETs instead of individual transistors was used for the H-bridge. An integrated MOS driver (U5) was used to drive the gates.
Buck Converter
The buck regulator can be selected from a plethora of products. Suitable buck regulators with integrated MOSFETs for a 12 V input voltage and 3 A output current are available from around US$/€ 0.20. The matching storage inductors are usually significantly more expensive.
A step-down regulator module with an integrated storage choke was used in the test circuit (MUN12AD03-SH), which is a bit more expensive than a construction from individual parts, but also much more compact because the choke and the silicon are placed on top of each other. There are modules from different manufacturers with compatible footprints.
In order to influence the buck regulator via a control voltage, an intervention in the feedback network must be made. The easiest way to do this is with an additional resistor that is connected to the controller’s feedback input on one side and to a DAC output on the other. The necessary resistor values can be found by solving a system of equations, or you can just get them with the following few lines of Python:
Ufb = 0.6    # Feedback reference voltage of the regulator (0.6 V here, consistent with the output below)
Uomin = 0.6  # Minimum output voltage of the regulator at a DAC voltage of Udmax
Uomax = 3.3  # Maximum output voltage of the regulator at a DAC voltage of Udmin
Udmin = 0.0  # Minimum output voltage of the DAC
Udmax = 2.7  # Maximum output voltage of the DAC
D = -Ufb*(Uomax + Udmax - Udmin - Uomin) + Uomax*Udmax - Uomin*Udmin
Rfb1 = D / (Ufb*(Udmax - Udmin))
Rfb3 = D / (Ufb*(Uomax - Uomin))
Rfb2 = 100/3.5  # Scale factor for all resistances
Rfb1 *= Rfb2
Rfb3 *= Rfb2
print("Rfb1 = ", Rfb1)
print("Rfb2 = ", Rfb2)
print("Rfb3 = ", Rfb3)
The variable settings in the code above give the following output:
Rfb2 = 28.571428571428573
Rfb3 = 100.0
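With the resistors from the script in place, the converter’s output voltage is a linear function of the DAC voltage between the two design points. A small hypothetical helper, reusing the variable names from the script above, computes the DAC voltage needed for a given target output:
def dac_voltage_for_output(u_out, Uomin=0.6, Uomax=3.3, Udmin=0.0, Udmax=2.7):
    # Linear mapping implied by the design points above:
    # Udac = Udmax gives Uout = Uomin, and Udac = Udmin gives Uout = Uomax
    return Udmax - (u_out - Uomin) * (Udmax - Udmin) / (Uomax - Uomin)

print(dac_voltage_for_output(0.6))  # 2.7 V (minimum converter output)
print(dac_voltage_for_output(3.3))  # ~0 V (maximum converter output)
print(dac_voltage_for_output(2.0))  # ~1.3 V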
LDO
The LDO consists of only 3 components: U2, RG and QLDO. The operational amplifier ensures that the output voltage of the LDO mirrors the control voltage. If it can no longer do this because a higher output voltage is required than is available at the drain of QLDO, then the output of the operational amplifier goes to the full operating voltage (in this case to 12 V) and switches QLDO on completely.
The requirements for U2 are low: It should be an op-amp whose maximum operating voltage range is at least as high as the maximum operating voltage of the circuit; the input voltage range should reach down to GND and the maximum output voltage should be as close as possible to the operating voltage so that QLDO can become as low-impedance as possible.
The demands on the MOSFET QLDO are also manageable: The Rds(on) should be significantly smaller than the Rds(on) of the sync FET in the buck regulator so that the overall efficiency does not suffer significantly. The maximum allowable gate-source voltage should be at least as high as the maximum operating voltage. If this were not the case, you would need additional protection of the gate with a Zener diode. The IRFHS8242 was used in the test circuit, whose Rds(on) is specified as 13 mΩ at 10 V gate-source voltage. Don’t forget to look at the SOA curves, but with maximum drain-source voltages of 0.6 V and currents of 3 A, this is well within the safe range.
H-bridge
The H-bridge can be built from single transistors or as shown in Figure 5, from a four-pack of MOSFETs in one package (DMHT3006LFJ). To control the gates, you can use a MOS driver or simply 2 smaller MOSFETs (e.g., 2N7002, BSS138) with relatively high-impedance pull-up resistors. If you connect the gate of one MOSFET to the drain of the other MOSFET, you also have an inverter function for “free”, which prevents all four bridge MOSFETs from conducting at the same time. The H-bridge does not need to be switched quickly; it only serves to select the polarity.
If you want to keep the circuit flexible and use as many alternative components as possible, then of course the use of individual semiconductors in the most common packages would generally be preferable to the use of rather less common components such as QUAD MOSFETs or MOS drivers. Sufficiently high values of the maximum permissible gate-source voltages must be ensured.
Control of the output stage
The power stage can be controlled with two DAC outputs and a digital output for switching from heating to cooling mode. If you have a microcontroller with a sufficient number of DAC outputs, such as an STM32G4x4 (7×12-bit DACs) or ADuCM320 (8×12-bit DACs), then direct control via these DACs makes sense. If you are short on DACs, you can also use the clamp circuit with U3 and U4: The control voltage goes to U3, is limited there, inverted by U4 and goes to the buck converter. At the same time, the control voltage also goes to pin 3 of U2, the LDO input. The transition between fully driven inactive LDO and LDO operation is then automatic and seamless. A potentiometer was also included in the test circuit to check the full operating range without connecting a DAC.
Figure 6 shows the layout of the test circuit. The TEC driver with the specified components needs a PCB area of 14 x 10 mm². The clamp circuit mentioned above is at the upper left of this board and could of course be made with smaller components if necessary.
Figure 6 Version 2 test board. The actual power stage fits into an area of 14×10 mm² in the center of the PCB.
Lower operating voltages (<12V):
If you want to use the circuit for lower operating voltages or if you want to generate TEC voltages that are closer to the operating voltages, then it must be ensured that both the op-amp for the LDO and the MOS drivers are supplied with a sufficiently high voltage, to entirely turn on the upper three NMOS transistors.
For example, when operating at 5 V, you should provide a voltage doubler that can power the op-amp and MOS driver. Alternatively, you can also use a PWM output of the microcontroller and thus generate a 3.3 V square-wave signal to generate a voltage of about 8 V via a booster capacitor and two diodes.
Larger operating voltages (>12V):
With larger operating voltages, care must be taken not to exceed the maximum permissible gate-source voltages of the MOSFETs. This also applies to lower voltages, because the dielectric strength of the gates of modern MOSFETs is sometimes well below +/-20 V. In this case, Zener diodes should be used to limit the voltage.
Higher currents:
The circuit is easily scalable to higher currents. It should be sufficient to use a higher rated buck-converter and lower-resistance MOSFETs if one wants to go to twice the maximum current or even more than that. In some cases, especially when one wants to go to the limits, it may be wise to measure the TEC current with some high-side or low-side current monitor in order to limit the current precisely to the value given in the data sheet of the TEC.
Christian Rausch is a director of R&D for TOPTICA AG, a laser photonics company that develops high-end laser systems for scientific and industrial applications.
Related Content
- Thermoelectric-cooler unipolar drive achieves stable temperatures
- Increase the efficiency of a low-noise analog TE cooler driver
- Achieve precision temperature control with TEC Seebeck-voltage sampling
- Use thermoelectric coolers with real-world heat sinks
- Active heat removal cools electronics hot spots
- Thermal electric coolers offer advantages for thermal testing
References
- “460 mW Fiber Bragg Grating Stabilized 980 nm Pump Modules.” Lumentum, n.d. Web. https://www.lumentum.com/en/products/460-mw-fiber-bragg-grating-stabilized-980-nm-pump-modules.
- Rausch, Christian. “LED Driver Controls Thermoelectric Cooling.” Electronic Design, 6 May 2008. Web. https://www.electronicdesign.com/power-management/article/21792076/led-driver-controls-thermoelectric-cooling.
- Analog Devices. LTC1923 – High Efficiency Thermoelectric Cooler Controller, from https://www.analog.com/en/products/ltc1923.html.
- Texas Instruments. DRV592 – High-Efficiency H-Bridge (Requires External PWM), from https://www.ti.com/product/DRV592.
- Monolithic Power Systems. MP8833 – 1.5A Thermoelectric Cooler Controller, from https://www.monolithicpower.com/en/mp8833.html.
- Analog Devices. ADN8834 – Ultracompact 1.5 A Thermoelectric Cooler (TEC) Controller, from https://www.analog.com/en/products/adn8834.html.
- Neuhaeusler, Juergen. “Application Report: Low-Power TEC Driver.” Texas Instruments, 2014, https://www.ti.com/lit/an/slva677/slva677.pdf.
- Neuhaeusler, Juergen, and Michael Helmlinger. “PMP9796 – 5V Low-Power TEC Driver Reference Design.” Texas Instruments, 2016, https://www.ti.com/lit/ug/tiduca8/tiduca8.pdf.
Obtaining a patent in a corporate environment

There have been many articles on patents that answer questions such as “How do I file a patent?”, “Should I patent my idea?”, or “How much does a patent cost?” All these are good to read if you’re working for yourself and have a great idea, but most of us work for a company, and filing for a patent within a company yields different questions and a different path.
(Let me put a caveat here before we begin: I am not a patent lawyer, patent attorney, or a patent agent. I’m just an engineer who has submitted about 30 patents within a corporate environment. Also, I have spent a period working for a patent attorney, offering technical assistance and being tutored on writing patents. I am presenting information from my experiences, but not giving legal advice.)
Listing inventors on the patent and taking notes from the start
So, let’s talk about this path from idea to patent. It all starts with a problem or project you have been assigned to work on. You and the team you are working with have meetings, work on individual tasks, and share ideas back and forth. It may be that one of your tasks does not seem to fit typical solutions so you think “if I just modify the usual way of solving this it will fit our requirements”. Or perhaps you come up with an entirely new solution to your problem. Perhaps you may now have a patentable idea. This new idea may elicit some discussions with other teammates. This is the time where you should keep good notes—one note being the date you thought of the general idea of a solution. Also, you need to keep notes of discussions you have with others and record suggestions they have made to fine-tune your solution. This is important since, if other people add ideas to your solution, they should be added to the patent submission. If someone contributes significantly to the solution and is not included on the submission, there may be hard feelings.
Patent law provides guidance on who should be listed as inventors on the patent. It essentially requires that all actual inventors be named in a patent application. These inventors are defined as individuals who contributed to the conception of at least one claim (we’ll discuss claims later). On the flip side, others may have helped along the way but will not be included, such as someone who builds a device your group invented. Perhaps more awkward is that your managers are not included, unless they contributed to the conception of one of the claims. I have seen managers automatically added to the submission, but this is not legally correct.
Filling out a submission form and invention disclosure review
Ok, so you have your idea documented and inventors identified; now you can fill out a submission form for a patent request and send it to your company’s General Counsel, Corporate Counsel, or Patent Counsel (or other keeper of the company’s IP). This is not a formal legal document; it is a form your company created to capture information so they can take the next steps toward obtaining a patent. This submission form, often referred to as an invention disclosure, will record things such as: the invention name, the inventors’ names and contact information, the first date of the concept, a description of the invention, any useful documents and drawings you may have, as well as some other pieces of information. The description should be a short, simple description of the overarching ideas of how your invention works and why it is useful. Don’t think of it as the legal description of the patent–that comes later and will not need to be written by you.
Some invention disclosure forms have a section for discussion of prior art (similar solutions that are already known or published). Be careful here. The legal community is split on the idea of the inventor searching for prior art. You should discuss this with your General Counsel before doing any searches.
The invention disclosure will then be reviewed. Your company may have a group of employees that reviews patent submissions and provides technical insight into the validity of the invention as well as its relative worth to the company. If your company does not have such a group, I would suggest you work on getting one formed; they are very valuable in filtering ideas. Your submission will then pass through this group, which will report its opinion back to the General Counsel (you may or may not be informed of the review committee’s result).
Handing it off to the patent attorney
Let’s assume the concept made it through the patent review committee and the General Counsel concurs; the General Counsel will then pass your submission to someone to actually write and submit the patent. This could be a patent attorney, patent lawyer, or patent agent. Typically, this would be from an outside private agency although large corporations may have these individuals in-house.
(Some definitions: A patent lawyer can write the patent and deal with the patent office to get your patent through but can also defend it in court if needed. A patent agent is not a lawyer but can help write a patent and work with the patent office to get it granted.)
An informal meeting discussing patent with the attorney
You may now have to wait for a few weeks or months before you get involved again. But then, after reading your invention disclosure and documents, the patent lawyer (or whoever your company uses) will set up a meeting with you to discuss your invention. This is usually informal, and you may find it interesting and, dare I say it, fun. Assuming your company selected a good patent agency, you’ll be pleasantly surprised to see that the patent lawyer will be very knowledgeable in engineering. The ones I have worked with typically had master’s degrees in electrical engineering along with their law degrees and usually fully grasped the concepts of the invention. This first meeting is usually less than an hour long, and by the end the patent lawyer has a firm grasp of the invention; you may be asked to create a small document or two (usually a drawing or sketch such as a circuit diagram, flow diagram, or mechanical sketch). Note that the lawyer will not ask for a working model, as one is typically not required to obtain a patent. The patent lawyer and associates will now create the patent text and any drawings needed. This will typically be a nonprovisional utility patent, which covers things like a product, process, or machine. A provisional application, by contrast, is a short-term (1-year) filing that preserves your priority date.
Review of the patent document
Next comes the review of the patent document. The first draft will be sent to you and all inventors for review and will require some hours of work. This part, although interesting, can be very difficult. The text can be very slow to read as it is written in legalese (when was the last time you read documents with phrases such as “being one of a plurality of available…” or “according to specific examples of instant disclosures, embodiments are directed to or involve a…” in a sentence). My first piece of advice, if you’re not used to reading patents, is to read a few pages at a time and then take a break. It takes a while to get through it. (Typical patents run from about 20 to 40 pages, but I had one that ran 70 pages, and the longest US patent was over 3,000 pages.) You, and the other inventors, are reading to check that what is written fully, and correctly, describes your invention. But you are also doing a general document review, checking for spelling, grammar, etc. As you read, mark up the text and drawings with changes or questions for the lawyer. Remember, you need to check each part of the patent.
A patent typically contains the following parts:
- Title: This is a brief and concise statement of the invention. It seems simple, but my experience is that the patent attorney has some tactical reasons for the naming.
- Background or Overview: This does not describe your invention but describes the general field of the invention. It’s used by the patent office to classify the patent.
- Brief description of the drawings: Here each drawing, now referred to as Figure, gets a very short description.
- Summary of the invention: Just what it sounds like, a few paragraphs or pages describing the invention.
- Claims: This section defines the scope of the invention. This is the legal heart of the patent. Everything else hangs on the items in the claims. This is the first thing the lawyer will write.
- Detailed description: This is the meaty part of the patent as it is many pages long, detailing all aspects of your invention. It is built on the claims by expanding each claim into a “slightly” more readable style. Note that this section makes heavy reference to the drawings; whenever a drawing item is mentioned in the text, it is followed by its item number from the drawing, which will appear in bold. Each claim should be covered in the detailed description.
- Abstract: This is usually a short description of the first claim.
- Drawings: Any drawings that are needed to aid in describing the invention. These can be mechanical drawings, schematics, system block diagrams, flowcharts, or anything that is helpful. Items of interest in each drawing will be marked with a number and line or arrow. Item numbering is typically created by using the drawing (figure) number as the hundreds and the item numbered within those hundreds. As an example, Figure 3 may have items marked 300 through 399.
Proofreading the claims
Since claims are the heart of the patent, let’s talk a little bit more about them. The claims section is actually the first section you should read as everything is based on these. Note that these are, by far, the hardest part to read. They are written in full unadulterated legalese and, just to make it more obfuscated, each claim is required to be only one sentence. I think lawyers only see this as a challenge and they can create extremely long sentences filled with “and”, “including:”, and semicolons. The longest claim in a US patent is over 17,000 words–all in one sentence.
An important thing to note is that there are two types of claims, independent and dependent. The first claim is always independent, and all subsequent claims can be independent or dependent. As you might have figured out, dependent claims follow and add some more detail to another claim. You can tell a dependent claim as it usually starts with something like “The system of claim 9 wherein…”
You may also see what appears to be repetition in the claims, but careful reading will find a single word change. An example is that some patents involve a device and also a method of using that device. In this case you may see a set of claims describing the device and a very similar set of claims describing a method.
Proofreading the description of the drawings
When proofreading the “Description of the Drawings” you may want to have the drawings on hand as well, because this is one of the areas in which I usually find errors. Oftentimes, the numbers on the items in the drawings don’t match the numbers in the text within the “Description of the Drawings”: a number may be wrong, may be missing, or may simply not exist in the drawing. Also check that each item on the drawings is used in at least one place in the “Description of the Drawings”.
You may notice the lawyer has broadened the scope of your invention. For example, if your invention is a sensor for monitoring the metal frame of an automobile, the document may describe the sensor’s use in a truck, airplane, ship, spacecraft, etc. This should be a welcome result as it makes the patent more valuable.
Final review of the patent with the attorney
After you and the other inventors complete proofreading the patent document, you’ll have a meeting with the lawyer to discuss all the changes. (Note that lawyers like to reference areas of the document using paragraph markers such as “[27]” for paragraph 27. Don’t reference page numbers.) After you reach an agreement on the changes, the lawyer will go back, modify the document, and send it back for another review. Hopefully the changes will be marked so you don’t have to reread the entire document. If you find more errors, go through the loop again. After a good text is agreed on, you are done for a long time. The patent document will be filed, and a lot of background things will happen as the lawyer works with the patent examiner to get it granted and issued. For the patents I worked on, the time from filing to the patent being granted has run from 18 months to just under 5 years, so be patient. You won’t hear much for some time. The patent application will be published 18 months after it is filed, so you’ll find it in a search. This does not mean it is granted or that it will be granted; it is just a standard part of the procedure.
Eventually, when everything works out, your patent will be granted. You may read online that only around 50% of filings make it to a granted patent. I believe this number is much higher for corporate filings. I have submitted over 30 patent applications and have never had one rejected, although the lawyer earned his money in a few cases.
After the patent is granted, your company will have you sign a document making them the assignee or owner of the patent but you will continue being listed as the inventor on the patent. Hopefully your company rewards you for your effort. My experience varied from $1 to sign it over to $4000 and a nice plaque.
So now you know what to expect. Go ahead and give it a try; you may find it interesting, your company will be happy to be building a patent portfolio, and patents on your resume are not a bad thing.
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- Simple GPS Disciplined 10MHz Reference uses Dual PWMs
- Time for a second 3D printer in the lab
- The patent process: preparing your application
- What were they thinking: A crazy patent makes money
Developing and verifying 5G designs: A unique challenge

In general, the largest system-on-chip (SoC) designs in industries such as automotive, artificial intelligence/machine learning (AI/ML), and graphics/computing/storage share complexity and size, measured in billions of transistors. They also reveal a multitude of common functional blocks duplicated several times, including central processing units (CPUs), graphic processing units (GPUs), digital signal processing units (DSPs), arithmetic logic units (ALUs), memory blocks, and standard protocol interfaces. Specific functionality required by end-use applications is mostly implemented in software and sometimes via limited specialized hardware.
That is not the case for 5G infrastructure designs and, soon, 6G designs. While 5G SoCs include common processing units such as CPUs, DSPs, and AI/ALUs, most of the fabric is made of sizeable, complex algorithmic blocks implementing a robust set of unique, non-repeated communication functions. They also combine digital blocks with unusually large analog blocks. Furthermore, they employ software to customize their deployment with service providers globally to comply with different standards.
5G wireless communications challenges
The 5G standard became necessary to support ubiquitous, worldwide wireless communications beyond the user-to-user communication that drove all preceding standards: 2G, 3G, 4G, and a few intermediate versions. The need was driven by the broadest range of user-to-machine and machine-to-machine wireless communications, including Internet of Things (IoT) devices, autonomous vehicles/drones, industrial, medical, and military robots, cloud-based AI/ML services, edge applications, and more.
As expected, the dramatic expansion in applications produced an explosion of data traffic, particularly in urban areas. Figure 1 compares the growth in Exabytes from 2010 to 2030.
Figure 1 Dramatic expansion in applications produces an explosion of data traffic in Exabytes from 2010 to 2030. Source: International Telecommunications Union-Radio Communication Sector (ITU) Report ITU-R M.2370-0
The combination of escalating data traffic with faster data communication, compounded by significantly shorter latency, led to a perfect storm of challenges never before seen in wireless technology.
Among the challenges, 5G bandwidth increased more than 20-fold to more than 20 gigabits per second, operating frequencies expanded from a few gigahertz to hundreds, latency dropped from tens of milliseconds (ms) to 1 ms, and the number of concurrent users jumped by orders of magnitude. Combined, these extreme attributes called for upgrading the existing communications technologies and, to a larger extent, for new design verification/validation tools and methodologies.
To address the daunting specifications, the industry devised a series of new technologies. Among the most relevant are:
- Millimeter waves to support many simultaneous high-bandwidth channels each transmitting and receiving immense amounts of data, and to reduce or eliminate the lag in wireless video calls or fast responses in driverless vehicles to avert accidents
- Beamforming to optimize transmission power and increase network capacity
- Massive multiple-input-multiple-output (mMIMO) radio to increase the efficiency of a network while reducing transmission errors
- Carrier aggregation to augment the efficiency of a communication
- Small cells to create a denser infrastructure to ensure broad and consistent service
The long list of features and capabilities encapsulated in the 5G standard forced a ground-up redesign of the 5G infrastructure implemented in the baseband unit (BBU). Figure 2 portrays nine major challenges that the 5G infrastructure had to address.
Figure 2 Nine main issues hinder the development of the 5G infrastructure. Source: Marvell Technology
Brief description of 5G base station
To size the 5G BBU development task, one can peek into any literature describing a 5G base station. While in all previous wireless generations the BBU consisted of two monolithic blocks—core network (CN) and radio access network (RAN)—describing the 5G BBU requires an acronym soup of functional blocks.
First off, the architects “disaggregated” hardware and software into three main blocks: front-haul (FH), mid-haul (MH), and back-haul (BH). Then, they moved sections of the core network of older wireless generations into the FH and split the FH into a central unit (CU) and distributed units (DUs), plus several radio units (RUs) now served by arrays of antennas (see Figure 3).
Figure 3 A 4G base station diagram looks different than the 5G BBU, a completely new design. Source: Siemens EDA
The disaggregated 5G Central-RAN (C-RAN) allows for installing CUs, DUs, and RUs remotely from each other, leading to several advantages:
- Decentralized units provide a cloud-based RAN that covers a much larger area around cities than the BBUs used by 4G networks.
- 5G RU can drive up to 64×64 MIMO antennas that can support beamforming and achieve massive increase in bandwidth at significantly lower latency.
- Multiple DUs work together to dynamically allocate resources to RUs based on network conditions.
- The wireless RU network connects wireless devices similar to access points or towers.
Other benefits of the C-RAN network include the ability to flexibly pool resources, reuse infrastructure, simplify network operations and management, and support multiple technologies. They lower energy consumption, avoid rebuilding the transport network, shrink capital and operational expenses, and dramatically reduce total cost of ownership (TCO).
Because the 5G network structure is more heterogeneous and self-organizing, it’s easier to evolve to meet changing market conditions and new opportunities. A 5G network can be customized as needed in cities and rural areas, leading to new hardware/software configurations and use cases available to cellular operators.
Architecting the 5G BBU presented the opportunity to break the traditional monopoly benefiting a few wireless providers and to open the market to 5G BBU developers. Today, any wireless design house can develop any 5G BBU component as long as it complies with the interface protocols.
To this end, a large alliance of telecommunication industry members sponsored the open radio access network (O-RAN) standard, an open standard defined by the O-RAN Alliance. The goal of the standard is to utilize the same physical cables used within an Ethernet network.
5G wireless BBU verification challenges
The complexity of the architecture is rooted in the requirement to support a broad range of wireless functionality and several different types of use cases like centimeter waves, millimeter waves and dynamic changes such as carrier configurations, among others. Not to mention, compatibility with all previous wireless standards.
The entire base station is physically constrained to a single PCB populated with a few large SoC devices in either 7-nm or 5-nm process technology. These sizes are in the ballpark of the largest silicon chips found today in the semiconductor industry.
The multiple BBU chips must be verified in isolation from one another; this is critical because some of the chips are sourced from different design houses. Non-standard interfaces between sections of a design, and between chips at the full system level, hinder stimulus generation. Each scenario must be verified for more than 10 ms of real time, a long stretch for verification that undermines the bottom-up, divide-and-conquer approach, thwarts exhaustive verification, and rules out accurate gate-level simulation, which lacks the necessary performance.
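To put that 10-ms requirement in perspective, a back-of-the-envelope estimate shows why software RTL simulation alone cannot carry the load; the design clock rate and the simulator and emulator throughputs below are illustrative assumptions, not figures from the article.

```python
# Rough estimate of the wall-clock time needed to simulate 10 ms of real time.
# All numbers are illustrative assumptions, not measured values.
design_clock_hz = 1.0e9        # assumed 1-GHz design clock
real_time_s = 10e-3            # 10 ms of real time to verify
cycles = design_clock_hz * real_time_s   # 10 million design clock cycles

rtl_sim_cps = 100.0            # assumed RTL simulator throughput (cycles/s) for a very large SoC
emulator_cps = 1.0e6           # assumed emulator throughput (cycles/s)

print(f"Design cycles to cover:   {cycles:,.0f}")
print(f"RTL simulation, per run:  ~{cycles / rtl_sim_cps / 3600:.0f} hours")
print(f"Emulation, per run:       ~{cycles / emulator_cps:.0f} seconds")
```

Under these assumptions, a single 10-ms scenario would take on the order of a day per run in RTL simulation versus seconds on an emulator, which is why emulation and hybrid approaches dominate at this scale.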
Given such a demanding load, the time allocated for debugging falls in the ballpark of one or two weeks. The recent focus on energy efficiency and security issues intensifies the verification effort, pushing it to new limits.
Since the lifespan of 5G infrastructure products ranges from 10 to 15 years, design verification does not stop after pre-silicon and post-silicon validation are complete, or even when products are delivered. It is not uncommon to get customer-support requests several years after the product was shipped, such as a request to debug new software features when the original development team has long since been reassigned to new tasks.
Dr. Lauro Rizzatti is a verification consultant and industry expert on hardware emulation.
Author’s Note: Special thanks to Dr. Ashish Darbari, CEO of Axiomise; Anil Deshpande, associate director at Samsung Semiconductor India; Axel Jahnke, Nokia’s SoC R&D manager; Oren Katzir, Real Intent’s vice president of applications engineering; and Herbert Taucher, head of research group at Siemens AG. They participated in “5G Chip Design Challenges and their Impact on Verification,” a DVCon Europe 2022 panel moderated by Gabriele Pulini, product marketing and market development at Siemens EDA. Each graciously took the time to talk with me after the panel to offer more insight into the challenges of 5G chip design and verification.
Related Content
- Design and verify 5G systems, part 1
- Design and verify 5G systems, part 2
- Qualcomm Teases Its Latest 5G Chipset – The Snapdragon 888
- IC design: A short primer on the formal methods-based verification
- Signal Conversion Chip Startup Targeting 5G and Radar Raises €10.5m
Designing for an uncertain future
Back in April 2016, I bought a BÖHM B2 60 Watt 40″ soundbar on sale for $71.39 plus tax, 40% off (otherwise stated: $47.60 off) the normal $118.99 price. I suspect it was a closeout promotion, because a writeup that appeared two months later had the BÖHM B2 further discounted (at a different retailer) to $55, versus a supposed original $200 MSRP. And the BÖHM B2 is seemingly no longer available for sale anywhere; the original Amazon listing has been nuked, and I can’t find an active website for the manufacturer, either:
The BÖHM B2’s sound was admittedly passable at best, but it served its primary purpose: to redirect and amplify audio coming out of a geriatric large-screen computer monitor in my office. And in addition to the 1/8-inch (3.5 mm) TRS analog audio input that I mostly leveraged, it offered an array of other input options convenient for periodic product-testing purposes:
- Dual RCA analog
- S/PDIF digital optical
- S/PDIF digital coaxial, and
- Bluetooth
To wit, I recently hooked up the soundbar to several streaming music receivers that I was evaluating. I realize that my elementary transducer choice was a bit odd, since the Audiolab 6000N Play and Bluesound NODE N130 are primarily targeted at audiophiles. However, my fundamental motivation with this project was to test device functionality versus the outer limits of delivered audio quality.
Initially, I connected both devices—one at a time, of course—to the soundbar’s dual-RCA analog input (thereby leveraging the devices’ built-in DACs), which worked fine. But when I then transitioned to tapping into either of the soundbar’s digital audio inputs (thereby instead leveraging the soundbar’s own DAC), I got nothing but silence from several music sources, specifically Amazon Music and Tidal. In both service cases, I pay extra each month for “audiophile” quality source content, an upgrade which ended up being a critical detail.
Cutting to the chase, after a bit more experimentation I figured out what was going on. Not all the music I tried to play failed, only much of it, specifically the content that was encoded and delivered as “HD”, i.e., beyond Red Book Audio CD quality, with larger-than-16-bit per-channel sample sizes and/or higher-than-44.1 kHz and 48 kHz sample rates. Conversely, all the content I streamed from Pandora and Sirius XM played through the digital interconnect between receiver and soundbar just fine, as did “Red Book Audio”-only content from Amazon Music and Tidal.
Clearly, the soundbar’s integrated DAC didn’t know how to handle incoming HD audio bitstreams, even if “handle” simply meant “downsample”. Fundamentally, this situation was the result of the legacy S/PDIF interface’s unidirectional nature; there’s no support for an upfront “handshake” that would allow the transmitter and receiver to communicate their respective capabilities and negotiate a mutually compatible compromise transport format.
The soundbar’s remote control was starting to get flaky, anyway, and I couldn’t find a replacement anywhere. Plus, speaking of the remote control, its companion IR receiver was located on the far-right end of the soundbar, a not-conveniently-line-of-sight placement in my office setup. So, the BÖHM B2 is currently sitting on my to-donate (with upfront disclosure) pile. I’ve replaced it with a more modern Hisense HS205, which I recently snagged on sale at Amazon for $34.99. Its IR receiver is centrally located, and it handles HD digital audio sources just fine:
Speaking of audio streaming, this time of a wireless nature, I’m reminded of the frustrations with my two Android-based smartphones, Google’s Pixel 4a (5G) and Microsoft’s Surface Duo, and my various Bluetooth headphone sets. When I listen to “Ultra HD” content sourced from Amazon, for example, it reliably streams just fine from both handsets to my Jabra Elite 75ts, Beats Studio Buds, Powerbeats Pros, and Google Pixel Buds Pros. What they all have in common is support for only the SBC and newer AAC codecs. Here’s an example screenshot, taken from Android’s Developer Options, of my Surface Duo’s Bluetooth audio settings when connected to the Pixel Buds Pros:
My Sennheiser Momentum True Wireless 2 earbuds, however, are a different matter. Like my Sennheiser Momentum 2 Wireless headset, they also support the aptX codec, which delivers claimed higher quality albeit at the tradeoff of tangibly higher computational demands than those of its AAC and (especially) SBC siblings. Here’s what the Surface Duo’s settings look like with the smartphone connected to the Sennheiser Momentum True Wireless 2 earbuds:
Fortunately, unlike with the prior S/PDIF case study, Bluetooth does support an upfront handshake between an audio “source” and “sink” to, requoting what I previously wrote, “communicate their respective capabilities and negotiate a mutually compatible compromise transport format”. That’s why with most earbuds, my smartphones select AAC as the streaming codec, picking aptX only for the Sennheisers. But there seems to be no subsequent ongoing monitoring of the stream to assess audio quality (as measured by packet dropouts, etc.) and up- or down-throttle the settings (alter lossy compression quality to modulate the bitrate, as well as tweak the audio codec in use, the sample size and sampling rate, etc.) to compensate. To wit, both smartphones struggle when streaming Ultra HD content to the Sennheisers, the Surface Duo seemingly stuttering more in my subjective experience—a variance that also compels me to point the blame finger at the smartphones, not at their in-common earbuds.
To my earlier comment that aptX has “tangibly higher computational demands than those of its AAC and (especially) SBC siblings,” this comparative result might not make sense at first glance, considering that the Surface Duo’s SoC is beefier than that of the Pixel 4a (5G). Both handsets also contain the same allocation of system RAM. That said, keep in mind that the Surface Duo is (as its name reflects) a dual-screen device, capable of juggling multiple applications (including the Android home screen) at once, albeit with incremental demands on its processing and memory resources. Plus, although recent software enhancements have significantly reduced the Surface Duo’s bugginess, some amount of erratic behavior remains, along with overall seemingly reduced software execution efficiency versus that of the Pixel 4a (5G). Therein lie, I suspect, the root causes of the audio dropouts I experience. I could even argue that Microsoft should have never offered aptX support on the Surface Duo at all…but I digress…
My last case study is also audio-themed, albeit this time not streaming-related. I still regularly use my geriatric iPod classic—which post-SSD upgrade has sufficient capacity to house the entirety of my voluminous music library—when listening to tunes both in my Volvo (where I rely on a cigarette lighter power adapter along with the vehicle’s sound system’s analog audio-input support) and my Jeep (via a third-party 30-pin adapter that directly taps into the factory sound system and handles not only audio transfer but also iPod remote control and charging).
(mine’s black in color)
For in-house charging and to-iPod music transfer purposes, on the other hand, I’m still clinging to a few legacy USB-to-30-pin cables:
Along with (the key focus of this piece) a few legacy Apple 5W power adapters:
Why the ongoing attachment to no-longer-sold, limited-current AC-to-USB chargers? Don’t I alternatively have plenty of higher-wattage single- and multi-port chargers also lying around the home office? Yes, I do, in fact…but the iPod classic doesn’t work with the bulk of them. Its early, now overly rigid charging-circuit (and/or software, I’m not sure which) implementation seemingly doesn’t handshake correctly with a higher-power source to negotiate 1 A-max output current; instead, the iPod classic flat-out won’t charge at all, as a means of “protecting” itself.
Forecasting and designing for the future—accurately forecasting, to be precise—is difficult at best and often essentially impossible, especially if the standards you’re designing to aren’t similarly forward-looking. I get it. But as these examples hopefully get across, not doing so can doom the product into which you’ve poured significant development time and effort to a premature demise once it gets into customers’ hands. If you’re involved in developing industry standards, please strive to make them as flexible and forward-looking as possible. And if you’re developing hardware and/or software products based on those standards, please design them—within reasonable bill-of-materials and time costs, of course—as backwards-compatible supersets of those standards, again to maximize their usable life. Your customers—not to mention the otherwise consumption-crazed world at large—will thank you for keeping stuff out of the landfill. And longer term, your company will be rewarded with repeat-customer loyalty.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Streaming music reception: implementation differentiation
- Obsolescence by design, defect, or corporate decree
- Microsoft embraces obsolescence by design with Windows 11
- Obsolescence by design hampers computer systems
- Component obsolescence: Is your process up to the challenge?
Predictive transient simulation analysis for the GPUs

Nowadays, graphics processing units (GPUs) feature tens of billions of transistors. With each new generation of GPUs, the number of transistors in GPUs continues to increase to improve processor performance. However, the growing number of transistors is also resulting in an exponential increase in power demand, which makes it more difficult to meet transient response specifications.
This article demonstrates how to use the SIMPLIS simulator from SIMPLIS Technologies to predict and optimize the behavior of power supplies for the next generation of GPUs, where high slew rate requirements and current levels exceeding 1,000 A demand faster transient response.
Constant-on-time (COT) control
The constant-on-time (COT) architecture of multi-phase buck converters replaces the error amplifier (EA) in the compensation network with a high-speed comparator. The output voltage (VOUT) is sensed via the feedback resistors and compared to a reference voltage (VREF). When VOUT drops below VREF, the high-side MOSFET (HS-FET) turns on. The MOSFET’s on time is fixed, meaning that the converter can achieve constant frequency in steady state. If there are load step transients, the converter can also significantly increase its pulse rate to minimize the output undershoot. In this scenario, however, the nonlinear loop control complicates loop tuning.
Figure 1 shows COT control for fast transient response.
Figure 1 COT control achieves fast transient response. Source: Monolithic Power Systems
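As a rough illustration of how the fixed on-time sets the steady-state switching frequency, the sketch below uses the idealized, lossless buck relationship D = VOUT/VIN = tON × fSW; the voltage and on-time values are illustrative assumptions, not MP2891 or MPC22163-130 parameters.

```python
# Idealized constant-on-time (COT) buck: steady-state switching frequency.
# In steady state, duty cycle D = VOUT / VIN = t_on * f_sw, so f_sw = VOUT / (VIN * t_on).
# The values below are assumptions for illustration only.
v_in = 12.0     # input voltage (V), assumed
v_out = 0.8     # GPU core rail voltage (V), assumed
t_on = 70e-9    # fixed per-phase on-time (s), assumed

f_sw = v_out / (v_in * t_on)
print(f"Steady-state per-phase switching frequency: {f_sw / 1e3:.0f} kHz")
# During a load step, the comparator fires extra on-time pulses back to back,
# momentarily pushing the effective pulse rate well above this steady-state value.
```

With these assumed values the per-phase frequency lands near 950 kHz in steady state, while the pulse rate can rise sharply during a load step, which is exactly the nonlinear behavior that complicates loop tuning.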
The converter’s behavior and the power delivery network (PDN) must be accurately modeled to emulate the transient buck performance and validate various GPU-based systems without having to go through a long, costly iterative process.
Power delivery network (PDN)
The PDN comprises the components connected to the voltage and ground rails, including the power and ground plane layout, the decoupling capacitors used for power stability, and any other copper features that connect or couple to the main power rails. The PDN design’s primary objective is to minimize voltage fluctuations and ensure normal GPU operation.
Figure 2 shows the PDN architecture of a typical GPU power delivery network.
Figure 2 The PDN architecture of a typical GPU power delivery network comprises components connected to the voltage and ground rail. Source: Monolithic Power Systems
The components in the PDN display parasitic behaviors, such as the equivalent series inductance (ESL) and equivalent series resistance (ESR) of the capacitor. These parasitic elements must also be considered when modeling the system response. Increasing the slew rates generates more powerful high-frequency harmonics. The PDN’s resistor, inductor, and capacitor (RLC) components create resonant tanks that designers may not be aware of, with resonant frequencies that amplify the high-frequency harmonics created by the converter’s switching, leading to unexpected converter behavior.
Table 1 shows the typical power rail requirements for artificial intelligence (AI) applications.
Table 1 The above numbers highlight design specifications for the power rail. Source: Monolithic Power Systems
This analysis has been performed using an evaluation board that combines MP2891, a 16-phase digital controller, and MPC22163-130, a 130 A, two-phase, non-isolated, step-down power module. The evaluation board can reach up to 2,000 A (Figure 3).
Figure 3 The evaluation board combines digital controller and step-down power module. Source: Monolithic Power Systems
PCB modeling
The complexity of the power and ground polygon shapes and the multi-layer stack-up make it difficult to manually calculate the resistance and inductance from the layout. Instead, the PCB’s scattering parameters (S-parameters) can be extracted using Cadence Sigrity PowerSI, with a 0 MHz to 700 MHz frequency range. The ports are defined as follows: Port 1 includes the vertical modules on the top side; Port 2 includes the vertical MPC22163-130 modules on the bottom side; Port 3 includes the capacitor connection; and Port 4 includes the connection to load.
Figure 4 Extracting the PCB’s S-parameters requires specific port configurations. Source: Monolithic Power Systems
It is important to allocate special ports for the capacitor connections since their effectiveness in mitigating fast transients from the GPU depends on both the quantity and placement. Different capacitor positions affect the PCB’s S-parameters, where ineffective positioning can lead to poor transient mitigation and inefficient power. Generally, it is recommended to place capacitors in a row to minimize differences in path length and to select the capacitance based on the resonant frequency required to meet the target impedance specification.
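Once the Touchstone file has been exported from PowerSI, a quick sanity check of the impedance seen at each port can be run before committing to the full transient simulation. The sketch below uses the open-source scikit-rf package; the file name and port ordering are assumptions for illustration.

```python
# Quick sanity check of the extracted PDN S-parameters before the full SIMPLIS run.
# "pdn_board.s4p" is a hypothetical file name; the port ordering is assumed to follow
# the assignment described above (port 4 = connection to the load).
import numpy as np
import skrf as rf

ntwk = rf.Network("pdn_board.s4p")   # 4-port Touchstone data, 0 MHz to 700 MHz
z = ntwk.z                           # Z-parameters, shape (n_freq, 4, 4)

z_load = np.abs(z[:, 3, 3])          # self-impedance magnitude at the load port
f_mhz = ntwk.f / 1e6
idx = int(np.argmin(z_load))
print(f"Minimum |Z| at load port: {z_load[idx] * 1e3:.2f} mOhm at {f_mhz[idx]:.1f} MHz")
```

Plotting the same self-impedance across frequency also exposes any unintended resonant peaks introduced by the plane and via parasitics before they show up as surprises in the time-domain results.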
Two different capacitor types are used in this PDN board design: bulk capacitors and MLCC capacitors. Parameters such as voltage, temperature rating, and construction materials impact the frequency at which the capacitors are effective at filtering. Therefore, to optimize the design, designers must consider the capacitor’s impedance profile using a lumped-capacitance model in the simulations (see Figure 5).
Figure 5 The equivalent bulk capacitor model and frequency response evaluate the capacitor’s impedance profile. Source: Monolithic Power Systems
CBYPASS, ESL, and ESR in the lumped-capacitance model define the frequency response of the capacitor’s impedance. The resonance frequency (fO), or the minimum impedance point, can be determined with Equation (1):
fO = 1 / (2π × √(L × C)) (1)
The primary objective of these capacitors is to maintain a low impedance at the high frequencies where the voltage regulator module (VRM) is inefficient. This inefficiency occurs because the VRM’s effective bandwidth and phase margin lie at low frequencies (<1 MHz). Thus, the capacitors must filter out signals with frequencies outside of the VRM’s bandwidth, which typically ranges from a few hundred kHz to a few MHz, since those signals can otherwise affect the PDN’s operation.
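As a numerical illustration of Equation (1), the sketch below computes the self-resonant frequency for one bulk capacitor and one MLCC; the capacitance and ESL values are assumptions chosen for illustration, not values taken from the evaluation board’s bill of materials.

```python
# Self-resonant frequency of a decoupling capacitor, per Equation (1):
# fo = 1 / (2 * pi * sqrt(L * C)). Capacitance and ESL values are assumptions.
import math

def resonant_freq(c_farads, esl_henries):
    return 1.0 / (2.0 * math.pi * math.sqrt(esl_henries * c_farads))

capacitors = {
    "bulk (470 uF, 3 nH ESL)": (470e-6, 3e-9),
    "MLCC (22 uF, 0.5 nH ESL)": (22e-6, 0.5e-9),
}

for name, (c, l) in capacitors.items():
    print(f"{name}: fo = {resonant_freq(c, l) / 1e6:.2f} MHz")
```

With these assumed values the bulk capacitor resonates near 0.13 MHz and the MLCC near 1.5 MHz, which is consistent with bulk parts covering the low-frequency region and MLCCs covering the mid-frequency region of the PDN profile discussed next.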
Figure 6 shows a typical PDN impedance profile that can be divided into three regions: low frequency (0 MHz to 1 MHz), mid-frequency (1 MHz to 100 MHz), and high frequency (above 100 MHz). This correlation only considers the VRM and the motherboard, which are in the low- to mid-frequency range, and the transient load is applied on the ball grid array (BGA) connector.
Figure 6 The PDN impedance profile shows three different frequency ranges. Source: Monolithic Power Systems
Time domain simulation and correlation
Transient simulation is conducted using the SIMPLIS simulator, a switching power-system circuit simulator that can model nonlinear features such as COT control. The MP2891 digital controller’s SIMPLIS model is combined with the MPC22163-130 step-down module and the previously extracted PCB S-parameters. The S-parameters must be converted to an RLGC model using IdEM from Dassault Systèmes before being used in the SIMPLIS simulator for transient analysis.
Figure 7 shows the SIMPLIS model of the MP2891 and MPC22163-130, where the S-parameters are added to the schematic as series inductors (L9 and L3) and resistors (R1 and R2).
Figure 7 The SIMPLIS model conducts transient simulation of the MP2891 and MPC22163-130. Source: Monolithic Power Systems
The SIMPLIS simulation combines the MP2891 digital controller’s nonlinearity with accurate power delivery modeling to enable accurate prediction of transient behavior on the motherboard. Figure 8 shows a comparison of the SIMPLIS simulation and lab measurement, where the difference is only 5 mV.
Figure 8 There is only a 5 mV difference between the SIMPLIS simulation and lab measurement. Source: Monolithic Power Systems
Why transient simulation?
This article demonstrated predictive transient simulation using a multi-phase controller and a two-phase, non-isolated, high-efficiency step-down power block on an evaluation board. Precise converter models and power delivery network parameters allow for accurate prediction of the multi-phase buck converter’s performance, transient droop, and overshoot.
As a result, it is possible to optimize the processor design in the early stages by reducing the number of output capacitors and determining their effective placement. Furthermore, if the design specifications change, accurate simulation enables making a quick assessment of the impact of these changes, as well as identifying any potential issues.
Marisol Cabrera is applications engineering supervisor at Monolithic Power Systems.
Tomas Hudson is applications engineer at Monolithic Power Systems.
Marlon Eguia is applications engineer at Monolithic Power Systems.
Related Content
- GPU-Powered SPICE Simulator
- Modeling and Simulation in Power Electronics
- A Comparison of Power-Electronics Simulation Tools
- GPU-Based Analytics Platform Interprets Large Datasets
Process design kit leverages POI substrate for RF filters

Sawnics announced a process design kit (PDK) based on Soitec’s Connect piezo-on-insulator (POI) substrates to accelerate RF filter design for 5G smartphones. The South Korean foundry, with expertise in surface acoustic wave (SAW) filters, expects the Connect POI PDK will create new opportunities for the development and production of state-of-the-art 5G smartphone filters.
The PDK simplifies the development and production of filters built on Connect POI engineered substrates by reducing the number of design iterations, while meeting increasingly stringent 5G requirements. It also enables fabless companies to gain easier access to this RF filter technology.
Produced with Soitec’s Smart Cut layer transfer technology, Connect POI substrates are well-suited for the manufacture of new-generation SAW filters. Compared with technologies based on conventional materials, Connect POI products provide built-in temperature compensation, facilitate the integration of multiple filters on a single die, and help to reduce power consumption.
“Soitec’s Connect POI substrates were chosen for their unique value for RF filters,” said Jason Chung, vice president of Sawnics’ foundry business unit. “We are excited to work with Soitec’s substrates, which multiply filter performance when compared with filters built using bulk piezoelectric wafers. The volume production ramp on Connect POI products is expected to start in the second half of 2023.”
Handheld spectrum analyzer expands frequency range

Spectrum Compact, a family of handheld spectrum analyzers from SAF Tehnika, now offers a version with a frequency range of 10 MHz to 3000 MHz. The lineup comprises seven handheld spectrum analyzers spanning 10 MHz to 87 GHz, with each device dedicated to a specific frequency range.
Spectrum Compact field-ready analyzers for RF parameter measuring and troubleshooting can be used in a wide range of communication applications, from public safety and broadcasting to transportation and electronic warfare. The series also performs RF physical layer measurements for wireless public communications (P25/APCO-25, DMR) and utilities (SCADA).
The Spectrum Compact 10-3000 MHz model provides resolution bandwidth settings of 10 kHz, 30 kHz, 100 kHz, and 300 kHz. It guarantees a displayed average noise level of ≤ -125 dBm at a resolution bandwidth of 10 kHz. Intended for outdoor use, the analyzer operates over a temperature range of -15°C to +55°C and carries an IP54 rating for dust and water resistance. A resistive LCD touchscreen features an intuitive interface and allows users to operate the analyzer with gloves on.
Visit the dedicated Spectrum Compact website to learn more about these handheld spectrum analyzers and to request a price quote.
ModalAI shrinks drone autopilot size and weight

At 42×42 mm, the VOXL 2 Mini drone autopilot from ModalAI achieves a 30% reduction in area compared to the VOXL 2 to enable even smaller drones. Despite being the size of an Oreo cookie and weighing just 11 g, the redesigned autopilot packs all of the autonomous AI and computing capabilities of its larger predecessor.
The VOXL 2 Mini is powered by the Qualcomm QRB5165 SoC with an 8-core CPU, Hexagon Tensor accelerator, GPU, DSP, and NPU. Its AI engine is capable of delivering 15 TOPS to run complex AI and deep learning workloads. The SoC also furnishes 8 GB of LPDDR5 memory and 128 GB of flash.
VOXL 2 Mini accepts four MIPI-CSI image sensor inputs, while preconfigured accessories provide WiFi, 4G/5G, and Microhard connectivity. The diminutive autopilot pairs with the VOXL ESC Mini, a 5.8-g 4-in-1 electronic speed controller with an integrated power management system and closed- or open-loop RPM control with feedback. Coupled with perception sensors, this computing-dense autonomous stack enables obstacle avoidance, obstacle detection, and GPS-denied navigation.
Supported by ModalAI’s VOXL SDK, the VOXL 2 Mini is a Blue UAS Framework autopilot with a 30.5×30.5-mm industry-standard frame mount. It is available for order, with prices starting at $1169.99.
120-GHz radar transceiver enables in-cabin monitoring

The Silicon Radar TRX_120_067 from indie Semiconductor is a 120-GHz radar frontend (RFE) transceiver with on-chip antennas for automotive radar systems. As vehicle OEMs look to continually improve sensing resolution, radar is becoming increasingly important for in-cabin driver and occupant monitoring systems (DMS/OMS). At 120 GHz, the TRX_120_067 RFE meets the requirements for vital sign detection, such as heartbeat and respiration, to ensure occupant safety.
According to indie, the higher license-free frequencies of the ISM band support the use of antenna-on-chip techniques that simplify PCB design, minimize sensor form factor, and reduce cost. These factors are particularly important for DMS/OMS applications where external antennas impose industrial design limitations unacceptable to automotive manufacturers.
In addition to transmit and receive antennas, the TRX_120_067 RFE transceiver integrates a low-noise amplifier, quadrature mixers, a poly-phase filter, voltage-controlled oscillator, and divide-by-32 outputs in its 8×8-mm QFN56 plastic package. It operates from a single 3.3-V supply with power consumption of 380 mW in continuous operating mode. The ESD-protected device supports both frequency modulated continuous wave (FMCW) and continuous wave (CW) operating modes.
The SiGe BiCMOS TRX_120_067 RFE transceiver is sampling now, with high-volume ramp slated for Q2 2024.
Thermal camera core meets challenging imaging demands

BAE Systems’ Athena 1920 thermal camera core now supports missions requiring 360° situational awareness, vehicle protection, and space-based surveillance. The uncooled 12-µm LWIR sensor delivers high-definition 1920×1200-pixel infrared imagery with minimal blur and sharp detail at night and in harsh weather.
Small and lightweight, the updated Athena 1920 has protective coatings that resist humidity, heat, and corrosion. For operation in space and high-altitude environments, the sensor provides redundant software-based single-event upset (SEU) mitigation to help reduce the impacts of harmful radiation. It also offers two frame rate options (30 Hz and 60 Hz) and frame synchronization for more image depth.
Image captures from multiple Athena 1920 camera cores running at the same time can be chained together for broader situational awareness, including real-time 360° sensing for a ground vehicle. Sensor hardening also enables broad-view night vision images from various platforms, including aircraft, unmanned aerial vehicles, and satellites.
Meditations on photomultipliers

A photomultiplier is a light detection device that consists of a specialized vacuum tube, which is constructed along the lines of Figure 1. Responding to as little as a single photon, these devices allow meaningful measurements to be made on very low light level inputs.
There is a collection of elements called dynodes placed between the photocathode and the anode which are biased at individual voltages by a string of resistors connected from the high voltage to ground. Each dynode is coated with a material which, when impinged upon by an incoming photon or an electron, emits secondary electrons that get accelerated by the electrostatic field between dynodes. Going from dynode to dynode yields more and more secondary electrons which, in the case of the last dynode, get accelerated to the anode where they get collected.
A single incoming photon striking the photocathode can set off this cascade and yield a detectable output signal at the anode.
Figure 1 An illustration of the operation of a photomultiplier converting incoming photons/electrons into amplified electrical signals through the electrostatic field between the dynodes. Source: John Dunn
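To get a feel for the numbers, the overall gain of the dynode chain is roughly the per-stage secondary-emission ratio raised to the power of the number of dynodes. The sketch below uses typical textbook values as assumptions rather than figures for any particular tube.

```python
# Approximate photomultiplier gain: each dynode multiplies the electron count by the
# secondary-emission ratio delta, so the total gain is roughly delta ** n_dynodes.
# delta and n_dynodes are typical illustrative values, not figures for a specific tube.
delta = 4            # secondary electrons emitted per incident electron, assumed
n_dynodes = 10       # number of dynode stages, assumed
electron_charge = 1.602e-19   # coulombs

gain = delta ** n_dynodes
print(f"Gain: {gain:.2e} electrons per photoelectron")
print(f"Anode charge per single-photon event: {gain * electron_charge:.2e} C")
```

With these assumptions, a single photoelectron produces on the order of a million electrons, roughly 0.17 pC, at the anode, which ordinary electronics can detect without difficulty.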
I always felt that the term “photomultiplier” is rather a misnomer. Each incoming photon sets off a cascade of events leading to a whole lot of electrons reaching the anode. This is more of an electron multiplying process, but the word is in our vocabulary so that’s that.
The University of Michigan once undertook an experiment to look for proton decay. The setup was in an abandoned Morton salt mine, a huge cavern, which was lined with photomultipliers and then filled with water. The thinking was that the hydrogen atoms of each water molecule conveniently exposed their nuclei, each of which is a single proton. If a proton were to spontaneously decay, there would be a light signature for the event, which the photomultipliers would detect. I was employed at Bertan High Voltage at the time, and the high voltage power supplies for the photomultipliers were made by you know who.
The experiment ran for a number of years, during which no proton decay event was ever seen. However, the experiment was considered a success anyway because it set a new minimum value for the proton decay half-life. So far as I have read, no subsequent experiment has detected proton decay either, but the present best theoretical estimate of the time to such an event has been raised to 10²⁹ years.
I don’t think I’ll wait.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Measure of neutrinos, nature’s most elusive particles
- Photodiodes and other Light Sensors, Part 1
- You did what?
- Slideshow: MINOS neutrino study hunts nature’s “ghost particles”