EDN Network

Voice of the Engineer

14-bit scopes boost resolution for general use

Fri, 09/06/2024 - 17:30

The InfiniiVision HD3 series oscilloscopes from Keysight employ a 14-bit ADC, offering four times the vertical resolution of 12-bit scopes. With a noise floor of 50 µV RMS, they also reduce noise by half compared to other general-purpose scopes.
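To put the extra bits in perspective, here's a quick back-of-the-envelope comparison of quantization step size, assuming a 1-V full-scale vertical range purely for illustration:

# Back-of-the-envelope comparison of ADC vertical resolution.
# A 1-V full-scale range is assumed purely for illustration.
full_scale_v = 1.0

for bits in (12, 14):
    levels = 2 ** bits                      # number of quantization levels
    lsb_uv = full_scale_v / levels * 1e6    # size of one LSB, in microvolts
    print(f"{bits}-bit ADC: {levels} levels, LSB = {lsb_uv:.1f} uV")

# 14 bits gives 16,384 levels versus 4,096 for 12 bits: four times the
# vertical resolution, i.e., each quantization step is one quarter the size.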

 

Covering bandwidths between 200 MHz and 1 GHz, the HD3 series enables engineers to detect even the smallest signal anomalies. Its 14-bit resolution, low noise floor, and update rate of 1.3 million waveforms/s ensure fast and precise debugging across all measurements.

The oscilloscopes provide two or four analog channels, along with 16 digital channels. A deep memory of up to 100 million points captures longer time spans at the full sample rate of 3.2 Gsamples/s, enhancing measurement and analysis results. Additionally, the HD3 series introduces Fault Hunter software, which automatically analyzes signal characteristics based on user-definable criteria.

Prices for the InfiniiVision HD3 series start at $8323 for a 2-channel oscilloscope and $9187 for a 4-channel model.

InfiniiVision HD3 product page

Keysight Technologies 



The post 14-bit scopes boost resolution for general use appeared first on EDN.

Interconnect underdogs steering chiplet design bandwagon

Fri, 09/06/2024 - 14:33

The chiplet movement is gaining steam, as is apparent from how this multi-die silicon premise dominates the program of the AI Hardware and Edge AI Summit, to be held in San Jose, California, from 10 to 12 September 2024. The annual summit focuses on deep-tech and machine-learning ecosystems to explore advancements in artificial intelligence (AI) infrastructure and edge deployments.

At the event, Alphawave Semi’s CTO Tony Chan Carusone will deliver a speech on chiplets and connectivity while showing how AI has emerged as the primary catalyst for the rise of chiplet ecosystems. “The push for custom AI hardware is rapidly evolving, and I will examine how chiplets deliver the flexibility required to create energy-efficient systems-in-package designs that balance cost, power, and performance without starting from scratch,” he said while talking about his presentation at the event.

Figure 1 Chiplets have played a vital role in creating silicon solutions for AI, and that’s extending to 6G communication, data center networking, and high-performance computing (HPC). Source: Alphawave Semi

At the summit, Alphawave Semi will showcase an advanced HBM3 sub-system designed for AI workloads as well as AresCORE, a 3-nm, 24-Gbps UCIe die-to-die interconnect integrated with TSMC CoWoS advanced packaging. There will also be a live demonstration of die-to-die (D2D) traffic at 24 Gbps per lane.

LG’s chiplet design

Another chiplets-related announcement involves leading consumer electronics manufacturer LG Electronics, which has created a system-in-package (SiP) encompassing chiplets with processors, DDR memory interfaces, AI accelerators, and D2D interconnect. Blue Cheetah Analog Design provided its BlueLynx D2D interconnect subsystem IP for this chiplet-based design.

Figure 2 Chiplet designs demand versatile interconnect solutions that minimize die-to-die latency and support a variety of packaging requirements. Source: Blue Cheetah

BlueLynx D2D interconnect provides customizable physical (PHY) and link layer chiplet interfaces and supports both Universal Chiplet Interconnect Express (UCIe) and Bunch of Wires (BoW) standards. Moreover, the PHY IP solutions can be integrated with on-die buses using popular standards such as AMBA, CHI, AXI, and ACE.

The D2D interconnect IP is available for 16-nm, 12-nm, 7-nm, 6-nm, 5-nm, and 4-nm process nodes and is supported across multiple fabs. It also facilitates both standard and advanced packaging while supporting multiple bump pitches, metal stacks, and orientations.

Related Content


The post Interconnect underdogs steering chiplet design bandwagon appeared first on EDN.

Fuse failures

Thu, 09/05/2024 - 16:27

A fuse placed in series with a current path protects against excessive current flow in that path. Although fuses are rated for clearing in terms of “I²t”, where “I” is in amperes and “t” is time in seconds, my personal view is that such ratings are of dubious value from a protective-calculation standpoint. To me, a fuse is either a “fast blow” device or a “slow blow” device at whatever amperage applies, and which type of fuse to select isn’t always cut and dried, straightforward, or unambiguously obvious.
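That said, for readers who do want to sanity-check a fuse against its published I²t number, here's a minimal sketch of the comparison; the rating and surge values below are purely illustrative, not taken from any datasheet.

# Compare the I^2*t of a rectangular current pulse against a fuse's
# published clearing I^2*t rating. All values are illustrative only.
def pulse_i2t(current_a: float, duration_s: float) -> float:
    """I^2*t of a rectangular current pulse, in A^2*s."""
    return current_a ** 2 * duration_s

fuse_clearing_i2t = 10.0                              # A^2*s, hypothetical rating
surge = pulse_i2t(current_a=40.0, duration_s=0.005)   # 40 A for 5 ms

print(f"Surge I^2*t = {surge:.1f} A^2*s")
if surge >= fuse_clearing_i2t:
    print("The fuse is expected to clear (blow) on this surge.")
else:
    print("The fuse is expected to ride through this surge.")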

Some fuses contain their innards inside glass, which lets you see the current-carrying element, and some do not. Where glass lets you see that element, actual fuse blowouts can be instructive about the overload condition that caused them, and can perhaps lead you to reselect that fuse’s rating or to fix some other problem.

Figure 1 An intact fuse and two blown fuses: one from a moderate current overload and one from a massive current overload.

The middle case in the above figure is a fuse that is probably underrated for whatever application it has been serving. Using a somewhat higher I²t device might be a good idea.

However, the lower case shows a fuse that got hit with a blast of overload current that was way, way, way beyond reason and something elsewhere in the system in which this fuse played a role had just plain better be corrected.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Fuse failures appeared first on EDN.

Brute force mitigation of PWM Vdd and ground “saturation” errors

Wed, 09/04/2024 - 17:33

An excerpt from Christopher Paul’s “Parsing PWM (DAC) performance: Part 1—Mitigating errors”:

 “I was surprised to discover that when an output of a popular µP I’ve been using is configured to be a constant logic low or high and is loaded only by a 10 MΩ-input digital multimeter, the voltage levels are in some cases more than 100 mV from supply voltage VDD and ground…Let’s call this saturation errors.”

Wow the engineering world with your unique design: Design Ideas Submission Guide

The accuracy of PWM DACs depends on several factors, but none is more important than their analog switching elements’ ability to reliably and precisely output zero and reference voltage levels in response to the corresponding digital states. Sometimes however, as Christopher Paul observes in the cited design idea (Part 1 of a 4-part series), they don’t. The mechanism behind these deviations isn’t entirely clear, but if they could be reliably eradicated, the impact on PWM performance would have to be positive. Figure 1 suggests a (literally) brute-force fix.

Figure 1 U1 is a multi-pole (e.g., 74AC04 hex inverter) PWM switch where op-amp A1 forces the switch’s zero state to accurately track 0 = zero volts, while op-amp A2 does the same job for 1 = Vdd.

U1 pin 5’s connection to pin 14 drives pin 6 to logic 0, sensed by A1 pin 6. A1 pin 7’s connection to U1 pin 7 forces the pin 6 voltage to exactly zero volts, and thereby forces any U1 output to the same accurate zero level when the associated switch is at logic 0.

Similarly, U1 pin 13’s connection to pin 7 drives pin 12 to logic 1, sensed by A2 pin 2. A2 pin 1’s connection to U1 pin 14 forces the pin 12 voltage to exactly Vdd, and thereby forces any U1 output to the same accurate Vref level when the associated switch is at logic 1.

Thus, any extant “saturation errors” are forced to zero, regardless of the details of where they’re actually coming from.
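To see why nulling these offsets matters, here's a quick model of how rail “saturation errors” show up at a filtered PWM DAC output; the numbers are illustrative, not measured from any particular part.

# Average (filtered) output of a PWM DAC is the duty-cycle-weighted mix of
# the switch's actual high and low levels. Illustrative numbers only.
def pwm_dac_out(duty: float, v_low: float, v_high: float) -> float:
    """Average output of a filtered PWM waveform."""
    return duty * v_high + (1.0 - duty) * v_low

vdd = 5.0
duty = 0.25

ideal     = pwm_dac_out(duty, 0.0, vdd)         # perfect rails
saturated = pwm_dac_out(duty, 0.1, vdd - 0.1)   # ~100-mV saturation errors

print(f"Ideal output:           {ideal:.3f} V")
print(f"With saturation errors: {saturated:.3f} V")
print(f"Resulting error:        {(saturated - ideal) * 1000:+.0f} mV")

With both rails off by about 100 mV, a 25% duty cycle lands roughly 50 mV away from its ideal value, which is exactly the class of error the servo loops described above are meant to null.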

Vdd will typically be ca. 5.00 V, and V+ and V- can come from a single 5-V supply via any of a number of discrete or monolithic rail-boost circuits. Figure 2 shows one practical possibility.

Figure 2 A practical source for V+ and V-; set R1 = R2 = 200k for ∆ = 1 volt.

The Figure 2 circuit was originally described in “Efficient digitally regulated bipolar voltage rail booster”.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Brute force mitigation of PWM Vdd and ground “saturation” errors appeared first on EDN.

Raspberry Pi Products Now Available at TME

Wed, 09/04/2024 - 15:00
Raspberry Pi.


Raspberry Pi, globally recognized for its single-board computers, has revolutionized the education and hobbyist sectors and found applications in industrial equipment. Their versatile products are now included in the TME product catalogue, making advanced technological solutions more accessible. 

Diverse Product Range 

Raspberry Pi’s offerings extend well beyond their renowned single-board computers. Their motherboards feature essential ports like USB, HDMI, and Ethernet, along with SD card slots and GPIO connectors for versatile project integration. The latest Raspberry Pi 5 model introduces a quad-core 64-bit Broadcom BCM2712 processor, up to 8 GB of RAM, and enhanced features such as a PCIe extension port, a power switch, and an RTC clock system.  

Raspberry Pi.

Innovative Models 

Raspberry Pi 5: It is equipped with the quad-core, 64-bit Broadcom BCM2712 system, based on the Arm Cortex-A76 architecture and clocked at 2.4 GHz, delivering a 2-3x increase in CPU performance relative to the Raspberry Pi 4. Moreover, the computer can be fitted with up to 8 GB of RAM and a VideoCore VII graphics processor (GPU) supporting the OpenGL and Vulkan technologies.  

Raspberry Pi 5.

Raspberry Pi 400: This model integrates the RPi 4 board into a keyboard housing, reminiscent of classic microcomputers. It comes with a mouse, power supply, pre-installed operating system, and a detailed manual, making it particularly appealing to beginners and younger users.

Raspberry Pi 400.

Raspberry Pi Zero: Known for its compact size and energy efficiency, the Raspberry Pi Zero is ideal for mobile devices and IoT projects. Despite its smaller form factor, it includes essential features like an HDMI connector, USB output, SD card slot, CSI port, and a built-in wireless communication module (Zero W variant).

For projects requiring different formats or more powerful processing capabilities, Raspberry Pi offers Compute Modules. These miniaturized versions provide the core motherboard components without additional ports, allowing for custom configurations via high-density connectors. The CM4 variants offer SMD board-to-board connectors for an even lower profile, enhancing their flexibility for various applications.

Raspberry Pi Zero.

RP2040 Microcontroller

Recognizing the need for simpler projects, Raspberry Pi developed the RP2040 microcontroller, based on the dual-core Arm Cortex-M0+ architecture. This microcontroller, featured in the Raspberry Pi Pico module, includes 264 KB of RAM, supports external flash memory of up to 16 MB, and integrates various peripherals such as serial bus controllers, ADC converters, and PWM generators. The Pico module, with its small size and ease of use, is ideal for a wide range of applications.
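For a flavor of how little code a Pico project needs, here's a minimal MicroPython sketch (the GPIO assignments are arbitrary examples, not tied to any particular board layout) that reads an analog input and mirrors it onto a PWM output:

# Minimal MicroPython sketch for the Raspberry Pi Pico (RP2040):
# read an analog input and mirror it onto a PWM output.
from machine import ADC, PWM, Pin
import time

adc = ADC(26)          # ADC0 on GP26 (example pin choice)
pwm = PWM(Pin(15))     # PWM output on GP15 (example pin choice)
pwm.freq(1000)         # 1-kHz PWM carrier

while True:
    level = adc.read_u16()   # 0..65535
    pwm.duty_u16(level)      # duty cycle tracks the analog input
    time.sleep_ms(10)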

RP2040 Microcontroller.

Comprehensive Accessories

Raspberry Pi also offers a wide range of accessories, including power supply modules and enclosures designed to ensure trouble-free operation and practical usability. Enclosures protect the PCB and components while providing access to all necessary ports and connectors. Raspberry Pi also manufactures peripherals like mice and keyboards, as well as the Raspberry Pi Touch Display, a 7-inch screen with a touch panel that connects via DSI and is powered through GPIO.

The inclusion of Raspberry Pi products in the TME catalogue significantly broadens the availability of cutting-edge technology for educators, hobbyists, and industrial designers. With TME’s extensive inventory, the latest Raspberry Pi solutions are now within easy reach, ready to bring your ideas to life.

Learn More


The post Raspberry Pi Products Now Available at TME appeared first on EDN.

Will 2024 be the year of advanced packaging?

Wed, 09/04/2024 - 13:21

Advanced packaging technology continues to make waves this year after being a prominent highlight in 2023 and is closely tied to the fortunes of a new semiconductor industry star: chiplets. IDTechEx’s new report titled “Advanced Semiconductor Packaging 2024-2034: Forecasts, Technologies, Applications” explores advanced packaging’s current landscape while going into detail about emerging technologies such as 2.5D and 3D packaging.

Figure 1 2.5D and 3D packaging facilitate greater interconnection densities for chips serving applications like AI, data centers, and 5G. Source: IDTechEx

After fabs manufacture chips on silicon wafers through various advanced processes, packaging facilities receive completed wafers from fabs, cut them into individual chips, assemble or “package” them into final products, and test them for performance and quality. These packaged chips are then shipped to original equipment manufacturers (OEMs).

That’s part of the traditional semiconductor manufacturing value chain, in which engineers build system-on-chips (SoCs) on silicon wafers and then move them to conventional packaging processes. Enter chiplets: individual system functions are manufactured as standalone dies, or chiplets, on a wafer, and these separate functionalities are then integrated into a system through advanced packaging.

This premise brings advanced packaging to the forefront of semiconductor manufacturing innovation. In fact, the future of chiplets is intertwined with advancements in advanced packaging, where 2.5D and 3D technologies are rapidly taking shape to facilitate the commercial realization of chiplets.

2.5D and 3D packaging

While 1D and 2D semiconductor packaging technologies continue to dominate many applications, future advances center on 2.5D and 3D packaging to realize the more-than-Moore semiconductor era. These technologies leverage wafer-level integration to miniaturize components, leading to greater interconnection densities.

Figure 2 Advanced packaging techniques like 2.5D and 3D improve system bandwidth and power efficiency by increasing I/O routing density and reducing I/O bump size. Source: Siemens EDA

2.5D technology, which facilitates larger packaging areas, mandates a shift from silicon interposers to silicon bridges or other alternatives such as high-density fan-out. But packaging components of different materials together also leads to many challenges. The IDTechEx report asserts that finding the right materials and manufacturing techniques is critical for 2.5D packaging adoption.

Next, 3D packaging brings new structures into play, including stacking one active die on top of another and reducing bump-pitch distance. This 3D technique—called hybrid bonding—can be used for applications such as CMOS image sensors, 3D NAND flash and HBM memory, and chiplets. However, like 2.5D packaging, 3D packaging entails manufacturing and cost challenges, as techniques like hybrid bonding demand new high-quality tools and materials.

OSAT and EDA traction

The development of an ecosystem often offers vital clues about the future of a nascent technology like advanced packaging. While challenges abound, recent semiconductor industry announcements bode well for IC packaging capabilities in the 2.5D and 3D eras.

Amkor, a major outsourced semiconductor assembly and test (OSAT) service provider, is investing approximately $2 billion to build an advanced packaging and test facility in Peoria, Arizona. The 55-acre site will be ready for production in a couple of years.

Then there is Silicon Box, an advanced panel-level packaging foundry focusing on chiplet integration, packaging, and testing. After setting up an advanced packaging facility in Singapore, the company is building a new site in northern Italy to better serve fabs in Europe.

EDA toolmakers are also paying attention to this promising new landscape. For instance, Siemens EDA is working closely with South Korean OSAT nepes to expand its IC packaging capabilities for the 3D-IC era. Siemens EDA is providing nepes with tools to tackle the broad range of complex thermal, mechanical, and other challenges associated with developing advanced 3D-IC packages.

Figure 3 Innovator3D IC software delivers a fast, predictable path for the planning and heterogeneous integration of ASICs and chiplets using 2.5D and 3D packaging technologies. Source: Siemens EDA

Siemens EDA’s Innovator3D IC toolset shown above uses a hierarchical device planning approach to handle the massive complexity of advanced 2.5D/3D integrated designs with millions of pins. Here, designs are represented as geometrically partitioned regions with attributes controlling elaboration and implementation methods. That, in turn, allows critical updates to be quickly implemented while matching analytic techniques to specific regions, avoiding excessively long execution times.

Meanwhile, new materials and manufacturing processes will continue to be developed to confront the challenges facing 2.5D and 3D packaging. Perhaps another update before Christmas will provide greater clarity on where advanced packaging technology stands in 2024 and beyond.

Related Content


The post Will 2024 be the year of advanced packaging? appeared first on EDN.

Mighty 555 and ESR-meter

Tue, 09/03/2024 - 16:24

Let’s see how you can effectively double the output sink current of the plain old 555 timer. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

From the block diagram in Figure 1 (taken from the datasheet of ST’s TS555 low-power single CMOS timer), we can see that the Discharge pin (pin 7) mirrors the Output pin (pin 3). In reality, they are only in the “Low” state at the same time. This differs for the “High” state, where the Output pin can source current while the Discharge pin is open drain (or open collector for older 555s).

Figure 1: Block diagram of the TS555 low-power single CMOS timer. Source: STMicroelectronics

The circuit in Figure 2 combines the sink currents of both the Output and the Discharge pins, which allows us to double the output current. Resistors R3 and R4 are part of the load; they limit the sink current to a safe value.

Figure 2: Circuit that combines the sink currents of the Output and Discharge pin of the TS555, doubling the output current.

The price for this doubling is some accuracy degradation: the circuit is now a bit more susceptible to supply-voltage variations. Nevertheless, this accuracy penalty is a satisfactory tradeoff for many applications.

Now, let’s put this beefed-up 555 circuit to use. Measuring a capacitor’s equivalent series resistance (ESR) can be a problem since the ESR can be very low, on the order of tens of milliohms; hence the test current must be sufficiently high to measure it reliably. An application circuit for this is shown in Figure 3.

Figure 3: Application circuit for measuring the ESR of a capacitor using the concept introduced in Figure 2.

The circuit produces short (less than 1 µs) current pulses through the capacitor Cx with a period of about 10 µs; the voltage drop across the capacitor (Vesr) is proportional to its ESR. So, by comparing this voltage drop with the voltage (V) across R3 and Cx, you can calculate the ESR:

r = R3 × Vesr / (2 × (V – Vesr)),

or you can simply select the capacitor with the lowest ESR amongst several candidates.
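Here's the same calculation expressed in code, using the relationship above; the component and voltage values are illustrative only, not taken from the actual design.

# ESR from R3, the voltage V across R3 + Cx, and the drop Vesr across Cx,
# per the relationship given above. Example values are illustrative only.
def esr_ohms(r3_ohms: float, v_total: float, v_esr: float) -> float:
    return r3_ohms * v_esr / (2.0 * (v_total - v_esr))

# Example: R3 = 10 ohms, V = 1.0 V, Vesr = 10 mV
print(f"ESR = {esr_ohms(10.0, 1.0, 0.010) * 1000:.1f} milliohms")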

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

 Related Content


The post Mighty 555 and ESR-meter appeared first on EDN.

Peering inside a Pulse Oximeter

Mon, 09/02/2024 - 16:53

My longstanding streak of not being infected by COVID-19 (knowingly, at least…there’s always the asymptomatic possibility) came to an end earlier this year, alas, doubly-unfortunately timed to coincide with the July 4th holiday weekend:

I’m guessing I caught one of the latest FLiRT variants, which are reportedly adept at evading vaccines (I’m fully boosted through the fall 2023 sequence). Thankfully, my discomfort was modest, at its worst lasting only a few days, and I was testing negative again within a week:

although several weeks later I still sometimes feel like I’ve got razor blades stuck in my throat.

One upside, for lack of a better word, to my health setback is that it finally prompted me to put into motion a longstanding plan to do a few pandemic-themed teardowns. Today’s victim, for example, is a pulse oximeter which I’d actually bought from an eBay seller (listed as a “FDA Finger tip Pulse Oximeter Blood Oxygen meter O2 SpO2 Heart Rate Monitor US”) a year prior to COVID-19’s surge, in late April 2019, for $11.49 as a sleep apnea monitoring aid. A year later, on the other hand…well, I’ll just quote from a writeup published by Yale Medicine in May 2020:

According to Consumer Reports, prices for pulse oximeters range from $25 to $100, if you can find one, as shortages have been reported.

This unit, a Volmate VOL60A, recently began acting wonky, sometimes not delivering definitive results at all and other times displaying data that I knew undershot reality. So, since prices have retracted to normalcy ($5 with free shipping, in this particular case, believe it or not), I’ve replaced it. Therein today’s dissection, which I’ll as-usual kick off with a series of box shots:

Let’s dive inside. The plastic tray houses our patient alongside a nifty protective case:

Underneath the tray is some literature:

The user manual is surprisingly (at least to me) thorough and informative, but I can’t find it online (the manufacturer seems to no longer be in business, judging from the “dead” website), so I’ve scanned and converted it to PDF. You can access it here.

And there’s one more sliver of paper under the case (which also contains a lanyard):

Here’s the guest of honor, as usual alongside a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the VOL60A has dimensions of 62 x 35 x 31 mm and weighs 60 g including batteries):

Before cracking the unit open, and speaking of batteries, I thought I’d pop a couple of AAAs in it so you can see it in action. Here’s the sequence-of-two powerup display cadence, initiated by a press of the grey button at the bottom:

Unless a finger is preinserted in the pulse oximeter prior to powerup, the display (and broader device) will go back to sleep after a couple of seconds. Conversely, with a finger already in place:

As you can see, it measures both oxygen saturation (SpO2), displayed at the top, and pulse rate below. Good news: my actual oxygen saturation is not as low as the displayed 75%, which had it been true would have me in the hospital if not (shortly thereafter) the morgue. Bad news: my actual resting pulse rate is not as low as 28 bpm, which if true would mean I was very fit (not to mention at lower elevation than my usual 7,500’ residence location)…or conversely, I suppose, might also have me in the hospital if not (shortly thereafter) the morgue. Like I said, this unit is now acting wonky, sometimes (like this time) displaying data that I know undershoots reality.

Let’s next flip it over on its back:

The removable battery “door” is obvious. But what I want to focus in on are the labels, particularly the diminutive bright yellow one:

Here’s what it says:

AVOID EXPOSURE

LASER RADIATION IS EMITTED FROM THIS APERTURE

LED Wavelengths

Red: wavelength 660 ± 2 nm, radiant power 1.5 mW
IR: wavelength 940 ± 10 nm, radiant power 2.0 mW

I showcase this label because it conveniently gives me an excuse to briefly detour for a quick tutorial on how pulse oximeters work. This particular unit is an example of the most common technique, known as transmissive pulse oximetry. In this approach, quoting Wikipedia:

One side of a thin part of the patient’s body, usually a fingertip or earlobe, is illuminated, and the photodetector is on the other side…other convenient sites include an infant’s foot or an unconscious patient’s cheek or tongue.

The “illumination” mentioned in the quote is dual frequency in nature, as the label suggests:

More from Wikipedia:

Absorption of light at these wavelengths differs significantly between blood loaded with oxygen and blood lacking oxygen. Oxygenated hemoglobin absorbs more infrared light and allows more red light to pass through. Deoxygenated hemoglobin allows more infrared light to pass through and absorbs more red light. The LEDs sequence through their cycle of one on, then the other, then both off about thirty times per second which allows the photodiode to respond to the red and infrared light separately and also adjust for the ambient light baseline.

Here’s what the dual-LED emitter structure looks like in action in the VOL60A; perhaps obviously, the IR transmitter isn’t visible to the naked eye (and my smartphone’s camera also unsurprisingly apparently has an IR filter ahead of the image sensor):

Note that in this design implementation, the LEDs are on the bottom half of the pulse oximeter, with their illumination shining upward through the fingertip and exiting via the fingernail to the photodetector above it. This is different than the conceptual image shown earlier from Wikipedia, which locates the LEDs at the top and the photodetector at the bottom (and ironically matches the locations shown in the conceptual image in the VOL60A user manual!).

Note, too, that the Wikipedia diagram shows a common photodetector for both LED transmitters. I’ll shortly show you the photodetector in this design, which I believe has an identical structure. That said, other conceptual diagrams, such as the one shown here:

have two photodetectors (called “sensors” in this case), one for each LED (IR and red).

In the interest of wordcount efficiency, I won’t dive deep into the background theory and implementation arithmetic that enable the pulse oximeter to ascertain both oxygen saturation and pulse rate. If you’d like to follow in my research footsteps, Google searches on terms and phrases such as pulse oximeter, pulse oximetry and pulse oximeter operation will likely prove fruitful. In addition to the earlier mentioned Wikipedia entry, two other resources I can also specifically recommend come from the University of Iowa and How Equipment Works.
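That said, for a rough flavor of the arithmetic, here's a heavily simplified sketch of the commonly described “ratio of ratios” calculation. The sample values and the linear calibration below are textbook-style illustrations, not this device's actual algorithm.

# Simplified "ratio of ratios" estimate used in transmissive pulse oximetry.
# Not the actual algorithm or calibration of the VOL60A (or any real device).
def spo2_estimate(red_ac, red_dc, ir_ac, ir_dc):
    # Normalize each channel's pulsatile (AC) component by its baseline (DC),
    # then take the red/IR ratio of ratios, R.
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    # Commonly cited empirical linearization; real devices use calibration
    # curves derived from clinical data.
    return 110.0 - 25.0 * r

# Illustrative sample values for the two photodetector signal components:
print(f"Estimated SpO2: {spo2_estimate(0.02, 1.0, 0.03, 1.2):.1f} %")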

What I will say a few more words about involves the inherent variability of a pulse oximeter’s results and the root causes of this inconsistency, as well as what might have gone awry with my particular unit. These root-cause variables include the amount and density of fat, muscle, skin, and bone in the finger; any calluses or scarring of the fingertip; whether the user is unduly cold at the time of device operation; and the amount and composition of any fingernail polish. While, as Wikipedia notes:

Taking advantage of the pulsatile flow of arterial blood, it [the pulse oximeter] measures the change in absorbance over the course of a cardiac cycle, allowing it to determine the absorbance due to arterial blood alone, excluding unchanging absorbance [due to the above variables].

Those sample-to-sample unchanging variables can still affect the baseline measurement assumptions, and therefore the broader finger-to-finger, user-to-user, and test-to-test results.

And in my particular case, while I don’t think anything went wonky with the arithmetic done on the sensed data, the data itself is suspect in my mind. Note, for example, that oxygenated-blood assessment is disproportionately reliant on successful passage of red visible-spectrum light. If the red LED has gone dim for some reason, if its emission wavelength has drifted from its original 660-nm center point, and/or if the photosensor is no longer as sensitive to red light as it once was, the pulse oximeter would then deliver lower-than-accurate oxygen saturation results.

Tutorial over, let’s get back to tearing down. Here are left- and right-side views, both with the front and back halves of the device “closed”:

and “open”, i.e., expanded as would be the case when the finger is inserted in-between them:

What I’m about to say might shock my fellow electrical engineers reading these words, but frankly one of the most intriguing aspects of this design (maybe the most) is mechanical in nature; the robust hinge-and-spring structure at the top, supporting both linear expansion and pivot rotation, that dynamically adapts to both finger insertion and removal and various finger dimensions while still firmly clinging to the finger during measurement cycles. You can see more of its capabilities in these top views; note, too, the flex cable interconnecting the two halves:

And, last but not least, here’s a bottom-end perspective of the device:

Accessing the backside battery compartment reveals two tempting screw candidates:

You know what comes next, right?

A couple of retaining tabs also still need to be “popped”:

And voila, our first disassembly step is complete:

As you’ll see, I’ve already begun to displace the slim PCB in the center from its surroundings:

Let’s next finish the job:

This closeup showcases the two transmission LEDs, one red and the other IR, with the cluster protected from the elements by a clear rectangular plastic structure; they shine through the back-half “window” shown in the previous shot and onto the underside of the user’s fingertip:

Chronologically jumping ahead briefly, here’s a post-teardown re-enactment of what it looks like temporarily back in place (and this time not illuminated):

And here’s another view of that flex PCB, which (perhaps obviously) routes both power and the LEDs’ output signals to (presumably) processing circuitry in the pulse oximeter’s front half:

Speaking of which, let’s try getting inside that front half next. In previous photos, you may have already noticed two holes at the top of the device, along with one toward the top on each side. They’re for, I believe, passive ventilation purposes, to remove heat generated by internal circuitry. But there are two more, this time with visible screw heads within them, potentially providing a pathway to the front-half insides:

Yep, you guessed it:

Again, the spudger comes through in helping complete the task:

The display dominates the landscape on this half of the PCB, along with the switch at bottom:

But I bet you already saw the two screws at the bottom, on either side of the switch, right?

With them removed, we can lift the PCB away from the chassis, exposing its back for inspection:

The large IC at the top (bottom of the PCB when installed) is the STMicroelectronics-supplied system “brains”. Specifically, it’s an STM32F100C8T6B Arm Cortex-M3-based microcontroller also containing 32 KBytes of integrated flash memory. And below it, in the center, is the three-lead photosensor, surrounded by translucent plastic seemingly for both protective and lens-focusing functions. In the previous photo, you’ll see the plastic “window” in the chassis that it normally mates with. And, in closing, here’s another after-the-fact re-assembly reenactment:

Note, too, the “felt” lining, this time on the upper half, presumably there to preclude nail-polish damage. Your thoughts on this or anything else in this piece are, as always, welcome in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Peering inside a Pulse Oximeter appeared first on EDN.

Antenna subsystem employs beamforming IC

Fri, 08/30/2024 - 16:18

Taoglas and MixComm are co-developing a 5G NR mmWave antenna subsystem that covers a frequency band of 26.5 GHz to 29.5 GHz. The Taoglas KHA16.24C smart antenna subsystem leverages MixComm’s Summit 2629 beamforming front-end IC. This subsystem enables the integration of 5G NR devices for infrastructure applications, such as small cells, repeaters, and customer-premise equipment.

The KHA16.24C features a 2D antenna array integrated into a multilayer PCB, encompassing RFICs and 16 antenna elements within a 53×84-mm footprint. The design includes layers for power optimization, thermal control, digital control, and RF feed lines. It is scalable, with the capability to support arrays of up to 1024 elements depending on the implementation.

MixComm’s Summit 2629 beamforming front-end IC integrates power amplifiers, low noise amplifiers, and an all-passive beamformer, optimized for 5G infrastructure. Its transmitter/receiver array consists of four elements, each capable of handling dual polarizations. Fabricated on GlobalFoundries’ 45RFSOI, the Summit 2629 includes on-chip temperature and power sensing.

“We are excited to showcase our advanced mmWave smart antenna subsystem together with MixComm,” said Dennis Kish, COO of Taoglas. “The 5G NR mmWave market is starting to emerge globally. Our high-performance and cost-competitive subsystem will help solidify a broader and faster deployment of the technology.”

Contact Taoglas for more information, to receive a quote, or order samples.

Taoglas



The post Antenna subsystem employs beamforming IC appeared first on EDN.

Hybrid module boosts solar power

Fri, 08/30/2024 - 16:18

onsemi’s Si/SiC hybrid power integrated module (PIM) increases the power output of utility-scale solar string inverters and energy storage systems. Compared to previous generations, the new F5BP-packaged PIM delivers greater power density and improved efficiency within the same footprint, raising the total system power of a solar inverter from 300 kW to 350 kW.

The F5BP-PIM is a flying capacitor boost module that pairs 1000-V, 500-A Field Stop 7 IGBTs with 1200-V, 120-A SiC diodes. FS7 IGBTs reduce turn-off losses and switching losses by up to 8%, while the SiC diodes enhance switching performance and decrease voltage flicker by 15% compared to previous generations.

Featuring an optimized electrical layout and advanced Direct Bonded Copper (DBC) substrates, these modules minimize stray inductance and thermal resistance. A copper baseplate further reduces thermal resistance to the heat sink by 9.3%, ensuring effective cooling under high loads. This robust thermal management maintains efficiency and longevity, making the modules well-suited for demanding applications requiring reliable and sustained power delivery.

To learn more about the NXH500B100H7F5SHG F5BP-PIM, click here.

onsemi



The post Hybrid module boosts solar power appeared first on EDN.

Ideal diodes reduce power loss

Fri, 08/30/2024 - 16:18

Nexperia’s NID5100 and NID5100-Q100 ideal diodes provide a lower forward voltage drop than conventional diodes in power OR-ing applications. The NID5100 targets standard industrial and consumer applications, while the NID5100-Q100 is qualified for automotive use.

These PMOS-based devices integrate a MOSFET that regulates the anode-to-cathode voltage to be 8 to 10 times lower than that of similarly rated Schottky diodes. Additionally, the ideal diodes reduce reverse DC leakage current by up to 100 times compared to typical Schottky diodes.

In addition to automatic transitioning between OR-ed power supplies, Nexperia’s ideal diodes provide forward voltage regulation with a typical value of 31 mV and can handle forward currents up to 1.5 A. They operate over a voltage range of 1.2 V to 5.5 V with low current consumption. At 3.3 VIN, shutdown current is just 170 nA, and quiescent current is 240 nA. The devices also feature reverse voltage protection with an absolute maximum rating of -6 V.
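For a rough feel of what the regulated 31-mV drop means for dissipation, here's a back-of-the-envelope comparison at the 1.5-A rated current; the 0.3-V Schottky figure is an illustrative ballpark, not taken from any specific datasheet.

# Conduction-loss comparison: ideal-diode controller vs. a Schottky diode.
i_load = 1.5                      # A, the device's rated forward current

p_ideal    = 0.031 * i_load       # ~31-mV regulated forward drop
p_schottky = 0.3 * i_load         # illustrative Schottky forward voltage

print(f"Ideal diode loss: {p_ideal * 1000:.0f} mW")
print(f"Schottky loss:    {p_schottky * 1000:.0f} mW")
print(f"Improvement:      {p_schottky / p_ideal:.1f}x")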

The NID5100 and NID5100-Q100 are supplied in small TSSP6/SOT363-2 leaded plastic packages with dimensions of 2.1×1.25×0.95 mm. They can be purchased through Nexperia’s distributor network.

NID5100 product page

Nexperia



The post Ideal diodes reduce power loss appeared first on EDN.

Miniature MLCCs maintain high stability

Fri, 08/30/2024 - 16:18

MLCCs in Kyocera AVX’s KGU series use a Class 1 C0G (NP0) ceramic dielectric, ensuring stable operation across a wide temperature range. Offered in four miniature chip sizes, these capacitors have a temperature coefficient of capacitance (TCC) of 0 ±30 ppm/°C and exhibit virtually no voltage coefficient.

KGU series MLCCs come in EIA 01005, 0402, 0603, and 0805 chip sizes, with rated voltages ranging from 16 V to 250 V and capacitances from 0.1 pF to 100 pF. These components offer tolerances as tight as ±0.05 pF and operate across a temperature range of -40°C to +125°C. According to the manufacturer, the KGU parts also provide ultra-low ESR, high power, high Q, and self-resonant frequencies.
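To put the ±30 ppm/°C figure in perspective, here's a quick worst-case drift calculation for an illustrative 100-pF part over the rated temperature range:

# Worst-case capacitance drift for a Class 1 C0G part with a TCC of
# 0 +/-30 ppm/C, referenced to a +25 C baseline. 100 pF is an example value.
c_25c_pf = 100.0
tcc_ppm_per_c = 30.0   # worst-case magnitude of the TCC window

for temp_c in (-40.0, 125.0):
    delta_t = abs(temp_c - 25.0)
    drift_pf = c_25c_pf * tcc_ppm_per_c * delta_t * 1e-6
    print(f"At {temp_c:+.0f} C: worst-case drift = +/-{drift_pf:.3f} pF "
          f"({drift_pf / c_25c_pf * 100:.2f}% of nominal)")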

Optimized for communications, these capacitors are suitable for filter networks, high-Q frequency sources, coupling, and DC blocking circuits. They can be used in cellular base stations, Wi-Fi networks, wireless devices, as well as broadband wireless, satellite communications, and public safety radio systems.

KGU series capacitors are available through Kyocera AVX’s distributor network, including DigiKey, Mouser, and Richardson RFPD.

KGU series product page

Kyocera AVX 



The post Miniature MLCCs maintain high stability appeared first on EDN.

Automotive LDO packs watchdog timer

Fri, 08/30/2024 - 16:17

Nisshinbo’s NP4271 LDO regulator features a high-precision watchdog timer and reset functions through window-type output voltage monitoring. Designed for automotive functional safety, the series meets the need for external MCU monitoring and reliable voltage-based reset functions in electronic control units (ECUs).

The LDO operates across a broad input voltage range of 4.0 V to 40 V and offers two output voltage options of 3.3 V or 5.0 V. Output voltage is accurate to within ±2.0% over a range of conditions, including input voltages from 6 V to 40 V, load currents from 5 mA to 500 mA, and temperatures ranging from -40°C to +125°C.

Two reset function options are available based on output voltage monitoring. Version A monitors both the low and high sides, while Version B monitors only the low side. Detection voltage accuracy is ±2.0% for the low side and ±5.0% for the high side, across the full temperature range. Additionally, the NP4271 provides high timing accuracy for both watchdog timer monitoring and reset times.

The NP4271 automotive LDO regulator is available through Nisshinbo authorized distributors, including DigiKey and Mouser.

NP4271 product page

Nisshinbo Micro Devices 



The post Automotive LDO packs watchdog timer appeared first on EDN.

PQC algorithms: Security of the future is ready for the present

Fri, 08/30/2024 - 15:42

Quantum computing technology is developing rapidly, promising to solve many of society’s most intractable problems. However, as researchers race to build quantum computers that would operate in radically different ways from ordinary computers, some experts predict that quantum computers could break the current encryption that provides security and privacy for just about everything we do online.

Encryption—which protects countless electronic secrets, such as the contents of email messages, medical records, and photo libraries—carries a heavy load in modern digitized society. It does that by encrypting data sent across public computer networks so that it’s unreadable to all but the sender and intended recipient.

However, far more powerful quantum computers would be able to break the traditional public-key cryptographic algorithms, such as RSA and elliptic curve cryptography, that we use in our everyday lives. So, the need to secure the quantum future has unleashed a new wave of cryptographic innovation, making post-quantum cryptography (PQC) a new cybersecurity benchmark.

Enter the National Institute of Standards and Technology (NIST), the U.S. agency that has rallied the world’s cryptography experts to conceive, submit, and then evaluate cryptographic algorithms that could resist the assault of quantum computers. NIST started the PQC standardization process back in 2016 by seeking ideas from cryptographers and then asked them for additional algorithms in 2022.

Three PQC standards

On 13 August 2024, NIST announced the completion of three standards as primary tools for general encryption and protecting digital signatures. “We encourage system administrators to start integrating them into their systems immediately, because full integration will take time,” said Dustin Moody, NIST mathematician and the head of the PQC standardization project.

Figure 1 The new PQC standards are designed for two essential tasks: general encryption to protect information exchanged across a public network and digital signatures for identity authentication. Source: NIST

Federal Information Processing Standard (FIPS) 203, primarily tasked for encryption, features smaller encryption keys that two parties can exchange easily at a faster speed. FIPS 203 is based on the CRYSTALS-Kyber algorithm, which has been renamed ML-KEM, short for Module-Lattice-Based Key-Encapsulation Mechanism.
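For readers unfamiliar with key-encapsulation mechanisms, here's a toy sketch of the keygen/encapsulate/decapsulate flow that ML-KEM standardizes. The functions below use random bytes and a trivial XOR purely to show how the three steps fit together; they are not real cryptography and bear no relation to ML-KEM's actual lattice math.

# Toy illustration of the KEM usage pattern (NOT real cryptography).
import os

def keygen():
    sk = os.urandom(32)   # toy secret key
    pk = sk               # toy "public" key; a real KEM's keys differ, of course
    return pk, sk

def encapsulate(pk):
    shared = os.urandom(32)                                # toy shared secret
    ciphertext = bytes(a ^ b for a, b in zip(shared, pk))  # toy "ciphertext"
    return ciphertext, shared

def decapsulate(sk, ciphertext):
    return bytes(a ^ b for a, b in zip(ciphertext, sk))    # recover toy secret

pk, sk = keygen()                   # receiver generates and publishes pk
ct, ss_sender = encapsulate(pk)     # sender derives a shared secret + ciphertext
ss_receiver = decapsulate(sk, ct)   # receiver recovers the same shared secret
assert ss_sender == ss_receiver
# The shared secret would then key a symmetric cipher (e.g., AES-GCM) for bulk data.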

FIPS 204, primarily designed for protecting digital signatures, uses the CRYSTALS-Dilithium algorithm, which has been renamed ML-DSA, short for Module-Lattice-Based Digital Signature Algorithm. FIPS 205, also intended for digital signatures, employs the Sphincs+ algorithm, which has been renamed SLH-DSA, short for Stateless Hash-Based Digital Signature Algorithm.

PQC standards implementation

Xiphera, a supplier of cryptographic IP cores, has already started updating its xQlave family of security IPs by incorporating ML-KEM (Kyber) for key encapsulation mechanism and ML-DSA (Dilithium) for digital signatures according to the final versions of the NIST standards.

“We are updating our xQlave PQC IP cores within Q3 of 2024 to comply with these final standard versions,” said Kimmo Järvinen, co-founder and CTO of Xiphera. “The update will be minor, as we already support earlier versions of the algorithms in xQlave products as of 2023 and have been following very carefully the standardisation progress and related discussions within the cryptographic community.”

Xiphera has also incorporated a quantum-resistant secure boot in its nQrux family of hardware trust engines. The nQrux secure boot is based on pure digital logic and does not include any hidden software components, which bolsters security and ensures easier validation and certification.

The nQrux secure boot uses a hybrid signature scheme comprising Elliptic Curve Digital Signature Algorithm (ECDSA), a traditional scheme, and the new quantum-secure signature scheme, ML-DSA, both standardized by NIST. The solution will ensure system security even if quantum computers break ECDSA, or if a weakness is identified in the new ML-DSA standard.
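The acceptance logic of such a hybrid scheme is simple to express. Here's a minimal sketch; the verifier callables are stand-ins for real ECDSA and ML-DSA library calls, which are not shown.

# A boot image is accepted only if BOTH the classical (ECDSA) signature and
# the post-quantum (ML-DSA) signature verify.
from typing import Callable

def hybrid_verify(image: bytes,
                  ecdsa_verify: Callable[[bytes], bool],
                  mldsa_verify: Callable[[bytes], bool]) -> bool:
    return ecdsa_verify(image) and mldsa_verify(image)

# Toy usage with stand-in verifiers (real code would call crypto libraries):
ok = hybrid_verify(b"firmware-image",
                   ecdsa_verify=lambda img: True,
                   mldsa_verify=lambda img: True)
print("Boot allowed" if ok else "Boot rejected")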

Figure 2 The hybrid system combines a classical cryptographic algorithm with a new quantum-secure signature scheme. Source: Xiphera

The nQrux secure boot, a process node agnostic IP core, can be easily integrated across FPGA and ASIC architectures. Xiphera plans to make this IP core available for customer evaluations in the fourth quarter of 2024.

PQC standards in RISC-V

Next, RISC-V processor IP supplier SiFive has teamed up with quantum-safe cryptography provider PQShield to accelerate the adoption of NIST’s PQC standards on RISC-V technologies. This will allow designers leveraging SiFive’s RISC-V processors to build chips that comply with NIST’s recently published PQC standards.

SiFive will integrate PQShield’s PQPlatform-CoPro security IP in its RISC-V processors to establish a quantum-resistant hardware root-of-trust and thus build a foundation of a secure system. “This collaboration ensures that designers of RISC-V vector extensions will be working with the latest generation of cybersecurity,” said Yann Loisel, principal security architect at SiFive.

Figure 3 PQPlatform-CoPro adds post-quantum cryptography (PQC) to a security sub-system. Source: PQShield

The partnership will also allow PQShield’s cryptographic libraries to utilize RISC-V vector extensions for the first time. On the other hand, RISC-V processors will incorporate a brand-new security technology with a greater level of protection and trust.

No wait for backup standards

Powerful quantum computers are soon expected to be able to easily crack the current encryption standards used to protect software and hardware applications. So, as the above announcements show, hardware and software makers are starting to migrate their semiconductor products to PQC technologies in line with NIST’s new standards for post-quantum cryptography.

While NIST continues to evaluate two other sets of algorithms that could one day serve as backup standards, NIST’s Moody says there is no need to wait for future standards. “Go ahead and start using these three. We need to be prepared in case of an attack that defeats the algorithms in these three standards, and we will continue working on backup plans to keep our data safe. But for most applications, these new standards are the main event.”

It’s important to note that while these PQC algorithms are implemented on traditional computational platforms, they can withstand both traditional and quantum attacks. That’s a vital consideration for long-lifecycle applications in automotive and industrial designs.

Moreover, the landscape of cryptography and cybersecurity will continue shifting amid the ascent of powerful quantum computers capable of breaking the traditional public-key cryptographic algorithms. That poses an imminent threat to the security foundations of global networks and data infrastructures.

Related Content


The post PQC algorithms: Security of the future is ready for the present appeared first on EDN.

Beaming solar power to Earth: feasible or fantasy?

Thu, 08/29/2024 - 14:00

It’s always interesting when we are presented with very different and knowledgeable perspectives about the feasibility of a proposed technological advance. I recently had this experience when I saw two sets of articles about the same highly advanced concept within a short time window, but with completely different assessments of their viability.

In this case, the concept is simple and has been around for a long time in science fiction and speculative stories: capture gigawatts of solar energy using orbiting structures (I hesitate to call them satellites) and then beam that energy down to Earth.

The concept has been written about for decades, is simple to describe in principle, and appears to offer many benefits with few downsides. In brief, the plan is to use huge solar panels to intercept some of the vast solar energy impinging on Earth, convert it to electricity, and then beam the resultant electrical energy to ground-based stations from where it could be distributed to users. In theory, this would be a nearly environmentally “painless” source of free energy. What’s not to like?

It’s actually more than just an “on paper” or speculative concept. There are several serious projects underway, including one at the California Institute of Technology (Caltech), which is building a very small-scale version of some of the needed components. They have been performing ground-based tests and even launched some elements into orbit for in-space evaluation in January 2023 (“In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space”). The Wall Street Journal even had an upbeat article about it, “Beaming Solar Energy From Space Gets a Step Closer”.

There are many technical issues to be resolved in the real world (actually, they are “out of this world” issues) before this can happen. Note that the Caltech project is funded thus far by a $100 million grant, all from a single benefactor.

The Caltech Space Solar Power Project launched their Space Solar Power Demonstrator (SSPD) to test several key components of an ambitious plan to harvest solar power in space and beam the energy back to Earth. In brief, it consists of three main experiments, each tasked with testing a different key technology of the project, Figure 1.

Figure 1 Caltech’s Space Solar Power Demonstrator from their Space Solar Power Project has three key subsystems, encompassing structure, solar cells, and power transfer. Source: Caltech

The three segments are:

  • Deployable on-Orbit ultraLight Composite Experiment (DOLCE): A structure measuring 6 feet by 6 feet that demonstrates the architecture, packaging scheme and deployment mechanisms of the modular spacecraft that would eventually make up a kilometer-scale constellation forming a power station, Figure 2;

Figure 2 Engineers carefully lower the DOLCE portion of the Space Solar Power Demonstrator onto the Vigoride spacecraft built by Momentus. Source: Caltech

  • ALBA: A collection of 32 different types of photovoltaic (PV) cells, to enable an assessment of the types of cells that are the most effective in the punishing environment of space;
  • Microwave Array for Power-transfer Low-orbit Experiment (MAPLE): An array of flexible lightweight microwave power transmitters with precise timing control focusing the power selectively on two different receivers to demonstrate wireless power transmission at distance in space.

Scaling a demonstration unit up to useable size is a major undertaking. The researchers envision the system as being designed and built as a highly modular, building-block architecture. Each spacecraft will carry a square-shaped membrane measuring roughly 200 feet on each side. The membrane is made up of hundreds or thousands of smaller units which have PV cells embedded on one side and a microwave transmitter on the other.

Each spacecraft would operate and maneuver in space on its own but also possess the ability to hover in formation and configure an orbiting power station spanning several kilometers with the potential to produce about 1.5 gigawatts of continuous power. A phased-array antenna would aim the 10-GHz power beam to a surface zone about five kilometers in diameter.
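A quick back-of-the-envelope calculation gives a feel for the resulting power density at the ground, assuming idealized uniform illumination (a real beam profile would not be uniform):

# Average power density if 1.5 GW is spread uniformly across a ground zone
# five kilometers in diameter (idealized; for a rough sense of scale only).
import math

power_w = 1.5e9
zone_diameter_m = 5_000.0

area_m2 = math.pi * (zone_diameter_m / 2) ** 2
print(f"Receiver zone area: {area_m2 / 1e6:.1f} km^2")
print(f"Power density:      {power_w / area_m2:.0f} W/m^2")

That works out to roughly 75 W/m², well below the roughly 1,000 W/m² of peak natural sunlight at the surface.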

The concept is certainly ambitious. Perhaps most challenging is the very harsh reality that scaling up power-related projects from a small-scale, bench-size demonstration unit to a full-scale functioning system is a highly nonlinear process. This applies to battery storage systems, solar and wind energy harvesting, and other sources.

Experience shows that there’s an exponential increase in difficulties and issues as physical size and power levels grow; the only question is “what is that exponent value?” Still, the concept makes sense and seems so straightforward; we just have to keep moving the technology along and we’ll get there, right?

I was almost convinced, but then I saw a strong counterargument in an article in the June 2024 issue of IEEE Spectrum (“A Skeptic’s Take on Beaming Power to Earth from Space”). The article’s author, Henri Barde, joined the European Space Agency in 2007 and served as head of the power systems, electromagnetic compatibility, and space environment division until his retirement in 2017; he has worked in the space industry for nearly 30 years and has reality-based insight.

He looked at various proposed and distinctly different approaches to capturing and beaming the power, including CASSIOPeiA from Space Solar Holdings Group; SPS-ALPHA Mark-III from a former NASA physicist; Solar Power Satellite from Thales Alenia Space; and MR-SPS from the China Academy of Space Technology (there’s a brief mention of the Caltech project as well).

He discusses key attributes, presumed benefits, and most importantly, the real obstacles to success, as well as the dollar and technical costs of overcoming those obstacles—assuming they can be overcome. These include the hundreds, if not thousands, of launches needed to get everything “up there”; the need for robotic in-space assembly and repair; fuel for station-keeping at the desired low earth orbit (LEO), medium earth orbit (MEO), or geostationary orbit (GEO); temperature extremes (there will be periods when satellites are in the dark) and the associated flexing; impacts from thousands of micrometeorites; electronic components capable of handling megawatts in space (none of which presently exist); and many more.

His conclusion is simple: it’s a major waste of resources that could be better spent on improved renewable power sources, storage, and grid on Earth. The problem he points out is that beamed solar power is such an enticing concept. It’s so elegant in concept and seems to solve the energy problem so cleanly and crisply, once you figure it out.

So now I am perplexed. The sobering reality described in Barde’s “downer” article wiped out the enthusiasm I was developing for these projects such as the one at Caltech. At some point, the $100 million seed money (and similar at other projects) will need to be supplemented by more money, and lots of it (easily, trillions), to take any of these ideas to their conclusion, while there will be substantial risk.

Is beamed solar power one of those attractive ideas that is actually impractical, impossible, too risky, and too costly when it meets reality of physics, electronics, space, and more? Do we need to keep pushing it to see where it can take us?

Or will the spigot of money as well as the personal energy of its proponents eventually dry up, since it is not a project that you can do part way? After all, with a project like this one, you’re either all in or you are all out.

I know that when it comes to the paths that technology advances take, you should “never say never.” So, check back in a few decades, and we’ll see where things stand.

Related Content

References


The post Beaming solar power to Earth: feasible or fantasy? appeared first on EDN.

USB 3: How did it end up being so messy?

Wed, 08/28/2024 - 16:03

After this blog post’s proposed topic had already been approved, but shortly before I started to write, I realized I’d recently wasted a chunk of money. I’m going to try to not let that reality “color” the content and conclusions, but hey, I’m only human…

Some background: as regular readers may recall, I recently transitioned from a Microsoft Surface Pro 5 (SP5) hybrid tablet/laptop computer:

to a Surface Pro 7+ (SP7+) successor:

Both computer generations include a right-side USB-A port; the newer model migrates from a Mini DisplayPort connector on that same side (and above the USB-A connector) to a faster and more capable USB-C replacement.

Before continuing with my tale, a review: as I previously discussed in detail six years ago (time flies when you’re having fun), bandwidth and other signaling details documented in the generational USB 1.0, USB 2.0, USB 3.x and still embryonic USB4 specifications are largely decoupled from the connectors and other physical details in the USB-A, USB-B, mini-USB and micro-USB, and latest-and-greatest USB-C (formally: USB Type-C) specs.

The signaling and physical specs aren’t completely decoupled, mind you; some USB speeds are only implemented by a subset of the available connectors, for example (I’ll cover one case study here in a bit). But the general differentiation remains true and is important to keep in mind.

Back to my story. In early June, EDN published my disassembly of a misbehaving (on MacOS, at least) USB flash drive. The manufacturer had made the following performance potential claims:

USB 3.2 High-Speed Transmission Interface

 Now there is no reason to shy away from the higher cost of the USB 3.2 Gen 1 interface. The UV128 USB flash drive brings the convenience and speed of premium USB drives to budget-minded consumers.

However, benchmarking showed that it came nowhere close to 5 Gbps baseline USB 3.x transfer rates, far from the even faster 10 and 20 Gbps speeds documented in newer spec versions:

What I didn’t tell you at the time was that the results I shared were from my second benchmark test suite run-through. The first time I ran Blackmagic Design’s Disk Speed Test, I had connected the flash drive to the computer via an inexpensive (sub-$5 inexpensive, to be exact) multi-port USB 3.0 hub intermediary.

The benchmark suite ran ridiculously slowly that first time: in retrospect, I wish I had grabbed a screenshot then, too. In trying to figure out what had happened, I noticed (after doing a bunch of research; why Microsoft obscures this particular detail is beyond me) that the system’s USB-C interface was specified at USB 3.2 Gen 2 10 Gbps speeds. Here’s the point where I then over-extrapolated; I assumed (incorrectly, in retrospect) that the USB-A port was managed by the same controller circuitry and was therefore capable of 10 Gbps speeds, too. And indeed, direct-connecting the flash drive to the system’s USB-A port delivered (modestly) faster results:

But since this system only includes a single integrated USB-A port, I’d still need an external hub for ongoing use. So, I dropped (here’s the “wasted a chunk of money” bit) $40 each, nearly a 10x price increase over those inexpensive USB 3.0 hubs I mentioned earlier, on the only 10 Gbps USB-A hub I could find, Orico’s M3H4-G2:

I bought three of them, actually: one for the SP7+, one for my 2018 Mac mini, and the third for my M1 Max Mac Studio. All three systems spec 10 Gbps USB-C ports; those in the latter two systems do double duty with 40 Gbps Thunderbolt 3 or 4 capabilities. Unlike its humble Idsonix precursor, the Orico M3H4-G2 isn’t powered over the USB connection; I had to provide it with external power for it to function, but at least Orico bundled a wall wart with it. And the M3H4-G2’s orange-dominant paint job was an…umm…“acquired taste”. But all in all, I was still feeling pretty pleased with my acquisition…

…until I went back and re-read that Microsoft-published piece, continuing a bit further in it than I had before, whereupon I found that the SP7+ USB-A port was only specified at 5 Gbps. A peek at the Device Manager report also revealed distinct entries for the USB-A and USB-C ports:

Unfortunately, my MakerHawk Makerfire USB tester only measures power, not bandwidth, so I’m going to need to depend on the Microsoft documentation as the definitive ruling.

And, of course, when I went back to the Mac mini and Mac Studio product sheets, buried in the fine print was indication that their USB-A ports were only 5 Gbps, too. Sigh.

So, what had happened the first time I tried running Blackmagic Design’s Disk Speed Test on the SP7+? My root-cause guess is a situation that I suspect at least some of you have also experienced: plug in a USB 3.x peripheral, and it incorrectly enumerates as a USB 1.0 or USB 2.0 device instead. Had I just ejected the flash drive from the USB 3.0 hub, reinserted it, and re-run the benchmarks, I suspect I would have ended up with the exact same result I got from plugging it directly into the computer, saving myself $120 plus tax in the process. Bitter? Who, me?

Here’s another thought you might now be having: why does the Orico M3H4-G2 exist at all? Good question. To be clear, USB-A optionally supports 10 Gbps USB 3 speeds, as does USB-C; the only USB-C-specific speed bin is 20 Gbps (for similar reasons, USB4 is also USB-C-only from a physical implementation standpoint). But my subsequent research confirmed that my three computers weren’t aberrations; pretty much all computers, even the latest and greatest, both mobile and desktop, are 5 Gbps-only from a USB-A standpoint. Apparently, the suppliers have decided to focus their high-speed implementation attention solely on USB-C.

That said, I did find one add-in card, Startech’s PEXUSB311AC3, that implemented 10 Gbps USB-A:

I’m guessing there might also be the occasional motherboard out there that’s 10 Gbps USB-A-capable, too. You could theoretically connect the hub to a 10 Gbps USB-C system port via a USB-C-to-USB-A adapter, assuming the adapter can also do 10 Gbps bidirectional transfers (I haven’t yet found one that does). And of course, two 10 Gbps USB-A-capable peripherals, such as a couple of SSD storage devices, can theoretically interact with each other through the Orico hub at peak potential speeds. But suffice it to say that I now more clearly understand why the M3H4-G2 is one-of-a-kind and therefore pricey, both in an absolute sense and versus 5 Gbps-only hub alternatives.

1,000+ words in, what does all this have to do with the “Why is USB 3 so messy” premise of this piece? After all, the mistake was ultimately mine in incorrectly believing that my systems’ USB-A interfaces were capable of faster transfer speeds than reality afforded. The answer: go back and re-scan the post to this point. Look at both the prose and photos. You’ll find, for example:

  • A USB flash drive that’s variously described as being “USB 3.0” and with a “USB 3.2 Gen 1” interface and a “USB 3.2 High-Speed Transmission Interface”
  • An add-in card whose description includes both “10 Gbps” and “USB 3.2 Gen 2” phrases
  • And a multi-port hub that’s “USB 3.1”, “USB 3.1 Gen2” and “10Gbps Super Speed”, depending on where in the product page you look.

What I wrote back in 2018 remains valid:

USB 3.0, released in November 2008, is once again backwards compatible with USB 1.x and USB 2.0 from a transfer rate mode(s) standpoint. It broadens the pin count to a minimum of nine wires, with the additional four implementing the two differential data pairs (one transmitter, one receiver, for full duplex support) harnessed to support the new 5 Gbps SuperSpeed transfer mode. It’s subsequently been renamed USB 3.1 Gen 1, commensurate with the January 2013 announcement of USB 3.1 Gen 2, which increases the maximum data signaling rate to 10 Gbps (known as SuperSpeed+) along with reducing the encoding overhead via a protocol change from 8b/10b to 128b/132b.

 Even more recently, in the summer of 2017 to be exact, the USB 3.0 Promoter Group announced two additional USB 3 variants, to be documented in the v3.2 specification. They both leverage multi-lane operation over existing cable wires originally intended to support the Type-C connector’s rotational symmetry. USB 3.2 Gen 1×2 delivers a 10 Gbps SuperSpeed+ data rate over 2 lanes using 8b/10b encoding, while USB 3.2 Gen 2×2 combines 2 lanes and 128b/132b encoding to support 20 Gbps SuperSpeed+ data rates.
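As an aside, the line-encoding change matters for real-world throughput: 8b/10b carries 8 payload bits in every 10 bits on the wire, while 128b/132b carries 128 in every 132. A quick back-of-the-envelope calculation in C (protocol overhead above the line code is ignored here) shows why the newer encoding is worth having:

#include <stdio.h>

/* Payload throughput implied by the USB 3 line encodings mentioned above:
   8b/10b carries 8 useful bits per 10 line bits, 128b/132b carries 128 per
   132. Higher-level protocol overhead is ignored in this rough estimate. */
int main(void) {
    double gen1 = 5.0  * 8.0   / 10.0;    /* 5 Gbps lane, 8b/10b     */
    double gen2 = 10.0 * 128.0 / 132.0;   /* 10 Gbps lane, 128b/132b */
    printf("USB 3.x Gen 1: %.2f Gbps of payload on a 5 Gbps lane\n", gen1);
    printf("USB 3.x Gen 2: %.2f Gbps of payload on a 10 Gbps lane\n", gen2);
    return 0;
}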

But a mishmash of often incomplete and/or incorrect terminology, coupled with consumers’ instinctive interpretation that “larger numbers are better”, has severely muddied the waters as to what exactly a consumer is buying and therefore should expect to receive with a USB 3-based product. In fairness, the USB Implementers Forum would have been perfectly happy had its member companies and compatibility certifiers dispensed with the whole numbers-and-suffixes rigamarole and stuck with high-level labels instead (40 Gbps and 80 Gbps are USB4-specific).

That said:

  • 5 Gbps = USB 3.0, USB 3.1 Gen 1, and USB 3.2 Gen 1 (with “Gen 1” implying single-lane operation even in the absence of an “x” lane-count qualifier)
  • 10 Gbps = USB 3.1 Gen 2, USB 3.2 Gen 2 (with the absence of an “x” lane-count qualifier implying single-lane operation), and USB 3.2 Gen 2×1 (the more precise alternative)
  • 20 Gbps = USB 3.2 Gen 2×2 (only supported by USB-C).

So, what, for example, does “10 Gbps USB 3” mean? Is it a single-lane USB 3.1 device, with that one lane capable of 10 Gbps speed? Or is it a dual-lane USB 3.2 device with each lane capable of 5 Gbps speeds? Perhaps obviously, try to connect devices representing both these 10 Gbps implementations together and you’ll end up with…5 Gbps (cue sad trombone sound).
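For my own notes, I’ve taken to treating the marketing label, the spec-revision name, and the lane count plus per-lane rate as separate fields. A tiny lookup table (the struct and names here are mine, not anything from the USB-IF) makes the aliases easier to keep straight:

#include <stdio.h>

/* Illustrative mapping of USB 3.x spec names to lanes and per-lane rate.
   Struct and field names are this author's own, for clarity only. */
typedef struct {
    const char *spec_name;   /* name(s) used in the spec revisions */
    int lanes;               /* number of SuperSpeed lanes in use  */
    int gbps_per_lane;       /* signaling rate per lane, in Gbps   */
} usb3_mode;

static const usb3_mode modes[] = {
    { "USB 3.0 / 3.1 Gen 1 / 3.2 Gen 1x1", 1,  5 },  /* 5 Gbps SuperSpeed   */
    { "USB 3.1 Gen 2 / 3.2 Gen 2x1",       1, 10 },  /* 10 Gbps SuperSpeed+ */
    { "USB 3.2 Gen 1x2",                   2,  5 },  /* 10 Gbps, two lanes  */
    { "USB 3.2 Gen 2x2",                   2, 10 },  /* 20 Gbps, USB-C only */
};

int main(void) {
    for (size_t i = 0; i < sizeof modes / sizeof modes[0]; i++)
        printf("%-38s %d x %2d Gbps = %2d Gbps\n", modes[i].spec_name,
               modes[i].lanes, modes[i].gbps_per_lane,
               modes[i].lanes * modes[i].gbps_per_lane);
    return 0;
}

Note that the two “10 Gbps” rows differ only in how the bits are split across lanes, which is exactly the ambiguity described above.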

So, like I said, what a mess. And while I’d like to think that USB4 will fix everything, a brief scan of the associated Wikipedia page details leave me highly skeptical. If anything, in contrast, I fear that the situation will end up even worse. Let me know your thoughts in the comments.

 Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post USB 3: How did it end up being so messy? appeared first on EDN.

Component selection tool employs AI algorithms

Wed, 08/28/2024 - 12:32

An artificial intelligence (AI)-assisted hardware design platform enables engineers to find the right components for their design projects using machine learning and smart algorithms. It selects the ideal set of components while providing deliverables of architectural design, ECAD native schematics, bill of materials, footprints, and project information summary.

The CELUS design platform transforms technical requirements into schematic prototypes in less than an hour, allowing developers and engineers to move from concept to reality with unprecedented efficiency and precision. Moreover, with projects often comprising anywhere from 200 to 1,000 individual components, it simplifies the complexities of electronic design and accelerates time to market for new products.

The design platform provides an automated way to transform technical requirements into schematic prototypes in record time. Source: CELUS

At a time when there is an increasing need for more efficient design processes, finding the right components for projects can be overwhelming and time-consuming. The CELUS platform streamlines the design process and provides real-time component recommendations that work.

“With more than 600 million components available to electronics designers, the task of identifying and selecting the ones right for any given project is at best a challenge,” said Tobias Pohl, co-founder and CEO of CELUS. “We developed the CELUS design platform to handle the heavy lifting and intricate details of product design to drive innovation and expand demand creation in a fraction of the time required of traditional approaches.”

“We were told that such a system was impossible, but we did it and are now expanding its reach to end users and component suppliers around the world,” Pohl added. CELUS aims to transform the $1.4 trillion component industry by aiding the circuit board design market through its unique design automation process.

While CELUS minimizes the time engineers spend identifying disparate component pieces, it also allows component suppliers to easily connect with design engineers for faster market integration and broader reach. Furthermore, this connection via engineering tools like CELUS enables component suppliers to reach developers and engineers who may not be accessible through traditional channels.

CELUS, based in Munich, Germany, is expanding the reach of its cloud-based design platform in the United States by setting up a U.S. headquarters in Austin, Texas. The company was founded by a team of mechanical, electrical, and aeronautical engineers and is backed by an advisory board of top industry experts.

Related Content


The post Component selection tool employs AI algorithms appeared first on EDN.

Digital pot can control gain over a 90 dB span like an electromechanical

Tue, 08/27/2024 - 17:35

A short while back, I published a design idea that uses a single linear pot to control the gain of a high-performance OP37 decompensated op-amp over an unusually wide (-30 dB to +60 dB) range.

Figure 1 shows the circuit.

Gain = (R2ccw/(R1 + R2ccw))(R3/R2cw + 1).

Figure 1 Grounded wiper makes R2 serve as both input attenuator and output gain set.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Recently I started wondering whether a digital pot (Dpot) would work in place of Figure 1’s mechanical R2. Figure 2 shows what seemed like a likely Dpot topology.

Gain = (R2ds/(R1 + R2ds + Rw))(R3/(R2(1 – ds) + Rw) + 1)

Figure 2 R2 has the same function as in Figure 1 with DC bias from R4 R5 C2 to accommodate bipolar signals. But what about Rw wiper resistance effects?

On closer inspection, it turned out not to be so very promising after all. This is due to wiper resistance interfering with the isolation of the two halves of R2 that made the original circuit work in the first place. Figure 3 shows the fix I eventually resorted to.

Figure 3 Positive and negative feedback loops around A2 combine to create active negative resistance = -R4.

A2 and its surrounding network are the basis of the trick. They generate an active negative resistance effect that subtracts from Rw and, if adjusted so R4 = Rw, can theoretically (the engineer’s least favorite word) cancel it out completely. 

A quick method for dialing out Rw is to write the Dpot setting to zero, provide a ~1 V RMS input, then trim R4 for an output null.

Here’s some negative resistance math. Note Vp# = voltage signal present at A2 pin #. 

  1. Let Iw = wiper signal current, then
  2. Vp6 = Vp2 – R4*Iw
  3. Vp2 = Vp3 (negative feedback)
  4. Vp3 = Vp6/2 (positive feedback)
  5. Vp6 = Vp6/2 – R4*Iw
  6. Vp6 – Vp6/2 = Vp6/2 = -R4*Iw
  7. Vp6 = -2*R4*Iw
  8. If R4 = Rw, then IR4 = IRw
  9. -2*R4*Iw = -(R4 + Rw)Iw
  10. Vw = Vp6 + (Iw*R4 + Iw*Rw) = -Iw(R4 + Rw) + Iw(R4 + Rw)
  11. Vw = 0 (Rw has been cancelled out!)

 Gain = (R2ds/(R1 + R2ds))(R3/(R2(1 – ds)) + 1)
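To get a feel for how the compensated expression behaves, it can be swept across the Dpot codes, taking ds as the wiper position normalized to the 0-to-1 range (code/256 is assumed here). The R1, R2, and R3 values in this short C sketch are illustrative assumptions only; the actual values appear in the schematic figures, not in the text:

#include <math.h>
#include <stdio.h>

/* Sweep the compensated gain expression above across a few Dpot codes.
   ds = code/256 is an assumed normalization; R1, R2, R3 are placeholder
   values chosen only to illustrate the wide gain span, not the design's
   actual component values. */
int main(void) {
    const double R1 = 1e3, R2 = 10e3, R3 = 100e3;   /* assumed, in ohms */
    const int codes[] = { 1, 32, 64, 128, 192, 224, 255 };

    for (int i = 0; i < 7; i++) {
        double ds = codes[i] / 256.0;
        double gain = (R2 * ds / (R1 + R2 * ds))
                    * (R3 / (R2 * (1.0 - ds)) + 1.0);
        printf("code %3d: gain = %8.2f V/V (%6.1f dB)\n",
               codes[i], gain, 20.0 * log10(gain));
    }
    return 0;
}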

Figure 4’s red curve compares Figure 2’s behavior with an (uncompensated) Rw = 150 Ω (plausible for the Microchip Dpot illustrated), while the black curve shows what happens if R4 = Rw = 150 Ω. Compare it to the performance of the original (Figure 1) circuit using a mechanical pot as shown in Figure 5.

Of course, Rw cancellation over the full range of Dpot settings can be no more perfect than the Rw match across the Dpot’s 257 taps at the 2.5 V DC bias provided by R5 and R6. Typical matching within a given pot’s resistor array seems good, but it isn’t something the manufacturer guarantees; the specification only promises a tolerance of roughly +/-20%. Still, reducing the effective Rw by a factor of 5 is useful.

Figure 4 Red curve plots uncompensated Rw (~150 Ω); note the 20 dB loss at both ends of the span. Black curve plots the case where Rw is compensated with negative resistance (R4 = Rw = 150 Ω).

Figure 5 Gain curve using the mechanical pot is identical to Dpot with negative resistance Rw compensation.

Footnote: Subsequent to publishing the mechanical pot version of this idea, I learned that Mr T. Frank Ritter had anticipated it by more than 50 years in his “Controlling op amp gain with one potentiometer,” published in “Electronics Designer’s Casebook”, 1972, McGraw Hill. 

So, I hereby offer a belated but enthusiastic tip of my hat to Mr. Ritter. I’ve always admired pioneers!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Digital pot can control gain over a 90 dB span like an electromechanical appeared first on EDN.

Power Tips #132: A low-cost and high-accuracy e-meter solution

Mon, 08/26/2024 - 16:58
Introduction

Power supplies in data centers that measure the input power in real time and report the measurement to the host are conducting what’s known as electrical metering (e-metering). An e-meter has become a common requirement in power-supply units over the last decade because it brings these advantages to data centers [1]:

  • Identifies abnormally low or high energy usage and potential causes, supporting such practices as peak shaving.
  • Facilitates capacity planning around space and power utilization.
  • Helps track and manage energy costs; verifies energy bills; and prioritizes, validates, and reduces energy costs through improved energy efficiency and energy management.
  • Enables quantitative assessments of data center performance and benchmarking of that performance across a level playing field.
  • Helps develop and validate energy-efficiency strategies, and identifies opportunities to improve energy efficiency by lowering energy and operational costs.
  • Commissions physical systems, detects faults, and diagnoses their causes.

For all of these reasons, an e-meter must be exceptionally accurate. Figure 1 shows the Modular Hardware System-Common Redundant Power Supply (M-CRPS) e-meter accuracy requirement [2], which requires an input power measurement error within ±1% when the load is greater than 125 W, within ±1.25 W when the load is between 50 W and 125 W, and within ±5 W when the load is below 50 W.

Figure 1 The M-CRPS e-meter accuracy specification, which requires an input power measurement error within ±1% when the load is greater than 125 W, within ±1.25 W when the load is between 50 W and 125 W, and within ±5 W when the load is below 50 W. Source: Texas Instruments

To achieve such high measurement accuracy, traditionally the e-meter function is implemented through a dedicated metering device [3], as shown in Figure 2. A current shunt placed on the power factor correction (PFC) input side senses the input current, while a voltage divider (not shown in Figure 2) across the AC line and AC neutral senses the input voltage. A dedicated metering device receives this current and voltage information, calculates the input power and input root-mean-square (RMS) current, and sends the results to the host.

Figure 2 Traditional e-meter and PFC control configuration: a current shunt placed on the PFC input side senses the input current, while a voltage divider (not shown) across the AC line and AC neutral senses the input voltage. Source: Texas Instruments

To control the PFC input current, another current sensor, such as the Hall-effect sensor shown in Figure 2, senses the input current, then sends the input current information to an MCU for PFC current-loop control. However, both the Hall-effect sensor and dedicated metering device are expensive.

In this power tip, I’ll discuss a low-cost but highly accurate e-meter solution that uses a single current sensor for both e-metering and PFC current-loop control. Integrating e-meter functionality into PFC control code eliminates the need for a dedicated metering device, not only reducing system cost, but also simplifying printed circuit board (PCB) layout and expediting the design process.

E-meter solution

Figure 3 shows the proposed e-meter configuration. A current shunt senses the input current; an isolated delta-sigma modulator AMC1306 measures the voltage drop across the current shunt. The delta-sigma modulator output is sent to the PFC controller MCU. This current information will be used for both e-metering and PFC current-loop control. A voltage divider senses the input voltage, which is then measured by the MCU’s analog-to-digital converter (ADC) directly, just as in traditional PFC control.

Figure 3 New e-meter and PFC control configuration: a current shunt senses the input current, an isolated delta-sigma modulator measures the voltage drop across the shunt, and the modulator output is used for both e-metering and PFC current-loop control. Source: Texas Instruments

Delta-sigma modulator

Compared to the successive approximation register (SAR) style ADC, which almost all digital PFC controller MCUs use, a delta-sigma modulator can provide high-resolution data. The modulator samples the input signal at a very high rate to produce a stream of 1-bit codes, as shown in Figure 4.

Figure 4 Delta-sigma modulator input and output; a higher positive input signal produces ones at the output a higher percentage of the time while a lower negative input signal produces ones a lower percentage of the time. Source: Texas Instruments

The ratio of ones to zeros represents the input analog voltage. For example, if the input signal is 0 V, the output has ones 50% of the time. A higher positive input signal produces ones a higher percentage of the time, while a lower negative input signal produces ones a lower percentage of the time. Unlike most quantizers, the delta-sigma modulator pushes the quantization noise to higher frequencies [4], making it suitable for high-precision measurements.
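The ones-density idea can be made concrete with a minimal first-order (sinc1) decode: count the ones in one oversampling window and scale back to a voltage. This is only an illustration of the principle in Figure 4; the C2000 hardware filter is a higher-order sinc filter, and the full-scale value and idealized density mapping below are simplifying assumptions:

#include <stdint.h>
#include <stdio.h>

/* Minimal sinc1 decode of a delta-sigma bitstream: count the ones in one
   OSR-long window and scale back to volts. Idealized mapping assumed:
   50% ones = 0 V, 100% ones = +full scale. Real modulators such as the
   AMC1306 reach their rated linear range below 100% ones density. */
#define OSR        256      /* oversampling ratio, assumed                 */
#define FULL_SCALE 0.050    /* assumed +/-50-mV shunt-sense input range, V */

double sinc1_decode(const uint8_t *bits, int n)
{
    int ones = 0;
    for (int k = 0; k < n; k++)
        ones += bits[k] & 1;
    double density = (double)ones / (double)n;   /* 0.5 means 0 V input */
    return (2.0 * density - 1.0) * FULL_SCALE;
}

int main(void)
{
    uint8_t window[OSR];
    for (int k = 0; k < OSR; k++)       /* fabricate a 75%-ones window */
        window[k] = (k % 4) != 0;
    printf("decoded input: %.4f V\n", sinc1_decode(window, OSR));
    return 0;
}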

Delta-sigma digital filter

The C2000 MCU has a built-in delta-sigma digital filter which decodes the 1-bit stream. The effective number of bits (ENOB) of the filter output depends on the filter type, oversampling rate (OSR), and delta-sigma modulator frequency [5]. Typically, a higher OSR will result in a higher ENOB for a given filter type; however, the trade-off is increased filter delay.

It is important to choose the right filter configuration by studying the optimal speed versus resolution trade-offs. For PFC current-loop control, a short delay is more important, because it can help increase the control-loop phase margin and reduce the total current harmonic distortion. On the other hand, high-resolution current data is necessary to achieve high accuracy for e-metering. For this reason, the solution proposed here uses two delta-sigma digital filters: one configured with high speed but a relatively low resolution for PFC current-loop control, and the other configured with high resolution but a relatively low speed for e-metering; see Figure 5.

Figure 5 The proposed configuration uses two delta-sigma digital filters: one configured for high speed but relatively low resolution for PFC current-loop control, and another configured for high resolution but relatively low speed for e-metering. Source: Texas Instruments
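A quick estimate makes the speed-versus-resolution trade-off concrete: the filter’s output data rate is the modulator clock divided by the OSR, and a sinc3 filter needs roughly three OSR periods of modulator clock to fully settle after a step. The 20-MHz clock and OSR values in this sketch are assumptions chosen only for illustration, not the settings of the actual design:

#include <stdio.h>

/* Illustrative sinc3 speed/latency numbers: output rate = f_mod / OSR and
   full settling takes roughly 3 * OSR modulator clocks. The modulator
   clock and OSR values below are assumptions, not the design's settings. */
int main(void)
{
    const double f_mod = 20e6;                 /* assumed modulator clock, Hz */
    const int osr[] = { 32, 64, 128, 256 };

    for (int k = 0; k < 4; k++) {
        double rate_khz  = f_mod / osr[k] / 1e3;
        double settle_us = 3.0 * osr[k] / f_mod * 1e6;
        printf("OSR %3d: output rate %6.1f kHz, settling ~%5.1f us\n",
               osr[k], rate_khz, settle_us);
    }
    return 0;
}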

Firmware structure

Figure 6 is the firmware structure, which consists of three loops:

  • A main loop used for slow and non-time-critical tasks.
  • A fast interrupt service routine (ISR1) running at 100 kHz for the ADC, delta-sigma data reading, and current-loop control.
  • A slower ISR2 running at 10 kHz for voltage-loop control and e-meter calculation.

Since the e-meter calculation is in ISR2, it has no effect on the PFC current loop. Integrating e-meter functionality into PFC control code with this structure does not affect PFC performance.

Figure 6 Firmware structure consisting of three loops: a main loop for slow, non-time-critical tasks; a 100-kHz ISR1 for the ADC, delta-sigma data reading, and current-loop control; and a 10-kHz ISR2 for voltage-loop control and the e-meter calculation. Source: Texas Instruments
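A bare-bones C sketch of this three-loop structure might look like the following. Every function is an illustrative placeholder stubbed out so the sketch compiles; none of these names are actual TI driver or control-library calls:

/* Skeleton of the three-loop firmware structure described above.
   All functions are placeholder stubs, not TI library calls. */
static volatile float vin_sample, iin_fast, iin_precise;

/* --- placeholder stubs standing in for hardware access and control --- */
static float read_vin_adc(void)             { return 0.0f; }
static float read_sdfm_fast_filter(void)    { return 0.0f; }
static float read_sdfm_precise_filter(void) { return 0.0f; }
static float pfc_current_loop(float v, float i) { (void)v; (void)i; return 0.5f; }
static void  pfc_voltage_loop(void)         { }
static void  update_pwm(float duty)         { (void)duty; }
static void  emeter_accumulate(float v, float i) { (void)v; (void)i; }
static void  emeter_report_if_cycle_complete(void) { }

/* ISR1, 100 kHz: ADC and fast (low-resolution) SDFM reads plus the PFC
   current loop; kept short so the current-loop delay stays small. */
void isr1_current_loop(void)
{
    vin_sample = read_vin_adc();
    iin_fast   = read_sdfm_fast_filter();
    update_pwm(pfc_current_loop(vin_sample, iin_fast));
}

/* ISR2, 10 kHz: voltage loop plus e-meter accumulation using the
   high-resolution SDFM channel, keeping the e-meter math out of the
   current loop entirely. */
void isr2_voltage_loop(void)
{
    iin_precise = read_sdfm_precise_filter();
    pfc_voltage_loop();
    emeter_accumulate(vin_sample, iin_precise);
}

/* Main loop: slow, non-time-critical tasks such as host reporting. */
int main(void)
{
    for (;;) {
        emeter_report_if_cycle_complete();
    }
}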

E-meter calculation

Now that there’s both input current data (through the delta-sigma modulator) and input voltage data (through the MCU’s ADC), it’s time to perform e-meter calculations. Equation 1 calculates the input voltage RMS value:

where Vin(n) is the Vin ADC sample data and N is the total number of ADC samples in one AC cycle.

The input current RMS value calculation consists of two steps. The first step is to calculate the measured current (inductor current) RMS value, as shown in Equation 2:

where Iin(n) is the delta-sigma digital filter output.

Referring back to Figure 3, because the shunt resistor is placed after the EMI filter, the reactive current caused by the X-capacitor of the EMI filter is not measured. Therefore, Equation 2 does not represent the total input current. This situation worsens at high line and light load, where the reactive current is not negligible; accurate input current reporting requires its inclusion.

In order to calculate the reactive current of the EMI capacitor, you first need to know the input voltage frequency. The ADC measures the AC line and neutral voltage; comparing the line and neutral voltage values will find the zero crossing. Since the input voltage is sampled at a fixed rate, it is possible to calculate the AC frequency by counting the number of samples between two consecutive zero-crossing points. Once you know the input voltage frequency, Equation 3 calculates the reactive current of the EMI capacitor:

where C is the total capacitance of the EMI filter and f is the input AC voltage frequency.

IEMI is a reactive current that leads the measured current (IL) by 90 degrees; therefore, Equation 4 calculates the total input current as:

Input power calculation also consists of two steps. First, calculate the measured power, as shown in Equation 5:

Since the input voltage is measured after the EMI filter, the power loss caused by the EMI filter is not measured. While this power loss is usually very small, you may need to include it for applications requiring extremely accurate measurements.

The total DC resistance of the EMI filter is R. Equation 6 calculates the power loss on the EMI filter as:

Finally, adding the EMI filter power loss to the measured power obtains the total input power (Equation 7):
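Pulling Equations 1 through 7 together, the per-AC-cycle e-meter math reduces to a short routine like the one sketched below. The variable names are mine, and the quadrature combination of the X-capacitor current with the measured current (Equation 4) plus the use of the measured-current RMS in the EMI loss term (Equation 6) reflect my reading of the description above rather than code from the actual design; the EMI filter values in the demo are placeholders:

#include <math.h>
#include <stddef.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* Per-AC-cycle e-meter math following the order of Equations 1-7 above.
   vin[] holds the ADC voltage samples and iin[] the high-resolution
   delta-sigma current samples; both arrays span exactly one AC cycle.
   c_emi and r_emi are the EMI filter's total X-capacitance and DC
   resistance. Eq. 4 quadrature sum and Eq. 6 current choice are this
   sketch's assumptions. */
typedef struct { double vrms, irms, pin; } emeter_result;

emeter_result emeter_cycle(const double *vin, const double *iin, size_t n,
                           double f_ac, double c_emi, double r_emi)
{
    double v2 = 0.0, i2 = 0.0, p = 0.0;
    for (size_t k = 0; k < n; k++) {
        v2 += vin[k] * vin[k];
        i2 += iin[k] * iin[k];
        p  += vin[k] * iin[k];                           /* Eq. 5 accumulator   */
    }
    double vrms   = sqrt(v2 / (double)n);                /* Eq. 1               */
    double il_rms = sqrt(i2 / (double)n);                /* Eq. 2: measured I   */
    double i_emi  = 2.0 * PI * f_ac * c_emi * vrms;      /* Eq. 3: X-cap current*/
    double irms   = sqrt(il_rms * il_rms + i_emi * i_emi); /* Eq. 4             */
    double p_loss = il_rms * il_rms * r_emi;             /* Eq. 6: filter loss  */
    emeter_result r = { vrms, irms, p / (double)n + p_loss }; /* Eq. 5 + Eq. 7  */
    return r;
}

int main(void)
{
    /* Synthetic 50-Hz cycle, 400 samples: 230 VAC with a 10-A-peak,
       in-phase current. EMI filter C and R values are placeholders. */
    enum { N = 400 };
    double v[N], i[N];
    for (int k = 0; k < N; k++) {
        v[k] = 325.0 * sin(2.0 * PI * k / N);
        i[k] = 10.0  * sin(2.0 * PI * k / N);
    }
    emeter_result r = emeter_cycle(v, i, N, 50.0, 2.2e-6, 0.05);
    printf("Vrms %.1f V, Irms %.3f A, Pin %.1f W\n", r.vrms, r.irms, r.pin);
    return 0;
}

If the n samples cover exactly one cycle at a fixed sample rate, the AC frequency needed for Equation 3 follows directly from the zero-crossing counting described earlier: it is simply the sample rate divided by the number of samples in the cycle.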

Test results

I implemented the proposed e-meter function in a 3.6 kW (1.8 kW at low line) totem-pole bridgeless PFC. Figure 7, Figure 8 and Figure 9 show the test results at low line, high line and DC input, respectively. This implementation achieved <0.5% measurement error, which is two times better than the M-CRPS e-meter specification. Moreover, the implementation uses only 1-point calibration, which significantly reduces calibration time and cost.

Figure 7 E-meter test results at 1.8 kW low line with Vin set to 115 VAC showing an e-meter accuracy much better than the M-CRPS accuracy specification. Source: Texas Instruments

Figure 8 E-meter test results at 3.6 kW high line with Vin set to 230 VAC showing an e-meter accuracy much better than the M-CRPS accuracy specification. Source: Texas Instruments

Figure 9 E-meter test results at DC input showing an e-meter accuracy much better than the M-CRPS accuracy specification. Source: Texas Instruments

Low-cost, high-accuracy e-meter

This article described a low-cost and high-accuracy e-meter solution: an isolated delta-sigma modulator measures the input current which is then sent to an MCU for both e-metering and PFC current-loop control. The proposed solution achieves excellent measurement accuracy with only 1-point calibration. Compared to a traditional e-meter solution, it not only saves cost, but also simplifies PCB layout and expedites the design process.

Bosheng Sun is a systems engineer at Texas Instruments, focused on developing digitally controlled, high-performance AC/DC solutions for server and industrial applications. He received an M.S. from Cleveland State University, Ohio, USA in 2003 and a B.S. from Tsinghua University, Beijing, China in 1995, both in electrical engineering. He holds five US patents.

 

Related Content

 References

  1. U.S. Department of Energy. (2017, Feb. 7). Data Center Metering and Resource Guide. [Online]. Available: https://datacenters.lbl.gov/sites/default/files/DataCenterMeteringandResourceGuide_02072017.pdf.
  2. Modular Hardware System – Common Redundant Power Supply (M-CRPS) Base Specification. Open Compute Project, Version 1.00, Release Candidate 4, Nov. 1, 2022.
  3. Analog Devices. 78M6610+PSU Hardware Design Guidelines. (2012). [Online]. Available: https://www.analog.com/media/en/technical-documentation/user-guides/78m6610psu-hardware-design-guidelines.pdf.
  4. Bonnie Baker, “How Delta-Sigma ADCs Work.” Texas Instruments Analog Application Journal, August 2011.
  5. Texas Instruments. TMS320F28003x Real-Time Microcontrollers Technical Reference Manual. (2022). [Online]. Available: https://www.ti.com/product/TMS320F280039C.

The post Power Tips #132: A low-cost and high-accuracy e-meter solution appeared first on EDN.

Samsung’s backside power delivery network (BSPDN) roadmap

Mon, 08/26/2024 - 14:37

Samsung has started providing more details about its backside power delivery network (BSPDN) roadmap, stating that its 2-nm process node will be optimized for this new technology when it enters mass production in 2027.

While trade media has been regularly reporting on the availability of BSPDN technology from large fabs—Intel, Samsung, and TSMC—according to TrendForce, it’s the first time a senior Samsung Foundry executive has provided details about the company’s BSPDN roadmap.

Source: Samsung

In a report published in the Korea Economic Daily, Lee Sung-Jae, VP of the PDK development team at Samsung, said that BSPDN will reduce the size of Samsung’s 2-nm chip by 17% compared to traditional front-side power delivery. He added that BSPDN will allow Samsung to improve the 2-nm chip’s performance and power efficiency by 8% and 15%, respectively.

According to an Intel study, power lines typically occupy around 20% of the space on the chip surface with traditional front-side power delivery. The BSPDN technology puts the power rails on the back of the wafer to remove bottlenecks between power and signal lines, making it easier to manufacture smaller chips.

Moreover, backside power delivery facilitates thicker, lower-resistance wires, thus delivering more power to enable higher performance and save power. According to a Samsung paper presented at the VLSI Symposium in 2023, BSPDN also facilitates a 9.2% reduction in wiring length.

In that paper, Samsung also claimed to have implemented backside power delivery in two Arm-based test chips, achieving die area reductions of 10% and 19%. The company didn’t disclose the process node for these test chips.

It’s worth noting that after being a pioneer in deploying gate-all-around (GAA) manufacturing technology in its 3-nm chips, Samsung is now following Intel and TSMC in implementing the BSPDN technique.

Intel, which calls its backside power delivery technology PowerVia, is expected to produce chips based on this technique this year. Next, TSMC plans to integrate its backside power delivery technology, the Super PowerRail architecture, into its 1.6-nm chips to be mass-produced in 2026.

Related Content


The post Samsung’s backside power delivery network (BSPDN) roadmap appeared first on EDN.
