(Dis)assembling the bill-of-materials list for measuring blood pressure on the wrist

More than a decade ago, I visited my local doctor’s office, suffering from either kidney-stone or back-spasm pain (I don’t recall which; at the time, it could have been either, or both, for that matter). As usual, the assistant logged my height and weight on the hallway scale, then my blood pressure in the examination room. I recall her measuring the latter, then re-measuring it, then hurriedly leaving the room with a worried look on her face and an “I’ll be back in a minute” comment. Turns out, my systolic blood pressure reading was near 200; she and the doctor had been conferring on whether to rush me to the nearest hospital in an ambulance.
Fortunately, a painkiller dropped my blood pressure below the danger point (spikes are a common body response to transient acute pain) in a timely manner, but the situation more broadly revealed that my pain-free ongoing blood pressure was still at the stage 2 hypertension level. My response was three-fold:
- Dietary changes, specifically to reduce sodium intake (my cholesterol levels were fine)
- Medication, specifically ongoing daily losartan potassium
- And regular blood pressure measurement using at-home equipment
Before continuing, here’s a quick definition of the two data points involved in blood pressure:
- Systolic blood pressure is the first (top/upper) number. It measures the pressure your blood is pushing against the walls of your arteries when the heart beats.
- Diastolic blood pressure is the second (bottom/lower) number. It measures the pressure your blood is pushing against your artery walls while the heart muscle rests between beats.
How is blood pressure traditionally measured at the doctor’s office or a hospital, specifically via a device called a sphygmomanometer in conjunction with a stethoscope? Thanks for asking:
Your doctor will typically use the following instruments in combination to measure your blood pressure:
- a cuff that can be inflated with air,
- a pressure meter (manometer) for measuring the air pressure inside the cuff, and
- a stethoscope for listening to the sound the blood makes as it flows through the brachial artery (the major artery found in your upper arm).
To measure blood pressure, the cuff is placed around the bare and extended upper arm, and inflated until no blood can flow through the brachial artery. Then the air is slowly let out of the cuff. As soon as blood starts flowing into the arm, it can be heard as a pounding sound through the stethoscope. The sound is produced by the rushing of the blood and the vibration of the vessel walls. The systolic pressure can be read from the meter once the first sounds are heard. The diastolic blood pressure is read once the pounding sound stops.
Home monitoring devices
What about at home? Here, there’s no separate stethoscope—or another person trained in listening to it and discerning what’s heard, for that matter—involved. And no, there isn’t a microphone integrated in the cuff to listen to the brachial artery, coupled with digital signal processing to analyze the microphone outputs, either (admittedly, that was Mr. Engineer here’s initial theory, until a realization of the bill-of-materials cost involved to implement the concept compelled me to do research on alternative approaches). This Reddit thread, specifically the following post within it, was notably helpful:
Pressure transducer within the machine. The pressure transducer can feel the pressure within the cuff. The air pressure in the cuff is the same at the end of the line in the machine.
So, like a manual BP cuff, the computer pumps air into the cuff until it feels a pulse. The pressure transducer actually senses the change in cuff pressure as the heartbeat.
That pulse is only looked at a little, get a relative beats per minute from the cuff. Now that the cuff can sense the pulse, keep pumping air until the pulse stops being sensed. That’s systolic. Now slowly and gently release air until you feel the pulse again. Check it against the rate number you had earlier. If it’s close, keep releasing air until you lose the sense. The last pressure that you had the pulse is the diastolic.
It grabs the two numbers very similarly to how you do it with your ears and a stethoscope. But, it is able to measure the pressure directly and look at the pressure many times per second, instead of your eyes and ears listening to the pulse and watching the gauge.
That’s where the specific algorithm inside the computer takes over. They’re all black magic as to exactly how they interpret pulse. Peaks from baseline, rise and fall, rising wave, falling wave, lots of ways to count pulses on a line. But all of them can give you a heart rate from just a blood pressure cuff.
Another Redditor explained the process a bit differently in that same thread, specifically in terms of exactly when the systolic value is ascertained:
OK, imagine your arm is a like a balloon and your heartbeat is a drummer inside. The cuff squeezes the balloon tight, no drumming gets out. As it slowly lets air out, the first quiet drumbeat you “hear” is your systolic. When the drumming gets too lazy to rattle the balloon, that’s your diastolic. The machine just listens for those drum‑beats via pressure wobbles in the cuff, no extra pulse sensor needed!
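For the algorithmically inclined, here is a toy Python sketch of the oscillometric idea the two posts describe. Everything in it is hypothetical: the envelope ratios (0.5 and 0.8) are illustrative guesses, since, as the first poster notes, the actual interpretation algorithms are proprietary “black magic.”

```python
# Toy sketch of oscillometric estimation, per the descriptions above.
# Assumes `pressure` holds cuff-pressure samples (mmHg) captured during a
# slow, steady deflation; fs is the sample rate in Hz.
import numpy as np

def estimate_bp(pressure, fs):
    # The heartbeat rides as small oscillations on the falling deflation ramp;
    # a moving average recovers the ramp so the oscillations can be isolated.
    ramp = np.convolve(pressure, np.ones(int(fs)) / fs, mode="same")
    osc = pressure - ramp
    env = np.abs(osc)                    # crude oscillation-amplitude envelope
    peak = env.max()
    # Heuristic: systolic where the envelope first reaches a fraction of its
    # peak, diastolic where it last falls back through another fraction.
    sys_idx = int(np.argmax(env >= 0.5 * peak))
    dia_idx = len(env) - 1 - int(np.argmax(env[::-1] >= 0.8 * peak))
    return ramp[sys_idx], ramp[dia_idx]  # (systolic, diastolic) in mmHg
```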
I came across a couple of nuances in a teardown of a different machine than the one we’ll be looking at today. First off, particularly note the following bolded-by-me emphasis phrase:
The system seems to be quite simple – a DC motor drives a pump (PUMP-924A) to inflate the cuff. The port to the cuff is actually a tee, with the other port heading towards a solenoid valve that is venting to atmosphere by default. When the unit starts, it does a bit of a leak-check which inflates the cuff to a small value (20mmHg) and sits there for a bit to also ensure that the user isn’t moving about, and detect if the cuff is too tight or too loose. From there, it seems to inflate at a controlled pressure rate, which requires running the motor at variable speed depending on the tightness of the cuff and the pressure in the cuff.
Note, too, the following functional deviation of the device showcased at “Dr. Gough’s Tech Zone” (by Dr. Gough Lui, with the most excellent tagline “Reversing the mindless enslavement of humans by technology”) from the previous definition I’d quoted, which had described measuring systolic and diastolic pressure on the cuff-deflation phase of the entire process:
As a system that measures on the inflation stroke, it’s quicker but I do have my hesitations about its accuracy.
Wrist cuff-monitoring pros and cons
When I decided to start regularly measuring my own blood pressure at home, I initially grabbed a wrist-located cuff-based monitor I’d had sitting around for a while, through multiple residence transitions (which explains the condition of the packaging—not frequent usage, which admittedly would have been a deception if I’d tried to convince you of it), Samsung’s BW-325S (the republished version of the press release I found online includes a 2006 copyright date):
I quickly discovered, however, that its results’ consistency was lacking (when consecutive readings were taken experimentally only a few minutes apart, to clarify; day-to-day deviations would have been expected). Some of this was likely due to imperfect arm-and-hand positioning on my part. And, since I was single at the time, I didn’t have a partner around to help me put it on; an upper-arm cuff-based device, conversely, leaves both hands free for placement purposes. That said, my research also suggests that upper-arm cuff-located devices are inherently more reliable than wrist-cuff alternatives (or than approaches that measure pulse rate via photoplethysmography, computer vision facial analysis, or other techniques, for that matter).
I’ve now transitioned to using an Omron BP786N upper-arm cuff device, which also includes Bluetooth connectivity for smartphone data-logging and -archiving purposes.
Having retired my wrist cuff device, I’ll be tearing it down today to satisfy my own curiosity (and hopefully at least some of yours as well). Afterwards, assuming I’m able to reassemble it in a fully functional condition, I’ll probably go ahead and donate it, in the spirit of “ballpark accuracy is better than nothing at all.” That said, I’ll include a note for the recipient suggesting periodic redundant checks with another device, whether at home, at a pharmacy, or at a medical clinic.
Opening and emptying the box reveals some literature:
along with our patient, initially housed within a rugged plastic case convenient for travel (and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes).
I briefly popped in a couple of AAA batteries to show you what the display looks like near-fully digit-populated on measurement startup:
More generally, here are some perspectives of the device from various vantage points, and with the cuff both coiled and extended:
There are screw heads visible on both the right side, whose sticker is also info-rich:
And the left, specifically inside the hard-to-access battery compartment (another admitted reason why I decided to retire the device):
You know what comes next, right?
Easy peasy:
Complete with a focus shift:
The inside of the top half of the case is comparatively unmemorable, unless you’re into the undersides of front-panel buttons:
That’s more like it:
Look closely (lower left corner, specifically) and you’ll see what looks like evidence that one of the screws that supposedly holds the PCB in place has been missing since the device left the factory:
Turns out, however, that this particular “hole” doesn’t go all the way through; it’s just a raised disc formed in the plastic, to fit inside the PCB hole (thereby holding the PCB in place, horizontally at least). Why, versus a proper hole and associated screw? I dunno (BOM cost reduction?). Nevertheless, let’s remove the other (more accurately: only) screw:
Now we can flip the assembly over:
And rotate it 90° to expose the innards to full view.
The pump, valve, and associated tubing are located underneath the PCB:
Directly below the battery compartment is another (white-color) hole, into which fits the pressure transducer attached to the PCB underside:
“Dr. Gough” notes in the teardown of his unit that “The pressure sensor appears to be a differential part with the other side facing inside the case for atmospheric pressure perhaps.”
Speaking of “the other side,” there’s an entire other side of the PCB that we haven’t seen yet. Doing so requires first carefully peeling the adhesive-attached display away:
Revealing, along with some passives, the main control/processing/display IC marked as follows:
86CX23
HL8890
076SATC22 [followed by an unrecognized company logo]
Its supplier, identity, and details remain (definitively, at least) unknown to me, unfortunately, despite plenty of online research (and for what it’s worth, others are baffled as well). Some distributor-published references indicate that the original developer is Sonix, but although that company is involved in semiconductors, its website suggests that it focuses exclusively on fabrication, packaging, and test technologies and equipment. Others have found this same chip in blood pressure monitoring devices from a Taiwan-based personal medical equipment company called Health & Life (referencing the HL in the product code), which makes me wonder if Samsung just relabeled and sold a blood pressure monitor originally designed and built by Health & Life (to wit, in retrospect, note the “Healthy Living” branding all over the device and its packaging), or if Samsung just bought up Health & Life’s excess IC inventory. Insights, readers?
The identity of the other IC in this photo (to the right of the 86CX23-HL) was thankfully easier to ascertain and matched my in-advance suspicion of its function. After cleaning away the glue with isopropyl alcohol and my fingernail, I faintly discerned the following three-line marking:
ATMEL716
24C08AN
C277 D
It’s an Atmel (now Microchip Technology) 24C08 8 Kbit I²C-compatible 2-wire serial EEPROM, presumably used to store logged user data in a nonvolatile fashion that survives system battery expiration, removal, and replacement steps.
All that’s left is to reverse my steps and put everything back together carefully. Reinsert a couple of batteries, press the front panel switch, and…
Huzzah! It lives to measure another person another day! Conceptually, at least… worry not, dear readers: that 180 millimeters of mercury (mmHg) systolic measurement is not accurate. Wrapping up at this point, I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- Avoiding blood pressure measurement errors
- COVID-19: The long-term implications
- Blood Pressure Monitor Design Considerations
Hybrid system resolves edge AI’s on-chip memory conundrum

Edge AI—enabling autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives—can now adapt learning models on the fly while keeping energy consumption and hardware wear under tight control.
It’s made possible by a hybrid memory system that combines the best traits of two previously incompatible technologies—ferroelectric capacitors and memristors—into a single, CMOS-compatible memory stack. This novel architecture has been developed by scientists at CEA-Leti, in collaboration with scientists at French microelectronic research centers.
Their work has been published in a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference” in Nature Electronics. It explains how it’s possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems.
The on-chip memory conundrum
Edge AI requires both inference (reading data to make decisions) and learning, a.k.a. training (updating models based on new data), on a chip, without burning through energy budgets or straining hardware constraints. However, for on-chip memory, memristors are considered suitable for inference, while ferroelectric capacitors (FeCAPs) are better suited to learning tasks.
Resistive random-access memories or memristors excel at inference because they can store analog weights. Moreover, they are energy-efficient during read operations and better support in-memory computing. However, while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.
On the other hand, ferroelectric capacitors allow rapid, low-energy updates, but their read operations are destructive, making them unsuitable for inference. Consequently, design engineers face the choice of either favoring inference and outsourcing training to the cloud or carrying out training with high costs and limited endurance.
This led French scientists to adopt a hybrid approach in which forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. “Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper on this new hybrid memory system.
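As a thought experiment, here is a minimal Python sketch of the division of labor Martemucci describes; the assumptions (array sizes, level count, update cadence, stand-in gradients) are entirely mine, and the paper’s actual update rule is certainly more involved:

```python
# Sketch: high-precision "hidden" weights accumulate small gradient steps
# (the FeCAP role), while a coarse analog copy is used for forward/backward
# passes (the memristor role) and is periodically refreshed from the hidden
# weights' most-significant bits. All values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(0, 0.1, size=(64, 32))   # FeCAP-like: fine-grained storage
LEVELS = 16                                  # memristor-like: few conductance levels

def quantize(w, levels=LEVELS):
    """Map hidden weights onto coarse conductance levels (MSBs only)."""
    step = (w.max() - w.min()) / (levels - 1)
    return np.round((w - w.min()) / step) * step + w.min()

analog = quantize(hidden)                    # weights actually used for inference

for it in range(1000):
    grad = rng.normal(0, 0.01, size=hidden.shape)  # stand-in for a real gradient
    hidden -= 0.01 * grad                    # small steps need high precision
    if it % 100 == 0:                        # periodic reprogram, as described
        analog = quantize(hidden)
```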
How the hybrid approach works
The CEA-Leti team developed this hybrid system by engineering a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode memory device can operate as a FeCAP or a memristor, depending on its electrical formation.
In other words, the same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state. Here, a digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
The hardware for this hybrid system was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.
CEA-Leti has acknowledged funding support for this design undertaking from the European Research Council and the French Government’s France 2030 grant.
Related Content
- Speak Up to Shape Next-Gen Edge AI
- AI at the edge: It’s just getting started
- Will Memory Constraints Limit Edge AI in Logistics?
- Two new runtime tools to accelerate edge AI deployment
- For Leti and ST, the Fastest Way to Edge AI Is Through the Memory Wall
DC series motor caution
There are various ways to construct a motor, and the properties of that motor will depend on the construction choice. The series motor configuration has some desirable properties, but it can become quite dangerous to use if proper safety precautions are overlooked.
“Motors,” per se, is a complex subject. Variations in motor designs abound and lie well outside the scope of this essay. Rather, the goal here is to focus on just one aspect of one particular type of motor. To pay proper homage, Figure 1 shows three basic motor designs.
Figure 1 The three basic DC motor types; this article focuses on the DC series motor.
Readers may study the first two at their leisure, but we will focus on the DC series motor highlighted in green and begin with an examination of its basic structure.
The DC series motor
A magnetic field is required. That field is provided by current-carrying coils that are wound over steel structures called “poles”. The number of poles may vary from design to design. Simple-mindedly, Figure 2 shows three examples of pole design: two poles, four poles, and six poles. Note the alternation of north (N) and south (S) magnetic polarities.
The armature is shown in Figure 2 as a setup of four paralleled paths of wires that are insulated from each other but tied together at their ends. In the example shown, there are twenty-four armature conductors arranged in six groups of four conductors, that is, four parallel paths of six conductors each.
Figure 2 The DC series motor structure showing two, four, and six poles with alternating N and S polarities.
It is conventional to use the letter “Z” to represent the number of armature conductors (twenty-four as shown) and the letter “A” to represent the number of paralleled conductors (four as shown) in each path. Please do not be confused by the fact that this “Z” does NOT refer to an impedance and that this “A” does NOT refer to an area.
As shown in Figure 3, we now look at the circuit of this structure.
The field coils, wrapped around each pole, are connected in series to form the field winding.
The armature conductor groups are wired in series, with their returns made through the center of the armature, where wire movement is slowest. By contrast, the outermost sections of the armature conductor groups move quite rapidly as they cross the magnetic flux lines of the poles. Since those sections are all connected in series, they generate a summation voltage called the “back electromotive force”, or “back EMF”.
Figure 3 The DC series motor equivalent circuit; the series connection of the outermost sections of the armature conductors generates the back EMF.
The current flowing in the field coil and the current flowing in the armature is the same current; there is no other place for the current to flow. Since torque is proportional to the product of the field flux and the armature current, and the flux is itself proportional to that shared current, the available torque of a DC series motor is proportional to the square of that current. By using really heavy and large conductors for both, that current can be made very large, and the available torque can be made very high. Such motors are used in high-torque applications such as engine starters, heavily loaded and slow-moving lifting cranes, and commuter railroad cars.
The governing equation for generating back EMF is as follows in Figure 4.
Figure 4 The governing equation for back EMF, where the back EMF equals the total magnetic flux multiplied by the rotational speed multiplied by the number of series-connected armature groups.
The total magnetic flux equals the flux per pole times the number of poles. The back EMF equals the total magnetic flux multiplied by the rotational speed multiplied by the number of series-connected armature groups, which, for our present example, will be six for our six-pole magnetic structure.
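Expressed in LaTeX (my rendering of Figure 4’s wording, with $P$ poles, flux per pole $\Phi_{\text{pole}}$, rotational speed $\omega$, and $Z$ and $A$ as defined earlier, so that $Z/A$ is the number of series-connected conductors per path):

$$\Phi_{\text{total}} = P\,\Phi_{\text{pole}}, \qquad E_b = \Phi_{\text{total}}\;\omega\;\frac{Z}{A}, \qquad \frac{Z}{A} = \frac{24}{4} = 6 \text{ in this example.}$$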
Connect the load!
Now comes the crucial point to remember about DC series motors.
For safety’s sake, no DC series motor should ever be operated without a mechanical load. A DC shunt motor or a DC compound motor can be safely operated without a mechanical load (separate discussions), but a DC series motor CANNOT be safely operated that way.
When the DC series motor is operating, there will be some back EMF generated in the armature as shown in Figure 4. That back EMF will act in opposition to the input voltage in determining the field and armature current, as shown in Figure 3 and as follows:
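The equation itself appears to have been lost in this copy; from the series loop of Figure 3, with $R_{\text{field}}$ and $R_{\text{armature}}$ the winding resistances, it is presumably:

$$I = \frac{V_{\text{in}} - E_b}{R_{\text{field}} + R_{\text{armature}}}$$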
However, suppose a DC series motor is allowed to run without a mechanical load. Some current flows, so some measure of torque exists, and therefore some measure of angular acceleration. With no mechanical load, the rotor will always be rotationally accelerating and gaining rotational velocity, because there is then no load to take rotational energy away from the rotating armature.
As the armature accelerates, the back EMF tends to rise, which lowers the current flow, which lowers the magnetic flux, which lowers the torque, but the flux and the torque do not go to zero, and the rotational velocity continues to rise. The rising speed further raises the back EMF, which further reduces the current flow and the magnetic field, and so on: a vicious cycle of rotary speed-up that constitutes a runaway condition. If there is no mechanical load on the armature, there is no upper limit on the armature’s speed of rotation, and the DC series motor can and will destroy itself.
A story
It is stridently recommended that any mechanical load being driven by a DC series motor be coupled to that motor by a gear mechanism and never by a belt because a belt can break. If such a break occurs, the DC series motor will have no mechanical load, and as described, it will run away with itself.
This issue was taught to my class by my instructor, Dr. Sigfried Meyers, when I was at Brooklyn Technical High School in Brooklyn, NY. There was a motor lab area. Dr. Meyers told us of one day when, with no faculty supervision at hand, several students snuck into that lab and decided to hook up a lab motor in a series motor mode with no mechanical load. When they applied power, the motor did exactly as Dr. Meyers had warned it would, and the motor was destroyed.
As Mr. Spock would put it on Star Trek, that was “an undesirable outcome”.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Brushless DC Motors – Part I: Construction and Operating Principles
- DC Motor Drive Basics – Part 1: Thyristor Drive Overview
- Electric motor: types, operation modes
- Speed Control Unit Designed for a DC Motor
R&S expands VNA lineup to 54 GHz

With the addition of 32-GHz, 43.5-GHz, and 54-GHz models, the R&S ZNB3000 series of vector network analyzers (VNAs) now covers a wider range of applications. The midrange family combines precision and speed in a scalable platform, extending RF component testing to satellite Ka and V bands and high-speed interconnects for AI data centers.
Beyond satellite and data center applications, the ZNB3000 also enables RF testing for 5G, 6G, and Wi-Fi. This makes it well-suited for both production environments and research labs working on next-generation technologies.
The ZNB3000 offers strong RF performance with up to 150-dB dynamic range and less than 0.0015-dB RMS trace noise. It also provides fast sweep cycle times of 11.8 ms (1601 points, 1 MHz to 26.5 GHz) and high output power of 11 dBm at 26.5 GHz. A 9-kHz start frequency enables precise time-domain analysis for signal integrity and high-speed testing.
Flexible frequency upgrades allow customers to start with a base unit and expand the maximum frequency later. ZNB3000 VNAs operating at the new frequencies will be available by the end of 2025.
2-in-1 SiC module raises power density

Rohm has introduced the DOT-247, a 2-in-1 SiC molded module that combines two TO-247 devices to deliver higher power density. The dual structure accommodates larger chips, while the optimized internal design lowers on-resistance. Package enhancements cut thermal resistance by roughly 15% and reduce inductance by about 50% compared with standard TO-247 devices. Rohm reports a 2.3× increase in power density in a half-bridge configuration, enabling the same conversion capability in nearly half the volume.
The 750-V and 1200-V devices target industrial power systems such as PV inverters, UPS units, and semiconductor relays, and are offered in half-bridge and common-source configurations. While two-level inverters remain standard, demand is growing for multi-level circuits—including three-level NPC, three-level T-NPC, and five-level ANPC—to support higher voltages. These advanced topologies often require custom designs with standard SiC packages due to the complexity of combining half-bridge and common-source configurations.
Rohm addresses this challenge with standardized 2-in-1 modules supporting both topologies, providing greater flexibility for NPC circuits and DC/DC converters. This approach reduces component count and board space, enabling more compact designs compared with discrete solutions.
Devices in the 750-V SC740xxDT series and 1200-V SCZ40xxKTx series are available now in OEM quantities. Sampling of AEC-Q101-qualified products is scheduled to begin in October 2025.
Redriver strengthens USB4v2 and DP 2.1a signals

Parade Technologies’ PS8780 four-lane bidirectional linear redriver restores high-speed signals for active cables, laptops, and PCs. It supports USB4v2 Gen 4, Thunderbolt 5, and DisplayPort 2.1 Alt Mode, and is pin-compatible with the PS8778 Gen 3 redriver.
The redriver delivers USB4v2 at up to 2×40 Gbps symmetric or 120 Gbps asymmetric, TBT5 at 2×41.25 Gbps, and DP 2.1 UHBR20. It provides full USB4, USB 3.2, and DP 2.1a power management, including Advanced Link Power Management (ALPM). Its low-power design and Modern Standby support extend battery life in mobile devices and reduce energy use in active cables.
The PS8780 extends USB4v2 signals beyond the typical 1-m (3.3-ft) passive cable limit while maintaining full performance. When paired with a USB4v2 retimer between the SoC (USB4v2 router) and the USB-C/USB4 connector, it also lengthens system PCB traces. Operating from a 1.8 V supply, the device consumes 297 mW at 40 Gbps and just 0.5 mW in standby. Its compact 28-pin, 2.8×4.4 mm QFN package suits space-constrained designs.
The PS8780 redriver is now sampling.
Gate driver boosts reliability in high-power designs

Featuring 2.5-kV capacitive isolation, the Littelfuse IX3407B gate driver improves signal integrity and safety in motor drives, inverters, and industrial power supplies. The single-channel, galvanically isolated driver provides low propagation delay, high common-mode transient immunity, and enhanced thermal stability across switching frequencies and temperatures.
The IX3407B gate driver delivers up to 7 A peak source and sink current through separate output pins. Typical turn-on and turn-off times are 154 ns and 162 ns, respectively, with rise and fall times of 10 ns. It achieves 150-kV/µs common-mode transient immunity at 700 V.
Input supply voltage ranges from 3.1 V to 17 V, while the driver-side supply operates from 13 V to 35 V. TTL/CMOS logic compatibility with 3.3-V thresholds and input voltage tolerance up to VCC support a wide range of control logic devices. Active shutdown and undervoltage lockout safeguard against fault conditions.
The IX3407B is offered in a wide-body SOIC-8 package. Samples are available through Littelfuse authorized distributors.
Chip inductors broaden automotive magnetics portfolio

The SRF3225TAP series of common-mode chip inductors from Bourns delivers reliable EMI suppression and noise filtering for automotive systems. Meeting AEC-Q200 reliability standards, these devices provide impedance values of 500 Ω and 1000 Ω at 100 MHz, with rated currents of 2 A and 1.5 A, respectively.
Designed with a shielded construction to minimize radiation, the inductors operate across a wide temperature range of -55°C to +150°C. They feature low 0.1-Ω DC resistance and are rated for 80 VDC, all in a compact 3.2×2.5×2.2-mm package that conserves board space.
These features make the SRF3225TAP series well-suited for protecting sensitive electronics, enhancing signal integrity, and improving reliability in noise filters and DC power lines across automotive, consumer, and industrial applications.
SRF3225TAP common-mode chip inductors are available now through Bourns’ authorized distribution partners.
Unusual 2N3904 transistor circuit

A Planet Analog article, “2N3904: Why use a 60-year-old transistor?” by Bill Schweber, inspired some interest in this old transistor, how it’s commonly used, and whether any uncommon uses might exist. Here’s one we played around with.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The Linear Technology Application Note 47-D: “High Speed Amplifier Techniques” by Jim Williams offers an interesting side road from usual transistor use, where a typical fast pulse transistor is utilized in avalanche collector-to-emitter breakdown (BVceo) to create sub-nanosecond pulses. The 2N3904 will work in this configuration but, like the pulse transistor, requires a high voltage (>100 V) to reach the BVceo breakdown, and it produces a slower pulse, being a slower general-purpose transistor.
A while back, I had measured the reverse breakdown of the 2N3904 base-emitter junction and noted the small area of negative resistance where the junction current reduces as applied reverse voltage increases (Figure 1).
Figure 1 Measurement of the reverse breakdown of the 2N3904’s base-emitter junction, showing a small area of negative resistance.
This base-emitter breakdown is much lower than the collector-emitter breakdown and might serve as a lower voltage version of the avalanche pulse generation method described in App Note 47-D.
A simple circuit was created with the 2N3904 emitter connected by a 100-kΩ resistor to a variable supply set to ~14 VDC, a shunt capacitance of 10 nF from the emitter to ground, and a 50-Ω resistor from the collector to ground. Just two resistors, a capacitor, and the 2N3904 are all that’s required to create a simple relaxation oscillator (actually, the 50-Ω resistor isn’t required).
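As a rough sanity check on the oscillation rate, here is a back-of-envelope Python estimate. The avalanche-trigger and reset voltages used are my guesses (the 2N3904 E-B breakdown is only specified as 6 V minimum), so treat the result as illustrative only:

```python
# RC relaxation period: C charges toward the supply through R until the
# reverse-biased B-E junction avalanches, then snaps back and repeats.
import math

V_SUP, R, C = 14.0, 100e3, 10e-9   # from the circuit described above
V_BR, V_SUS = 9.0, 7.5             # assumed avalanche/reset voltages (guesses)

t = R * C * math.log((V_SUP - V_SUS) / (V_SUP - V_BR))
print(f"period ~{t * 1e6:.0f} us, i.e., ~{1 / t / 1e3:.1f} kHz")
```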
Figure 2 shows the result with the DSO AC-coupled blue trace, the relaxation voltage at the transistor emitter, and the DC-coupled magenta trace, the voltage across the 50-Ω resistor from the collector to ground (remember the NPN is upside down or inverted!).
Figure 2 Waveforms of the simple relaxation oscillator circuit with the AC-coupled blue trace and DC-coupled magenta trace.
The pulse across the 50-Ω resistor in Figure 3 shows the avalanche current in more detail; this current is ~2 V peak across the 50-Ω resistor, or ~40 mA peak. This isn’t fast; however, the 2N3904 is a general-purpose (GP) transistor that is not intended for speed.
Figure 3 Avalanche current shown in more detail on the DSO, with a ~40-mA peak.
Utilizing faster transistors such as the 2N2369 should produce narrower pulses with faster rise times. Whether these produce faster rise times and narrower pulse widths than in the collector-emitter avalanche breakdown method from App Note 47-D remains an experiment waiting for those interested. Intuition indicates the “normal” avalanche collector-emitter mode will be faster, though!
Anyway, I hope folks find this simple and unusual use of these old standby 2N3904 transistors as interesting as I did!
Michael A Wyatt is a life member with IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat, and retiring (semi) with Wyatt Labs. During his career, he accumulated 32 US Patents and, in the past, published a few EDN Articles, including Best Idea of the Year in 1989.
Related Content
- 2N3904: Why use a 60-year-old transistor?
- The overcurrent limiting transistor fails before anything else!
- 2 Channel Audio Mixer using Transistors
- Inverted bipolar transistor doubles as a signal clamp
- How to Read Data Sheets: More on BJTs
Purpose-built AI inference architecture: Reengineering compute design

Over the past several years, the lion’s share of artificial intelligence (AI) investment has poured into training infrastructure—massive clusters designed to crunch through oceans of data, where speed and energy efficiency take a back seat to sheer computational scale.
Training systems can afford to be slow and power-hungry; if it takes an extra day or even a week to complete a model, the result still justifies the cost. Inference, by contrast, plays an entirely different game. It sits closer to the user, where latency, energy efficiency, and cost-per-query reign supreme.
And now, the market’s center of gravity is shifting. While tech giants like Amazon, Google, Meta, and Microsoft are expected to spend more than $300 billion on AI infrastructure this year—still largely on training—analysts forecast explosive growth on the inference side. Gartner, for example, projects a 42% compound annual growth rate for AI inference in data centers over the next few years.
This next wave isn’t about building smarter models; it’s about unlocking value from the ones we’ve already trained.
Figure 1 In the training versus inference equation, while training is about brute force at any cost, inference is about precision. Source: VSORA
Training builds, inference performs
At its core, the difference between training and inference comes down to cost, latency, and efficiency.
Training happens far from the end user and can run for days, weeks or even months. Inference, by contrast, sits directly in the path of user interaction. That proximity imposes a hard constraint: ultra-low latency. Every query must return an answer in milliseconds, not minutes, or the experience breaks.
Throughput is the second dimension. Inference isn’t about eventually finishing one massive job—it’s about instantly serving millions or billions of tiny ones. The challenge is extracting the highest possible number of queries per second from a fixed pool of compute.
Then comes power. Every watt consumed by inference workloads directly hits operating costs, and those costs are becoming staggering. Google, for example, has projected a future data center that would draw three gigawatts of power—roughly the output of a nuclear reactor.
That’s why efficiency has become the defining metric of inference accelerators. If a data center can deliver the same compute with half the power, it can either cut energy costs dramatically or double its AI capacity without expanding its power infrastructure.
This marks a fundamental shift: where training chased raw performance at any cost, inference will reward architectures that deliver more answers faster and with far less energy.
Training was about brute force at any cost. On the other hand, inference is about precision.
GPUs are fast—but starved
GPUs have become the workhorses of modern computing, celebrated for their staggering parallelism and raw speed. But beneath their blazing throughput lies a silent bottleneck that no amount of cores can hide—they are perpetually starved for data.
To understand why, it helps to revisit the foundations of digital circuit design.
Every digital system is built from two essential building blocks: computational logic and memory. The logic executes operations—from primitive Boolean functions to advanced digital signal processing (DSP) and multi-dimensional matrix calculations. The memory stores everything the logic consumes or produces—input data, intermediate results, and outputs.
The theoretical throughput of a circuit, measured in operations per second (OPS), scales with its clock frequency and degree of parallelism. Double either and you double throughput—on paper. In practice, there’s a third gatekeeper: the speed of data movement. If data arrives every clock cycle, the logic runs at full throttle. If data arrives late, the logic stalls, wasting cycles.
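A quick back-of-envelope illustration (the numbers are purely hypothetical) of how stalls gut theoretical throughput:

```python
# Peak vs. effective throughput for a parallel datapath that stalls on memory.
f_clk = 1.5e9         # clock frequency, Hz (hypothetical)
lanes = 10_000        # parallel MAC units, each doing 2 ops (mul+add) per cycle
peak_ops = f_clk * lanes * 2

stall_fraction = 0.7  # fraction of cycles spent waiting for operands
effective_ops = peak_ops * (1 - stall_fraction)
print(f"peak: {peak_ops / 1e12:.0f} TOPS, effective: {effective_ops / 1e12:.0f} TOPS")
```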
Registers are the only storage elements fast enough to keep up: single-cycle, address-free, and directly indexed. But they are also the most silicon-expensive, which makes building large register banks economically impossible.
This cost constraint gave rise to the memory hierarchy, which spans from the bottom up:
- Massive, slow, cheap storage (HDDs, SSDs, tapes)
- Moderate-speed, moderate-cost DRAM and its many variants
- Tiny, ultra-fast, ultra-expensive SRAM and caches
All of these, unlike registers, require addressing and multiple cycles per access. And moving data across them burns vastly more energy than the computation itself.
Despite their staggering parallelism, GPUs are perpetually starved for data. Their thousands of cores can blaze through computations, but only if fed on time. The real bottleneck isn’t compute. It’s memory because data must traverse a slow, energy-hungry hierarchy before reaching the logic, and every stall wastes cycles. Registers are fast enough to keep up but too costly to scale, while larger memories are too slow.
This imbalance is the GPU’s true Achilles’ heel and fixing it will require rethinking computer architecture from the ground up.
Toward a purpose-built inference architecture
Trying to repurpose a GPU—an architecture originally centered on massively parallel training workloads—to serve as a high-performance inference engine is a dead end. Training and inference operate under fundamentally different constraints. Training tolerates long runtimes, low compute utilization, and massive power consumption. Inference demands sub-millisecond latency, throughput efficiency approaching 100%, and energy frugality at scale.
Instead of bending a training-centric design out of shape, we must start with a clean sheet and apply a new set of rules tailored to inference from the ground up.
Rule #1—Replace caches with massive register files
Traditional GPUs rely on multi-level caches (L1/L2/L3) to hide memory latency in highly parallel workloads. Inference workloads are small, bursty, and demand predictable latency. Caches introduce uncertainty (hits versus misses), contention, and energy overhead.
A purpose-built inference architecture should discard caches entirely and instead use huge, directly addressed register-like memory arrays with index-based access instead of address-based lookup. This allows deterministic access latency and constant-time delivery of operands. Aim for tens or even hundreds of millions of bits of on-chip register storage, positioned physically close to the compute cores to fully saturate their pipelines (Figure 2).
Figure 2 A comparison of the memory hierarchy in traditional processing architectures (left) versus an inference-driven, register-like, tightly coupled memory architecture (right). Source: VSORA
Rule #2—Provide extreme memory bandwidth
Inference cores are only as fast as the data feeding them. Stalls caused by memory bottlenecks are the single biggest cause of underutilized compute in AI accelerators today. GPUs partially mask this with massive over-provisioning of threads, which adds latency and energy cost—both unacceptable in inference.
The architecture must guarantee multi-terabyte-per-second bandwidth between registers and cores, sustaining continuous operand delivery without buffering delays. This requires wide, parallel datapaths and banked memory structures co-located with compute to enable every core to run at full throttle, every cycle.
Rule #3—Execute matrices natively in hardware
Most modern AI workloads are built from matrix multiplications, yet GPUs break these down into scalar or vector ops stitched together by compilers. This incurs instruction overhead, excess memory traffic, and scheduling complexity.
Inference cores should treat matrices as first-class hardware objects with dedicated matrix execution units that can perform multiply–accumulate across entire tiles in a single instruction. This eliminates scalar orchestration overhead, slashes instruction counts and maximizes both performance and energy efficiency per operation.
Rule #4—Expand the instruction set beyond tensors
AI is rapidly evolving beyond basic tensor algebra. Many new architectures—for instance, transformers with sparse attention, hybrid symbolic-neural models, or signal-processing-enhanced models—need richer functional primitives than today’s narrow tensor op sets can offer.
Equip the ISA with a broad library of DSP-style operators; for example, convolutions, FFTs, filtering, non-linear transforms, and conditional logic. This empowers developers to build innovative new model types without waiting for hardware revisions, enabling rapid architectural experimentation on a stable silicon base.
Rule #5—Orchestrate cores via a smart, reconfigurable NoC
Inference workloads are highly structured but vary layer by layer: some are dense, others sparse; some are compute-bound, others bandwidth-bound. A static interconnect leaves many cores idle depending on the model phase.
Deploy a dynamic network-on-chip (NoC) that can reconfigure on the fly, allowing the algorithm itself to control dataflow. This enables adaptive clustering of cores, localized register sharing, and fine-grained scheduling of sparse layers. The result is maximized utilization and minimal data-movement energy, tuned dynamically to each workload phase.
Rule #6—Build a compiler that hides complexity
A radically novel architecture risks becoming unusable if programmers must hand-tune for it. To drive adoption, complexity must be hidden behind clean software abstractions.
Provide a smart compiler and runtime stack that automatically maps high-level models to the underlying architecture. It should handle data placement, register allocation, NoC reconfiguration, and operator scheduling automatically, exposing only high-level graph APIs to developers. This ensures users see performance, not complexity, making the architecture accessible to mainstream AI developers.
Reengineering the inference future
Training celebrated brute-force performance. Inference will reward architectures that are data-centric, energy-aware, and precision-engineered for massive real-time throughput.
These design rules, pioneered by semiconductor design outfits like VSORA in their development of efficient AI inference solutions, represent an engineering breakthrough—a highly scalable architecture that redefines inference speed and efficiency, from the world’s largest data centers to edge intelligence powering Level 3–5 autonomy.
Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation.
Related Content
- Partitioning to optimize AI inference for multi-core platforms
- Custom AI Inference Has Platform Vendor Living on the Edge
- The next AI frontier: AI inference for less than $0.002 per query
- Startup To Take On AI Inference With Huge SiP, Custom Memory
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
Power Tips #145: EIS applications for EV batteries
Rechargeable batteries are the primary components in EVs, mobile devices, and energy storage systems. The batteries’ working conditions, including state of health (SOH), state of charge (SOC), and temperature, are essential to reliably and efficiently operate devices or equipment. Predicting battery SOH and SOC is becoming a priority in order to increase their performance and safety.
Physically, you can represent the batteries as an electrical circuit model, as shown in Figure 1. The resistors (Rs) and capacitors (Cs) in the model have good correlations with battery states. Electrochemical impedance spectroscopy (EIS) technologies are crucial to characterize the elements of the model in order to obtain the batteries’ working conditions.
Figure 1 The equivalent circuit of a battery showing Rs and Cs that have a good correlation with battery states. Source: Texas Instruments
Rs and Cs change when the batteries are in different states, leading to impedance changes. With EIS techniques, applying AC signals to the batteries and measuring their voltage and current response enables calculations of the impedance data of the batteries in the frequency domains. By analyzing the impedance data, you can know the battery’s SOC, internal temperature, and battery life. EV manufacturers are now researching how to apply EIS techniques to a battery management system (BMS).
Nyquist tool
Applying an AC voltage to a circuit excites an AC current. Equation 1 calculates the impedance, which varies as frequency changes if the circuit is not a purely resistive load.
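Equation 1 did not survive reproduction here; from the surrounding description, it is the AC form of Ohm’s law:

$$Z(j\omega) = \frac{V(j\omega)}{I(j\omega)} = |Z|\,e^{j\varphi} \qquad \text{(Equation 1)}$$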
Figure 2 illustrates Ohm’s law for an AC voltage. You can plot the impedance by applying many frequencies. Typically, a battery is modeled as Rs and Cs in combination, as shown in Figure 1. Figure 3 illustrates the impedance plot using a Nyquist tool.
Figure 2 Ohm’s law in an AC circuit; the impedance can be plotted by applying many frequencies. Source: Texas Instruments
Figure 3 The plot of impedance using the Nyquist tool. Source: Texas Instruments
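To make the Nyquist construction concrete, here is a small Python sketch; the component values are placeholders, and the single parallel R‖C branch is a simplification of Figure 1’s full model:

```python
# Impedance sweep of a simplified battery model: series resistance Rs plus
# one parallel R||C branch. Nyquist convention plots Re(Z) vs. -Im(Z); the
# R||C branch traces a semicircle of diameter Rct, offset by Rs.
import numpy as np

Rs, Rct, Cdl = 0.02, 0.05, 1.0            # ohms, ohms, farads (illustrative)
f = np.logspace(-2, 3, 200)               # 0.01 Hz .. 1 kHz
w = 2 * np.pi * f
Z = Rs + Rct / (1 + 1j * w * Rct * Cdl)   # complex impedance at each frequency
re, neg_im = Z.real, -Z.imag              # the Nyquist-plot coordinates
```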
Methods of excitation current generation
You can use the EIS technique for one cell, multiple cells, modules, or a pack. Performing an EIS measurement requires the application of AC current to the batteries. For different battery system voltages, there are four different methods to generate the excitation current. Let’s review them.
Method #1: Resistor dissipation at the cell level and module level
In Figure 4, the power switch (S1), power resistor (Rlimit), sense resistor (Rsense), and a controller produce the excitation source. The controller generates a sinusoidal pulse-width modulation (SPWM) signal for S1. One or several battery cells are connected in series with the excitation source. Turning on S1 draws current from the batteries through Rlimit, where the energy is dissipated as heat. When the voltage is high, the power dissipation is significantly large.
Figure 4 EIS with a resistor load where S1, Rsense, and the controller source produce the excitation circuit. Source: Texas Instruments
You can use this method at the cell level and small module level with low voltage, but it is not a practical solution for high-voltage batteries in EVs or hybrid EVs (HEVs) because the power dissipation is too great.
Method #2: An isolated DC/DC converter at the pack level
In an EV powertrain, high-voltage batteries charge low-voltage batteries through an isolated DC/DC converter (as shown in Figure 5), which you can design to support bidirectional power flow. During EIS excitation, power transfers from the high- to the low-voltage batteries during the positive cycle; power is then reversed from the low- to the high-voltage side during the negative cycle. This method uses existing hardware without adding extra costs. However, the excitation source is limited by the capacity of the low-voltage batteries. It is particularly challenging for 800-V/12-V battery systems.
Figure 5 EV power train with high-voltage batteries charging low-voltage batteries through an isolated DC/DC converter. Source: Texas Instruments
Method #3: A non-isolated DC/DC converter in stack mode for the pack
This method uses a non-isolated DC/DC converter to generate excitation current between two battery modules. During EIS excitation, the charge transfers from Vbat1 to Vbat2 during the positive cycle, and back from Vbat2 to Vbat1 during the negative cycle. In Figure 6, two battery modules are connected in stack mode. Two active half-bridges are connected in series, and their switching nodes are connected through an inductor and a capacitor.
There are several advantages to this method: one is the use of low-voltage rating switches in a high-voltage system; the other is that the switches operate under zero-voltage switching (ZVS) conditions. Additionally, this method enables the production of a larger excitation current without adding stress.
Figure 6 A non-isolated DC/DC converter connecting two battery modules in stack mode. Source: Texas Instruments
Method #4: A non-isolated DC/DC converter in parallel mode for the pack
This method connects two battery modules in parallel mode, as shown in Figure 7. The two modules share a common ground. The charges are transferred to the inductor and capacitor from VBat1; then the charges stored in the inductor and capacitor are transferred to VBat2. The parallel mode and stack mode are swappable by properly reconfiguring the two modules to meet different charging stations or battery voltage systems, such as 400 V or 800 V.
Figure 7 A non-isolated DC/DC converter in parallel mode for the pack. Source: Texas Instruments
EIS measurement
Figure 8 divides the battery pack into two modules. S2 and S3 are battery-management ICs (BMICs). The BQ79826 measures the voltage of every cell through an analog front end. Applying the AC current to the battery modules builds up an AC voltage on each cell, which the BMICs then measure. A current-measurement IC measures the excitation current sensed by a current shunt. A communication bridge IC connects all BMICs through a daisy-chain communication bus. The BQ79826 uses its EIS engine to calculate the impedance, which is transmitted to a microcontroller for the Nyquist plot. The Controller Area Network (CAN) protocol provides communication while MCU1 controls the generation of the excitation current.
Figure 8 Block diagram of an EIS measurement that divides the battery pack into two modules, each monitored by BMICs. Source: Texas Instruments
In a simulation of a non-isolated stacked active bridge (SAB) based excitation circuit, the conditions were VBat1 = VBat2 = 400 V, Fs =100 kHz, and current amplitude = 5 A. Figure 9 shows the excitation current waveform from the simulation. The blue trace is VBat1 current, while the green trace is VBat2 current.
Figure 9 Excitation current generated by the stacked active bridge; the blue trace is the VBat1 current, and the green trace is the VBat2 current. Source: Texas Instruments
The frequency synchronization between the controller of the excitation source and the BQ79826 is essential to minimize measurement errors. One solution is to take the SPWM signal generated by the BQ79826 as the reference of the excitation source (Figure 10). The excitation source and EIS engine of BQ79826 are automatically synchronized.
Figure 10 Block diagram for system timing synchronization of the excitation source and the BQ79826 in order to minimize measurement errors. Source: Texas Instruments
When building hardware to evaluate an EIS measurement, the excitation source should have high efficiency to minimize charge losses in the batteries. The total current harmonics should also be small in order to increase the signal-to-noise ratio (SNR). Figure 11 shows the efficiency measurement of the converter using a DC voltage and a DC load. With a higher excitation current, the power efficiency is higher because of the larger ZVS range. Above a 1-A amplitude of excitation current, efficiency is >95%. By contrast, in the traditional load-resistor method, all of the power is dissipated.
Figure 11 Efficiency measurement of a stacked active bridge-based power stage. Source: Texas Instruments
A fast Fourier transform (FFT) is a tool to evaluate the SNR of the excitation current. Placing six 18650 batteries in series for one module, with two modules connected to the stacked active bridges, demonstrates the quality of excitation current. In Figure 12, two tones of 10 Hz and 100 Hz are generated simultaneously to reduce the excitation time.
Figure 12 FFT of the excitation current, where 10-Hz and 100-Hz tones are generated simultaneously. Source: Texas Instruments
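Here is a small Python sketch of the multi-tone idea; all amplitudes and cell parameters are placeholders, and in practice the voltage would be measured rather than modeled:

```python
# Multi-tone EIS: excite at 10 Hz and 100 Hz at once, then read the complex
# impedance at each tone's FFT bin. T is chosen so both tones land on bins.
import numpy as np

fs, T = 1000, 10.0
t = np.arange(0, T, 1 / fs)
i_exc = 2.0 * np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 100 * t)

# Stand-in cell: the same simplified Rs + R||C model as the earlier sketch.
Rs, Rct, Cdl = 0.02, 0.05, 1.0
z = lambda fq: Rs + Rct / (1 + 1j * 2 * np.pi * fq * Rct * Cdl)
v = sum(2.0 * abs(z(fq)) * np.sin(2 * np.pi * fq * t + np.angle(z(fq)))
        for fq in (10, 100))

I, V = np.fft.rfft(i_exc), np.fft.rfft(v)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for fq in (10, 100):
    k = np.argmin(abs(freqs - fq))
    print(f"{fq} Hz: Z = {V[k] / I[k]:.4f}")   # recovers z(10) and z(100)
```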
Figure 13 is a Nyquist plot showing the impedances of different cells using two 200-V battery modules. At lower excitation frequencies, the difference between the measurements is small. There are more discrepancies at higher excitation frequencies (shown on the left side of the graphs), but the impedances within this range are not important.
Figure 13 Nyquist plot showing the impedances of different cells using two 200-V battery modules. Source: Texas Instruments
EIS technique
EIS is an evolutionary technique with applications for EV and HEV batteries. EIS techniques enable users to obtain real-time information about the SOC, SOH, and temperature during battery system operation.
Achieving good EIS results still requires resolving challenges such as developing accurate algorithms, utilizing reliable excitation systems, and minimizing noise sensitivity.
Sean Xu currently works as a system engineer in Texas Instruments’ Power Design Services team to develop power solutions using advanced technologies for automotive applications. Previously, he was a system and application engineer working on digital control solutions for enterprise, data center, and telecom power. He earned a Ph.D. from North Dakota State University and a Master’s degree from Beijing University of Technology.
Related Content
- Power Tips #144: Designing an efficient, cost-effective micro DC/DC converter with high output accuracy for automotive applications
- Power Tips #143: Tips for keeping the power converter cool in automotive USB PD applications
- Power Tips #75: USB Power Delivery for automotive systems
- Power Tips #101: Use a thermal camera to assess temperatures in automotive environments
PWM buck regulator interface generalized design equations

A while back, I published the Design Idea (DI) “Simple PWM interface can program regulators for Vout < Vsense.” It showed some simple circuits for PWM programming of standard bucking-type regulator chips, both linear and switching, including applications that need an output voltage span that can swing well below the regulator’s sense voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Recent reader comments have shown interest in applying those designs to different applications and regulators. So, here’s a step-by-step procedure to make that process easier.
Note that it only works if Vx > 2Vs and Vl > Vs.
Figure 1 Ten discrete parts comprise a circuit for linear regulator programming with PWM.
The steps are:
- Vs = U1 sense voltage from U1 datasheet (typically 0.5 to 1.25 V)
- Vl = available logic rail (typically 3 to 5 V)
- Vx = desired maximum output voltage at PWM duty factor = 100%
- Vpp = PWM peak to peak amplitude, typically Vl
- Fp = PWM rep rate
- N = PWM bits of resolution, N > 4
- R1 = recommended value from U1 datasheet example application
- R2 = R1(Vx/Vs – 1)
- R4 = R2Vl/Vs – R1 – R2
- R5 = (Vl – Q2 Vbe)(Q2 minimum beta)(R4 + R1 + R2)/Vl
- R3 = Vpp/(Vs/R1 + (Vl – Vs)/(R1 + R4))
- R3C3 = R2C2 = 2^((N – 2)/2)/Fp
- C1 = C2R2/R1
Now, taking the inexpensive XLsemi XL4016 asynchronous buck converter as an example case for U1 and turning the crank on these givens yields:
- Vs = 1.25 V
- Vl = 3.3 V
- Vx = 30 V
- Vpp = 3.3 V
- Fp = 10 kHz
- N = 8
- R1 = recommended value from U1 datasheet figure 4 = 3.3 kΩ
- R2 = 75 kΩ
- R4 = 120 kΩ
- R5 = 15 MΩ
- R3 = 8.2 kΩ
- C3 = 0.1 µF, C2 = 0.011 µF
- C1 = 0.27 µF
This yields Figure 2.
Figure 2 General design-accommodating parameters listed above. Note that U1-specific parts (e.g., inductor, capacitors, and power diode) are not shown.
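As a cross-check of the procedure, here is a minimal Python rendering of the step equations above (my sketch, not Woodward’s published code). The Q2 Vbe and minimum-beta defaults are assumptions typical of a small-signal NPN; running it with the XL4016 givens reproduces the listed values before rounding to standard component values.

```python
# Sketch of the step equations; real designs round the results to the
# nearest standard (e.g., E12/E24) component values.
def pwm_buck_values(Vs, Vl, Vx, Vpp, Fp, N, R1, q2_vbe=0.65, q2_min_beta=100):
    assert Vx > 2*Vs and Vl > Vs, "design preconditions violated"
    R2 = R1*(Vx/Vs - 1)
    R4 = R2*Vl/Vs - R1 - R2
    R5 = (Vl - q2_vbe)*q2_min_beta*(R4 + R1 + R2)/Vl
    R3 = Vpp/(Vs/R1 + (Vl - Vs)/(R1 + R4))
    tau = 2**((N - 2)/2)/Fp        # the R3C3 = R2C2 time constant
    C3, C2 = tau/R3, tau/R2
    C1 = C2*R2/R1
    return dict(R2=R2, R4=R4, R5=R5, R3=R3, C1=C1, C2=C2, C3=C3)

# XL4016 example givens from the article:
for name, value in pwm_buck_values(Vs=1.25, Vl=3.3, Vx=30, Vpp=3.3,
                                   Fp=10e3, N=8, R1=3.3e3).items():
    print(f"{name} = {value:.3g}")
```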
Note that if the microamps and millivolts of residual zero offset that persist on the unloaded supply output at PWM = zero duty factor aren’t objectionable, then the Q2/R5 current sink is irrelevant and can be omitted.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Simple PWM interface can program regulators for Vout < Vsense
- Revisited: Three discretes suffice to interface PWM to switching regulators
- Three discretes suffice to interface PWM to switching regulators
- Cancel PWM DAC ripple with analog subtraction
- Add one resistor to allow DAC control of switching regulator output
The post PWM buck regulator interface generalized design equations appeared first on EDN.
A crash course in 3D IC technology

What’s 3D IC, and what’s causing the shift from 2D IC to 3D IC? How does this new technology relate to heterogeneous integration and advanced packaging? What is required for a successful 3D IC implementation? EDN recently published a three-article series to cover multiple facets of this advanced packaging technology.
Below is a sneak peek at this three-article series, which explains 3D IC fundamentals, microarchitectures, toolkits, and design use cases.
Part one, titled “Making your architecture ready for 3D IC”, provides essential context and technical depth for design engineers working toward highly integrated, efficient, and resilient 3D IC systems. It explains 3D IC microarchitectures that redefine how data and controls move through a system, how blocks are partitioned and co-optimized across both horizontal and vertical domains, and how early-stage design decisions address the unique challenges of 3D integration.
Making your architecture ready for 3D IC
Part two, titled “Putting 3D IC to work for you”, outlines the challenges and opportunities in adopting 3D IC technology. It demonstrates how 3D IC enables efficient chiplet integration and reuse, accelerates innovation, and guarantees manufacturability across organizational boundaries. The article also provides a detailed examination of 3D IC design toolkits and workflows, as well as their incorporation of AI technology.
The third and final part, titled “Automating FOWLP design: A comprehensive framework for next-generation integration”, presents fan-out wafer-level packaging (FOWLP) as a case study, showing how automation frameworks manage the inherent complexity of advanced packaging and the scale of modern layouts, racking up millions of pins and tens of thousands of nets. It also demonstrates how effective frameworks promote collaboration among package designers, layout specialists, signal and power integrity analysts, and thermal and mechanical engineers.
Automating FOWLP design: A comprehensive framework for next-generation integration
Related Content
- 3D IC Design
- Thermal analysis tool aims to reinvigorate 3D-IC design
- Heterogeneous Integration and the Evolution of IC Packaging
- Tighter Integration Between Process Technologies and Packaging
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
The post A crash course in 3D IC technology appeared first on EDN.
Meta Connect 2025: VR still underwhelms; will smart glasses alternatively thrive?

For at least as long as Meta’s been selling conventional “smart” glasses (with partner EssilorLuxottica, whose eyewear brands include the well-known Oakley and Ray-Ban), rumors suggested that the two companies would sooner or later augment them with lens-integrated displays. The idea wasn’t far-fetched; after all, Google Glass had one (standalone, in this case) way back in early 2013:
Meta founder and CEO Mark Zuckerberg poured fuel on the rumor fire when, last September, he demoed the company’s chunky but impressive Orion prototype:
And when Meta briefly, “accidentally” (call me skeptical, but I always wonder how much of a corporate mess-up versus an intentional leak these situations often really are) published a promo clip for (among other things) a display-inclusive variant of its Meta Ray-Ban AI glasses last week, we pretty much already had our confirmation ahead of the last-Wednesday evening keynote, in the middle of the 2025 edition of the company’s yearly Connect conference:
Yes, dear readers, as of this year, I’ve added yet another (at least) periodic tech-company event to my ongoing coverage suite, as various companies’ technology and product announcements align ever more closely with my editorial “beat” and associated readers’ interests.
But before I dive fully into those revolutionary display-inclusive smart glasses details, and in the spirit of crawling-before-walking-before-running (and hopefully not stumbling at any point), I’ll begin with the more modest evolutionary news that also broke at (and ahead of) Connect 2025.
Smart glasses get sporty
In the midst of my pseudo-teardown of a transparent set of Meta Ray-Ban AI Glasses published earlier this summer:
I summarized the company’s smart glasses product-announcement cadence up to that point. The first-generation Stories introduced in September 2020:
was, I wrote, “fundamentally a content capture and playback device (plus a fancy Bluetooth headset to a wirelessly tethered smartphone), containing an integrated still and video camera, stereo speakers, and a three-microphone (for ambient noise suppression purposes) array.”
The second-generation AI Glasses, unveiled three-plus years later in October 2023 (I own two sets, in fact, both Transitions-lens equipped):
make advancements on these fundamental fronts…They’re also now moisture (albeit not dust) resistant, with an IPX4 rating, for example. But the key advancement, at least to this “tech-head”, is their revolutionary AI-powered “smarts” (therefore the product name), enabled by the combo of Qualcomm’s Snapdragon AR1 Gen 1, Meta’s deep learning models running both resident and in the “cloud”, and speedy bidirectional glasses/cloud connectivity. AI features include real-time language Live Translation plus AI View, which visually identifies and audibly provides additional information about objects around the wearer.
And back in June (published then, though written in early May), I was already teasing what was to come:
Next-gen glasses due later this year will supposedly also integrate diminutive displays.
More recently, on June 20 (just three days before my earlier coverage had appeared in EDN, in fact), Meta and EssilorLuxottica released the sports-styled, Oakley-branded HSTN, a new member of the AI Glasses product line:
The battery life was nearly 2x longer: up to eight hours under typical use, and 19 hours in standby. They charged to 50% in only 20 minutes. The battery case now delivered up to 48 operating hours’ worth of charging capacity, versus 36 previously. The camera, still located in the left endpiece, now captured up to 3K-resolution video (albeit the same 12-Mpixel still images as previously). And the price tag was also boosted: $499 for the initial limited-edition version, followed by more mainstream $399 variants.
A precursor retrofit and sports-tailored expansion
Fast forward to last week, and the most modest news coming from the partnership is that the Oakley HSTN enhancements have been retrofitted to the Ray-Ban styles, with one further improvement: 1080p video can now be captured at up to 60 fps in the Gen 2 versions. Cosmetically, they look unchanged from their Gen 1 precursors. And speaking of looks, trust me when I tell you that I don’t look nearly as cool as any of these folks do when donning them:
Meta and EssilorLuxottica have also expanded the Oakley-branded AI Glasses series beyond the initial HSTN style to the Vanguard line, in the process moving the camera above the nosepiece but otherwise sticking with the same bill-of-materials list, therefore specs, as the Ray-Ban Gen 2s:
And all of these, including a welcome retrofit to the Gen 1 Ray-Ban AI Glasses I own, will support a coming-soon new feature called conversation focus, which “uses the glasses’ open-ear speakers to amplify the voice of the person you’re talking to, helping distinguish it from ambient background noise in cafes and restaurants, parks, and other busy places.”
AI on display
And finally, what you’ve all been waiting for: the newest, priciest (starting at $799) Meta Ray-Ban Display model:
Unlike last year’s Orion prototype, they’re not full AR; the display area is restricted to a 600×600 resolution, 30 Hz refresh rate, 20-degree lower-right portion of the right eyepiece. But with 42 pixels per degree (PPD) of density, it’s still capable of rendering crisp, albeit terse information; keep in mind how close to the user’s right eyeball it is. And thanks to its coupling to Transitions lenses, early reviewer feedback suggests that it’s discernible even in bright sunlight.
Equally interesting is its interface scheme. While I assume that you can still control them using your voice, this time Meta and EssilorLuxottica have transitioned away from the right-arm touchpad and instead to a gesture-discerning wristband (which comes in two color options):
based on very cool (IMHO) surface EMG (electromyography) technology:
Again, the initial reviewer feedback that I’ve seen has been overwhelmingly positive. I’m guessing that at least in this case (Meta’s press release makes it clear that Orion-style full AR glasses with two-hand gesture interface support are still under active development), the company went with the wristband approach both because it’s more discreet in use and to optimize battery life. An always-active front camera, after all, would clobber battery life well beyond what the display already seemingly does; Meta claims six hours of “mixed-use” between-charges operating life for the glasses themselves, and 18 hours for the band.
Longstanding silicon-supplier partner Qualcomm was notably quieter than usual from an announcement standpoint last week. Back in June, it had unveiled the Snapdragon AR1+ Gen 1 Platform, which may very well be the chipset foundation of the display-less devices launched last week. Then again, given that the aforementioned operating life and video-capture quality advancements versus their precursor (running the Snapdragon AR1) are comparatively modest, they may result mostly-to-solely from beefier integrated batteries and software optimizations.
The Meta Ray-Ban Display, on the other hand, is more likely to be powered by a next-generation chipset, whether from Qualcomm—the Snapdragon AR1+ Gen 1 or perhaps even one of the company’s higher-end Snapdragon XR platforms—or another supplier. We’ll need to wait for the inevitable teardown-to-come (at $799, not from yours truly!) to know for sure. Hardware advancements aside, I’m actually equally excited (as will undoubtedly also be the software developers out there among my readership) to hear what Meta unveiled on day 2: a “Wearables Device Access Toolkit” now available as a limited developer preview, with a broader rollout planned for next year.
The pending arrival of more robust third-party app support neatly leads into my closing topic: what’s in all of this for Meta? The company has clearly grown beyond its Facebook origin and foundation, although it’s still fundamentally motivated to cultivate a community that interacts and otherwise “lives” on its social media platform. AI-augmented smart glasses are just another camera-plus-microphones-and-speakers (and now, display) onramp to that platform. It’ll be interesting to see both how Meta’s existing onramps continue to evolve and what else might come next from a more revolutionary standpoint. Share your guesses in the comments!
p.s…I’m not at all motivated to give Meta any grief whatsoever for the two live-demo glitches that happened during the keynote, given that the alternative is a far less palatable, fully pre-recorded “sanitary” video approach. What I did find interesting, however, were the root causes of the glitches: an obscure, sequence-of-events-driven software bug not encountered previously, as well as a local server overload fueled by the large number of AI Glasses in the audience (a phenomenon not encountered during the comparatively empty-venue preparatory dress rehearsals). Who would have thought that a bunch of smart glasses would result in a DDoS?
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Smart glasses skepticism: A look at their past, present, and future(?)
- Ray-Ban Meta’s AI glasses: A transparency-enabled pseudo-teardown analysis
- Apple’s Spring 2024: In-person announcements no more?
The post Meta Connect 2025: VR still underwhelms; will smart glasses alternatively thrive? appeared first on EDN.
Debugging a “buggy” networked CableCARD receiver

Welcome to the last in a planned series of teardowns resulting from the mid-2024 edition of “the close-proximity lightning strike that zapped Brian’s electronics devices”, following in the footsteps of a hot tub circuit board, a three-drive NAS, two eight-port GbE switches and one five-port one, and a MoCA networking adapter…not to mention all the gear that had expired in the preceding 2014 and 2015 lightning-exposure iterations…
This is—I’m sad to say, in no small part because they’re no longer sold (even in factory-refurbished condition) and my accumulated “spares” inventory will eventually be depleted—the third straight time that a SiliconDust HDHomeRun Prime has bitten the dust:
The functional failure symptoms—an inability to subsequently access the device from elsewhere on the LAN, coupled with an offline-status front-panel LED—were identical in the first and second cases, although the first time around, I couldn’t find any associated physical damage evidence. The second time around, on the other hand…
This third time, though, the failure symptoms were somewhat different, although the “dead” end (dead-end…get it? Ahem…) result was the same: a never-ending loop of the system seemingly starting up, getting “stuck”, and rebooting:
Plus, my analysis of the systems’ insides in the first two cases had been more cursory than the comparative verbosity to which subsequent teardowns have evolved, so I decided a thorough revisit was apropos. I’ll start with some overview photos of our patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
See those left-side ventilation slots? Hold that thought:
Onward:
Two screws on top:
And two more on the bottom:
will serve as our pathway inside:
Before diving in, here’s visual confirmation:
that the “wall wart” still works (that said, I still temporarily swapped in the replacement HDHomeRun Prime’s PSU to confirm that any current-output deficit with this one wasn’t the root cause of the system’s bootup woes…it’s happened to me with other devices, after all…)
Onward:
Now for that inner plastic sleeve still surrounding three sides of the PCB, which slips right off:
This seems to be the same rev. 1.7D version of the design that I saw in the initial November 2014 teardown, versus the rev. 1.7F iteration analyzed a year (and a few months) later:
Once again, a heatsink dominates the PCB topside-center landscape, surrounded by, to the left, a Macronix MX25L1655D 16-Mbit serial-interface flash memory (hold that thought) and a Hynix (now SK Hynix) H5PS5162FFR 512-Mbit DDR2 SDRAM, and above, a Realtek RTL8211CL single-port Ethernet controller. Back in late 2014, I relied on WikiDevi (or, if you prefer, DeviWiki) to ID what was underneath the heatsink:
The chip is Ubicom’s IP7150U communications and media processor; the company was acquired in early 2012 and I can’t find any mention of the SoC on new owner Qualcomm’s website. Here’s an archive of the relevant product page.
I confess that I had subsequently completely forgotten about my earlier online sleuthing success; regardless, I was determined to pop the heatsink off this time around:
Next, some rubbing alcohol and a fingernail to scrape off the marking-obscuring glue:
Yep, it’s the Ubicom IP7150U. Here’s an interesting overview, by the way, of what happens when you interact with the CPU (and broader system) software via the SiliconDust-supplied Linux-based open-source development toolset and a command-line interface.
I was also determined this time to pry off the coax tuner subsystem’s Faraday cage and see what was underneath, although in retrospect I could have saved myself the effort by just searching for the press release first (but then again, what’s the fun in that?):
Those are MaxLinear MxL241SF single-die integrated tuner and QAM demodulator ICs, although why there are four of them in a three-tuner system design is unclear to me…(readers?)
Grace Hopper would approve
Now let’s flip the PCB over and see what’s underneath:
What’s that blob in the lower right corner, under the CableCARD slot? Are those…dead bugs?
Indeed!
I’d recently bought a macro lens and ring light adapter set for my smartphone:
Which I thought would be perfect to try out for the first time in this situation:
That optical combo works pretty well, eh? Apparently, the plants in the greenhouse room next door to the furnace room, which does double-duty as my network nexus, attract occasional gnats. But how and why did they end up here? For one thing, the LED at this location on the other side of the PCB is the one closest to the aforementioned ventilation slots (aka, gnat access portals). And for another, this particular LED is a) perpetually illuminated whenever the device is powered up and b) multicolor, whereas the others are either green-or-off. As I wrote in 2014:
At the bottom [editor note: of the PCB topside] are the five front-panel LEDs. The one on the left [editor note: the “buggy” one] is normally green; it’s red when the HDHomeRun Prime can’t go online. The one to its right is also normally green; it flashes when the CableCARD is present but not ready, and is dark when the CableCARD is not present or not detected. And the remaining three on the right, when green-lit, signify a respective tuner in use.
Hey, wait…I wonder what might happen if I were to scrape off the bugs?
Nope, the device is still DOA:
I’ll wrap up with one more close-up photo, this one of the passives-dominated backside area underneath the topside Ubicom processor and its memory and networking companion chips:
And in closing, a query: why did the system die this time? As was the case the first time, albeit definitely not the case the second time, there’s no obvious physical evidence for the cause of this demise. Generally, and similar to the MoCA adapter I tore down last month, these devices have dual potential EMP exposure sources, Ethernet and coax. Quoting from last month’s writeup:
Part of the reason why MoCA devices keep dying, I think, is due to their inherent nature. Since they convert between Ethernet and coax, there are two different potential “Achilles Heels” for incoming electromagnetic spikes. Plus, the fact that coax routes from room to room via cable runs attached to the exterior of the residence doesn’t help.
In this case, to clarify, the “weak link” coax run is the one coming into the house from the Comcast feed at the street, not a separate coax span that would subsequently run from room to room within the home. Same intermediary exterior-exposure conceptual vulnerability, however.
The way the device is acting this time, though, I wonder if the firmware in the Macronix flash memory might have gotten corrupted, resulting in a perpetual-reboot situation. Or maybe the processor just “loses its mind” the first time it tries to access the no-longer-functional Ethernet interface (since this seemed to be the root cause of the demise the first two times) and restarts. Reader theories, along with broader thoughts, are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans
- Lightning strikes…thrice???!!!
- Computer and network-attached storage: Capacity optimization and backup expansion
- A teardown tale of two not-so-different switches
- Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch
- Broke MoCA II: This time, the wall wart got zapped, too
The post Debugging a “buggy” networked CableCARD receiver appeared first on EDN.
A short tutorial on hybrid relay design

What’s a hybrid relay? How does it work? What are its key building blocks? Whether you are designing a power control system or tinkering with a do-it-yourself automation project, it’s important to demystify the basics and know why this hybrid approach is taking off. Here is a brief tutorial on hybrid relays, which also explains why they are becoming the go-to choice for engineers and makers alike.
Read the full article at EDN’s sister publication, Planet Analog.
Related Content
- Common Types of Relay
- IC drives up to four single-coil latching relays
- Designing a simple electronic impulse relay module
- Mastering latching relays: A hands-on design guide
- Electromechanical relays: an old-fashioned component solves modern problems
The post A short tutorial on hybrid relay design appeared first on EDN.
Automating FOWLP design: A comprehensive framework for next-generation integration

Fan-out wafer-level packaging (FOWLP) is becoming a critical technology in advanced semiconductor packaging, marking a significant shift in system integration strategies. Industry analyses show 3D IC and advanced packaging make up more than 45% of the IC packaging market value, underscoring the move to more sophisticated solutions.
The challenges are significant—from thermal management and testing to the need for greater automation and cross-domain expertise—but the potential benefits in terms of performance, power efficiency, and integration density make these challenges worth addressing.
Figure 1 3D IC and advanced packaging make up more than 45% of the IC packaging market value. Source: Siemens EDA
This article explores the automation frameworks needed for successful FOWLP design and focuses on core design processes and effective cross-functional collaboration.
Understanding FOWLP technology
FOWLP is an advanced packaging method that integrates multiple dies from different process nodes into a compact system. By eliminating substrates and using wafer-level batch processing, FOWLP can reduce cost and improve yield. Because it shortens interconnect lengths, FOWLP packages offer lower signal delays and power consumption compared to conventional methods. They are also thinner, making them ideal for space-constrained devices such as smartphones.
Another key benefit is support for advanced stacking, such as placing DRAM above a processor. As designs become more complex, this enables higher performance while maintaining manageable form factors. FOWLP also supports heterogeneous integration, accommodating a wide array of die combinations to suit application needs.
The need for automation in FOWLP design
Designing with FOWLP exceeds the capabilities of traditional PCB design methods. Two main challenges drive the need for automation: the inherent complexity of FOWLP and the scale of modern layouts, racking up millions of pins and tens of thousands of nets. Manual techniques cannot reliably manage this complexity and scale, increasing the risk of errors and inefficiency.
Adopting automation is not simply about speeding up manual tasks. It requires a complete change in how design teams approach complex packaging design and collaborate across disciplines. Let’s look at a few of the salient ways to make this transformation successful.
- Technology setup
All FOWLP designs start with a thorough technology setup. Process design kits (PDKs) from foundries specify layer constraints, via spans, and spacing rules. Integrating these foundry-specific rules into the design environment ensures every downstream step follows industry requirements.
Automation frameworks must interpret and apply these rules consistently throughout the design. Success here depends on close attention to detail and a deep understanding of both the foundry’s expectations and the capabilities of the design tools.
- Assembly and floor planning
During assembly and floor planning, designers establish the physical relationships between dies and other components. This phase must account for thermal and mechanical stress from the start. Automation makes it practical to incorporate early thermal analysis and flag potential issues before fabrication.
Effective design partitioning is also critical when working with automated layouts. Automated classification and grouping of nets allow custom routing strategies; this matters most for high-speed die-to-die interfaces, which warrant different treatment than less critical utility signals. The framework should distinguish between these net classes and apply suitable methodologies, as the sketch below illustrates.
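To make this concrete, here is a hypothetical Python sketch of name-based net classification; it does not represent Siemens EDA’s actual tools or APIs, and the patterns and constraint values are invented for illustration.

```python
# Hypothetical net classifier: bucket nets by naming convention, then
# attach a routing strategy. Patterns and constraints are illustrative.
import re

RULES = [
    (re.compile(r"^(d2d|ucie)_"), {"class": "die-to-die",
                                   "max_len_um": 500, "diff_pair": True}),
    (re.compile(r"^(vdd|vss)"),   {"class": "power",
                                   "strategy": "power_island"}),
]
DEFAULT = {"class": "utility", "strategy": "auto_route"}

def classify(net_name: str) -> dict:
    for pattern, rule in RULES:
        if pattern.match(net_name):
            return rule
    return DEFAULT

for net in ("d2d_clk_p", "vdd_core", "gpio_17"):
    print(net, "->", classify(net))
```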
- Fan-out and routing
Fan-out and routing are among the most technically challenging parts of FOWLP design. The automation system must support advanced power distribution networks such as regional power islands, flood planes, or striping. For signal routing, the system needs to manage many constraints at once, including routing lengths, routing targets, and the handling of differential pairs.
Automated sequence management is essential, enabling designers to iterate and refine routing as requirements evolve. Being able to adjust routing priorities dynamically helps meet electrical and physical design constraints.
- Final verification and finishing
The last design phase is verification and finishing. Here, automation systems handle degassing hole patterns, verifying stress and density requirements, and integrating dummy metal fills. Preparing data for GDS or OASIS output is streamlined, ensuring the final package meets manufacturing and reliability standards.
Building successful automated workflows
For FOWLP automation flows to succeed, frameworks must balance technical power with ease of use. Specialists should be able to focus on their discipline without needing deep programming skills. Automated commands should have clear, self-explanatory names and straightforward options.
Effective frameworks promote collaboration among package designers, layout specialists, signal and power integrity analysts, and thermal and mechanical engineers. Sharing a common design environment helps teams work together and apply their skills where they are most valuable.
A crucial role in FOWLP design automation is the replay coordinator. This person orchestrates the entire workflow, managing contributions from all team members along with the sequence and dependencies of automated tasks, ensuring that every design step executes in the proper order.
To be effective, replay coordinators need a high-level understanding of the overall process and strong communication with the team. They are responsible for interpreting analysis results, coordinating adjustments, and driving the group toward optimal design outcomes.
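Conceptually, the replay coordinator’s job resembles dependency-ordered task execution. Here is a hypothetical Python sketch (not any vendor’s actual workflow engine) that replays named design steps in an order respecting their declared prerequisites:

```python
# Hypothetical replay coordinator: run design steps in dependency order
# via a topological sort. Step names and dependencies are invented.
from graphlib import TopologicalSorter  # Python 3.9+

steps = {                       # step -> its prerequisites
    "tech_setup":   [],
    "floorplan":    ["tech_setup"],
    "net_classes":  ["floorplan"],
    "fanout_route": ["net_classes"],
    "verification": ["fanout_route"],
}

actions = {name: (lambda n=name: print(f"replaying {n}")) for name in steps}

for step in TopologicalSorter(steps).static_order():
    actions[step]()             # each step runs after its prerequisites
```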
The tools of the new trade
This successful shift in how we approach microarchitectural design requires new tools and technologies that support the transition from 2D to 3D ICs. Siemens EDA’s Innovator3D IC is a unified cockpit for design planning, prototyping, and predictive analysis of 2.5/3D heterogeneous integrated devices.
Innovator3D IC constructs a digital twin, a unified data model of the complete semiconductor package assembly. By using system technology co-optimization, Innovator3D IC enables designers to meet their power, performance, area, and cost objectives.
Figure 2 Innovator3D IC features a unified cockpit. Source: Siemens EDA
FOWLP marks a fundamental evolution in semiconductor packaging. The future of semiconductor packaging lies in the ability to balance technological sophistication with practical implementation. Success with this technology relies on automation frameworks that make complex designs practical while enabling effective teamwork.
As the industry continues to progress, organizations with robust FOWLP automation strategies will have a competitive advantage in delivering advanced products and driving the next wave of semiconductor innovation.
Todd Burkholder is a Senior Editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.
Chris Cone is an IC packaging product marketing manager at Siemens EDA with a diverse technical background spanning both design engineering and EDA tools. His unique combination of hands-on design experience and deep knowledge of EDA tools provides him with valuable insights into the challenges and opportunities of modern semiconductor packaging, particularly in automated workflows for FOWLP.
Editor’s Notes
This is the third and final part of the article series on 3D IC. The first part provided essential context and practical depth for design engineers working on 3D IC systems. The second part highlighted 3D IC design toolkits and workflows to demonstrate how the integration technology works.
Related Content
- 3D IC Design
- Thermal analysis tool aims to reinvigorate 3D-IC design
- Heterogeneous Integration and the Evolution of IC Packaging
- Tighter Integration Between Process Technologies and Packaging
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
The post Automating FOWLP design: A comprehensive framework for next-generation integration appeared first on EDN.
Solar-driven TEG advances via fabrication, not materials

Solar thermoelectric generators (STEGs) directly convert impinging solar and thermal energy into electricity. In some cases, they can be an alternative to photovoltaic cells, which can make use only of sunlight. STEGs consist of a hot side and a cold side separated by semiconductor materials, and the temperature difference between them generates electricity through the well-known Seebeck effect, Figure 1.
Figure 1 New, high-efficiency STEGs were engineered with three strategies: black metal technology on the hot side, covering the black metal with a piece of plastic to make a mini-greenhouse, and laser-etched heat sinks on the cold side. Source: University of Rochester / J. Adam Fenster
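As a quick refresher (textbook background, not taken from the Rochester paper), a thermoelectric generator’s open-circuit voltage is proportional to the temperature difference across it, and the maximum power it can deliver into a matched load follows directly:

$$V_{oc} = S\,\Delta T, \qquad P_{max} = \frac{V_{oc}^{2}}{4R_{int}} = \frac{(S\,\Delta T)^{2}}{4R_{int}}$$

where S is the device’s effective Seebeck coefficient, ΔT is the hot-to-cold temperature difference, and R_int is its internal electrical resistance. This is why the strategies described below all target ΔT: trapping more heat on the hot side and dissipating more on the cold side.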
However, widespread use of STEGs has been limited by their extremely low efficiency, typically under 1 percent; in contrast, standard consumer-grade solar panels have an energy-conversion rate of roughly 20 percent.
A team at the University of Rochester has focused on this low-efficiency challenge, but not by seeking to develop more advanced or esoteric materials. Instead, they used enhanced spectral engineering and thermal management methods in three ways to create a STEG device that generates 15 times more power than previous devices, Figure 2.
Figure 2 Theoretical design of spectral engineering and thermal management strategies for the STEG hot and cold sides: a) Schematic of enhancing STEG output power through hot- and cold-side thermal management. The hot-side thermal management system consists of a W-SSA and a greenhouse chamber to reduce heat loss. The cold-side thermal management system consists of a μ-dissipator, which enhances the cold-side heat dissipation. b) Four cases of STEG with (I) no thermal management, (II) hot-side thermal management, (III) cold-side thermal management, and (IV) both sides thermal management. c) Simulated STEGs’ peak output power with different thermal management strategies. d) Simulated energy flows in the four STEGs. The blue bars represent the energy flow through the STEG. Source: University of Rochester / J. Adam Fenster
By focusing on the hot and cold sides of the device, and by combining better solar energy absorption and heat trapping at the hot side with better heat dissipation at the cold side, they improved efficiency to about 15%.
First, they applied a specialized black metal technology developed in their lab to the hot side of the device, by modifying ordinary tungsten to selectively absorb light at solar wavelengths. They did this by using intense femtosecond laser pulses to etch nanoscale structures into the metal’s surface, which increased its ability to capture energy from sunlight while limiting heat loss at other wavelengths.
Second, the researchers covered the black metal with a piece of plastic to make a mini greenhouse. This minimized the convection and conduction to trap more heat, increasing the temperature on the hot side.
Third, on the cold side of the STEG, they once again used femtosecond laser pulses, but this time on regular aluminum. This created a heat sink with tiny structures that improved heat dissipation through both radiation and convection, Figure 3. Doing so doubled the cooling performance of a typical aluminum heat dissipator.
Figure 3 A close-up of laser-etched nanostructures on the surface of a solar thermoelectric generator. Source: University of Rochester / J. Adam Fenster
Their tests and analysis separated the three improvement changes they implemented, so they could confirm the impact of each individual enhancement and compare it to their simulations, Figure 4.
Figure 4 Synergistic effect of STEG hot- and cold-side spectral and thermal management: a) Schematics of four cases of STEG with different thermal management strategies. b) STEG weight increases when adding the μ-dissipator, W-SSA, and greenhouse chamber to the TEG. c) STEG power-current curves under 3 suns. d) STEG peak output power under 1–5× solar concentrations. e) STEG power enhancement and TEG average temperatures under 1–5× solar concentrations by applying spectral and thermal management on both sides. f) Photos of LED illumination when powered by the four STEGs in (a). Source: University of Rochester / J. Adam Fenster
It’s obviously not possible to say how successful or practical this STEG approach will be. Nonetheless, it’s interesting to see their focused approach to the weaknesses of STEGs and how they avoided working on the materials-science aspects, but instead concentrated on design improvements. The work is detailed in their paper “15-Fold increase in solar thermoelectric generator performance through femtosecond-laser spectral engineering and thermal management” published in Light: Science & Applications.
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Is There a TEG in Your Power Future?
- TEG’s potential: Is it real, a dream, or in-between?
- Wearables benefit from flexible TEG materials
- TEG energy harvesting: hype or hope?
The post Solar-driven TEG advances via fabrication, not materials appeared first on EDN.
20-year-old Bosch Sensortec eyes AI inside MEMS sensors

Bosch Sensortec, which shipped more than 1 billion MEMS sensors in 2024, is now a 20-year-old outfit with an ambitious goal of making MEMS sensing as essential to consumer electronics as the microprocessor.
Market research firm Yole Group has acknowledged Bosch Sensortec as the global market leader in MEMS sensors for the fourth consecutive year. “Bosch Sensortec has been one of the main driving forces in the MEMS industry,” said Jean-Christophe Eloy, president and founder of Yole Group. “The company has evolved from a startup with a strong technical vision into a global leader in intelligent sensing for consumer electronics.”
The timing of Bosch Sensortec’s inception in 2005 was impeccable; the smartphone revolution would arrive a couple of years later, transforming the MEMS sensor world by putting sensor technology to work at real-world scale. “As smartphones began to change the world, we brought deep technical expertise,” said Stefan Finkbeiner, CEO of Bosch Sensortec.
He recalls the early days when a handful of engineers would all fit in a single meeting room. “I remember us playing early mobile phone games in that room just to understand how a gyroscope might enhance the user experience.” Over the years, miniaturization became the key driving force, enabled by combining MEMS and ASIC layers through advanced packaging technologies such as through-silicon vias and buried bonding.
It reduced the sensor footprint and enabled AI computation directly on the chip. “Twenty years ago, our first MEMS accelerometer was 15 times larger in package volume than today’s ultra-compact sensors,” Finkbeiner said. “Today, you can hardly see these sensors; they’re only slightly bigger than a grain of sand.”
Figure 1 Miniaturization transformed MEMS sensors in the past two decades. Source: Bosch Sensortec
First and foremost, this miniaturization opened the door to new applications in space-constrained environments, spanning from true wireless stereo earbuds and wearables to smart home devices. Moreover, instead of redesigning hardware, design engineers can update software to adapt functionality, speeding up time-to-market and enabling broader use cases.
MEMS sensors in the AI era
The next tectonic shift in the MEMS sensor space is related to artificial intelligence (AI). Bosch Sensortec describes itself as a supplier of intelligent sensing systems that integrate MEMS technology, embedded software and edge AI.
Consumer electronics products—from smartphones and wearables to smart homes and hearables—are connected devices that require more than raw data. They demand context and energy-efficient intelligence. Here, AI-powered sensors that process data directly on the sensor itself ensure privacy, extend battery life, and enable new features like activity recognition, gesture control, and indoor navigation.
Figure 2 The AI-powered sensors transform raw data into actionable signals for smartphones, wearables, hearables, and smart homes. Source: Bosch Sensortec
Bosch Sensortec claims that 90% of its products will feature on-sensor intelligence by 2027. Furthermore, its long-term goal is to deliver over 10 billion intelligent sensors in total by 2030. “From silicon to system, we’re building sensor solutions that shape tomorrow’s connected, sustainable technologies,” Finkbeiner said.
He concludes by saying that while the company’s startup phase may be over, the spirit of experimentation remains. That’s a vital premise for AI‑driven sensor systems in a connected world.
Related Content
- MEMS sensors as drivers for change
- MEMS ready to lead component revolution
- The MEMS Industry Strives for the Next Big Thing
- Advances make MEMS sensors easier to integrate
- MEMS sensors add machine-learning core, electrostatic sensing
The post 20-year-old Bosch Sensortec eyes AI inside MEMS sensors appeared first on EDN.
IGBT7 modules cut power losses up to 20%

Microchip’s DualPack 3 (DP3) IGBT7 power modules come in six variants at 1200 V and 1700 V with current ratings from 300 A to 900 A. They reduce power losses by 15% to 20% compared to IGBT4 devices and operate reliably up to 175°C under overload conditions.
Available in a phase-leg configuration, the 152×62×20-mm DP3 modules let designers jump up a frame size to increase power output. This packaging eliminates the need to parallel multiple modules, reducing system complexity and BOM cost. DP3 modules also serve as a second-source alternative to industry-standard EconoDUAL packages, improving design flexibility and supply-chain security.
The IGBT power modules support motor drive, data center, and renewable energy systems with a compact design that simplifies power converters. They are well-suited for general-purpose motor drives and address challenges such as dv/dt, drive complexity, conduction losses, and limited overload capability.
DualPack 3 IGBT7 power modules are now available in production quantities. Additional information and datasheets can be found here.
The post IGBT7 modules cut power losses up to 20% appeared first on EDN.