EDN Network
ESD diodes raise surge ratings to 540 W

Vishay’s single-line VGSOTxx and two-line VGSOTxxC ESD protection diodes offer reverse working voltages from 3.3 V to 36 V. Compared to earlier GSOTxx/xxC devices, the automotive-grade diodes provide improved heat dissipation, supporting higher peak pulse power ratings up to 540 W and current ratings up to 44 A for an 8/20-µs pulse.
Both series come in SOT-23 packages and serve as drop-in replacements for GSOT devices, providing unidirectional ESD protection. The VGSOTxxC series employs a dual common-anode configuration that also enables bidirectional protection. Its dual diodes can be paralleled to double surge power ratings, line capacitance, and reverse leakage current.
The RoHS-compliant devices deliver ESD immunity per IEC 61000-4-2 and ISO 10605 at ±30 kV air and contact discharge, and meet AEC-Q101 HBM class H3B at >8 kV. Their high power and current handling make them well-suited for automotive, industrial, consumer, communications, medical, and military applications.
Samples and production quantities of the ESD protection diodes are available now, with lead times of 12 weeks.
The post ESD diodes raise surge ratings to 540 W appeared first on EDN.
What initiates lightning? There’s a new and likely better answer.

Engineers across many disciplines are aware of and concerned with lightning—and for good reasons. A lightning strike can cause significant structural damage, house and forest fires, and severe electrical surges (Figure 1).
Figure 1 The intensity of a lightning strike is always awe-inspiring and represents a millisecond-level transient of hundreds of kiloamps. Source: Science Daily
Even if the strike is not directly on the equipment (in which case the unit is probably “fried”), the associated transients induced in nearby wires and paths can be damaging. Lightning can also be mystifying: some people who have been hit have no ill effects; others have some temporary or long-lasting physical and mental impairments; and for some….well, you know how it ends.
Measuring lightning
For these reasons, protection against the effects of lightning, to the extent possible, is an important factor in many designs. These efforts can include the use of lightning rods, which provide low-impedance paths to Earth ground (which functions as a near-infinite source and sink for electrons), gas-discharge tubes (GDTs), and metal oxide varistors (MOVs), among other devices. Implementing protection is especially challenging when there are multiple strikes, as they can erode the capabilities of the protective devices.
This natural phenomenon occurs most frequently during thunderstorms, but has also been observed during volcanic eruptions, extremely intense forest fires, and surface nuclear detonations. There are many available numbers for the voltages, currents, timing, and temperature ranges associated with lightning. While there is obviously no single lightning waveform, Figure 2 shows representative data; note the maximum current of several hundred kiloamps.
Figure 2 These are representative values for lightning-stroke current versus time and current magnitudes; these are not the only ones, of course. Source: Kingsmill Industries Ltd
Researchers have studied lightning for many decades, using a variety of techniques ranging from “man-made” lightning in controlled enclosures, to field measurements in lightning-prone areas, to instigating it with a grounded wire launched into a lightning-prone cloud. There’s also the futile quest to direct and capture lightning’s energy in some sort of practical store-and-use scheme. (For fictional demonstrations, see the 1931 classic Frankenstein, where lightning is used to energize the doctor’s monster-like creation, or the end of the 1985 classic Back to the Future, where lightning is captured by a rod on the clock tower and used to recharge the flux capacitor of the DeLorean time-travel vehicle.)
The standard explanations for lightning and its initiation are like this one from Wikipedia: “Lightning is a powerful natural electrical discharge caused by a buildup of static electricity within storm clouds. This buildup occurs when ice crystals and water droplets collide in the turbulent environment of a cumulonimbus cloud, separating charges within the cloud. When the electrical potential becomes too great, it discharges, creating a bright flash of light and a loud sound known as thunder.”
But what really happens inside the cloud?
Well, maybe that’s only a partial answer, or perhaps it’s misleading. Why so? For decades, scientists have understood the mechanics of a lightning strike, but exactly what sets it off inside thunderclouds remained a lingering mystery. Apparently, it’s much more than a static-electricity potential finally reaching a “flashover” level.
That mystery may now be solved, as a team at Pennsylvania State University (Penn State) has produced what they say is the complete story. It’s far more complicated than just a huge static-electricity burst; it’s really a mixture of cosmic rays, X-rays, and high-energy electrons.
Their work involves some deep physics and complex analysis. It also introduced me to some new acronyms: initial breakdown pulses (IBPs), narrow bipolar events (NBEs), energetic in-cloud pulses (EIPs), terrestrial gamma-ray flashes (TGFs), flickering gamma-ray flashes (FGFs), and initial electric field change (IEC).
They have combined historical lightning-related data (and there is a lot of that available from multiple sources) with current measurements, presented a hypothesis, correlated the data, developed models, ran simulations, and put it all together. The result is a plausible explanation that seems to fit the facts, although with natural events such as lightning, you can never be completely sure.
The Penn State research team, led by professor of electrical engineering Victor Pasko, explained how intense electric fields within thunderclouds accelerate electrons. These fast-moving electrons collide with molecules such as nitrogen and oxygen, generating X-rays and sparking a rapid surge of new electrons and high-energy photons. This chain reaction then creates the necessary conditions for a lightning bolt to form, showing the link between X-rays, electric fields, and the physics of electron avalanches.
These electrons radiate energetic photons (X-rays) as they are scattered by the nuclei of nitrogen and oxygen atoms in air. These X-rays radiate in all directions, and some fraction is radiated in the direction opposite to the electron motion. These particular X-rays seed new relativistic electrons via the photoelectric effect, strongly amplifying the original avalanche.
To validate their explanation, the team used mathematical modeling to simulate atmospheric events that match what scientists have observed in the field. These observations involve photoelectric processes in Earth’s atmosphere, where high-energy electrons—triggered by cosmic rays from space—multiply within the electric fields of thunderstorms and release short bursts of high-energy photons. This process, known as a terrestrial gamma-ray flash, consists of invisible but naturally occurring bursts of X-rays and associated very high frequency (VHF) radiation pulses (Figure 3).
Figure 3 A conceptual representation of the conditions required for the transition from fast positive breakdown (FPB) to fast negative breakdown (FNB), based on the relationship between the relativistic feedback threshold E0/δ and the minimum negative streamer propagation field E−cr/δ. Source: Pennsylvania State University
They demonstrated how electrons, accelerated by strong electric fields in thunderclouds, produce X-rays as they collide with air molecules like nitrogen and oxygen, and create an avalanche of electrons that produce high-energy photons that initiate lightning. They used the model to match field observations—collected by other research groups using ground-based sensors, satellites, and high-altitude spy planes—to the conditions in the simulated thunderclouds.
I’ll admit: it’s pretty intense stuff, as demonstrated by a read-through of their paper “Photoelectric Effect in Air Explains Lightning Initiation and Terrestrial Gamma Ray Flashes” published in the Journal of Geophysical Research. (I do have one minor objection: I wish they had not used the term “photoelectric effect” in the title or body of the paper. Although that phrase is technically correct as they use it, I associate it with Einstein’s groundbreaking 1905 paper, which resolved the contradictions in the experimental data for this phenomenon by proposing photons as energy quanta, and for which he received the Nobel Prize.)
While the root causes of lightning, as delineated in the work of the Penn State team, are not directly relevant to engineers whose designs must tolerate nearby lightning strikes, it’s still interesting to see what is going on and how even our modern science may still not have all the answers to such a common occurrence. In other words, there’s still a lot to learn about basic natural events.
Have you ever been involved with a design that had to be lightning-tolerant? What standards did you try to follow? What techniques and components did you use? How did you test it to verify the performance?
Related content
- Lightning as an energy harvesting source?
- When Lightning Strikes, Will a Surge Protector Help?
- Pulse power and transient loads: a very different world
References
- Kingsmill Industries Ltd, Characteristics of Lightning Discharges
- Wikipedia, Lightning
The empty promise of the LED bulb’s lifetime

We are told that LED-based lighting will provide a very long service life per bulb, but here comes “Sportin’ Life” again (from Porgy and Bess) to put the lie to that claim. (It ain’t necessarily so.)
These four LED lamps each went dark after only a few months of service despite their packages’ promise (Figure 1).
Figure 1 Four LED bulbs that failed after a few months despite claimed service lives of over 20 years.
Similarly, one of the five LED lamps in this ceiling fixture also went dark after only a few months of service (Figure 2).
Figure 2 One of the five LED bulbs in this ceiling lamp was rendered nonfunctional after only a few months.
In my eighty years in this world, I have only twice seen a new incandescent lamp fail so soon after being put into service. One lamp had a service life of thirty minutes and the other one died almost instantly.
I tried to cut open one of the four failed conical LED lamps to see what specifically had gone wrong, but I couldn’t manage to penetrate the shroud. Those plastic bulb enclosures were made of really tough stuff. Failing in that effort, I simply threw the four of them out.
Nevertheless, four for four strikes me as a pretty shabby history. I replaced each of the four with products from a different manufacturer, and so far, since pre-pandemic times, those LED bulbs are still working.
It can be done.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Teardown: What killed this LED bulb?
- What’s the storage life of idled LED light bulbs?
- Incandescent lamps and service life
- Rich voltage, poor voltage: My incandescent tale
- The burned-out bulb mystery
- The LED: incandescent light bulb heir apparent
A different twist to the power pushbutton problem: A kilowatt AC DAC

Design Idea (DI) contributors have recently explored various possibilities for ON/OFF power control using just a momentary contact “shiny modern push-button,” many of which build off of Nick Cornford’s “To press on or hold off? This does both.”
These ideas are interesting, and they’ve suggested a different notion. Figure 1 takes the one-button power control concept a bit further. It uses its button to provide six bits of resolution to a kilowatt of variable AC power, addressing adjustable applications like heating blankets, lamp dimming, motor speed, etc. I like it because, well, shouldn’t there be a bit (or even six) more to life than just ON/OFF?
Figure 1 Variable AC power control with a simple pushbutton. When S1 is pushed, counter U1 ramps through the 64 DAC codes in a 2^10/120 Hz ≈ 8.5-second cycle and stops on any selected power setting when S1 is released.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Power control method
The power control method employed in Figure 1 is variable AC phase-angle conduction via thyristor Q3. It’s wired in the traditional way, except that the 6-bit DAC driven by CMOS counter U1 fills in for the usual phase-adjustment pot. Because, unlike Q3, the DAC circuitry isn’t bidirectional, the D1-D4 rectifier is needed to feed it DC and keep it working and counting through the 60-Hz alternations.
Full-power Q3 efficiency is around 99%, but its maximum junction temperature rating is only 110 °C. Adequate heatsinking of Q3 will therefore be necessary if output loads greater than 200 W are expected.
Adjusting U1 to the desired power setting is accomplished by pushing and holding switch S1. This connects the 120-Hz full-wave rectifier signal from the D1-D4 bridge to the Schmitt trigger formed by R2, R3, and U1’s internal non-inverting q0 input buffer.
The subsequent division of the 120-Hz signal by U1’s ripple divider chain makes flip-flop q5 toggle at 120/2^5 = 3.75 Hz, q6 at 120/2^6 = 1.875 Hz, and so forth down to q10 at 120/2^10 ≈ 0.117 Hz. This gives a ramp time of about 8.5 seconds for the full 0 (= full OFF) to 63 (= full ON) code cycle. Meanwhile, digital integration of the raw signal from switch S1 by U1’s counters suppresses switch contact bounce.
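The division arithmetic above is easy to verify with a few lines of code; this is a quick sketch of the ripple-counter stage rates and the resulting full-scale ramp time, not part of the original Design Idea.

```python
# Toggle rates of U1's ripple-counter stages clocked at 120 Hz, and the
# resulting full-scale ramp time for the 64-code DAC cycle.
F_CLOCK = 120.0  # Hz, full-wave-rectified 60-Hz line

def stage_freq(n):
    """Toggle frequency of ripple-counter flip-flop q_n (q_n divides by 2^n)."""
    return F_CLOCK / 2 ** n

# q5..q10 span the 64 DAC codes, so a full ramp takes 2^10 clock periods.
ramp_time = 2 ** 10 / F_CLOCK  # seconds

if __name__ == "__main__":
    for n in (5, 6, 10):
        print(f"q{n}: {stage_freq(n):.3f} Hz")
    print(f"ramp time: {ramp_time:.2f} s")
```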
When the desired power setting (lamp brightness, motor speed, etc.) is reached, release the button, i.e., just let go! However, due to the fairly rapid toggle rate of the lower counter stages, a bit of practice may be required to accurately hit a target setting on the first try.
DAC topology
The DAC topology is straightforward. Just six binary-weighted resistors (R4 through R9) make up a summing network that produces a 0-V to 15-V input to the Q1-Q2 complementary current-mode output buffer.
Q1 provides nominal compensation for Q2’s Vbe offset and tempco, as well as sufficient current gain to allow use of multi-megohm resistances in the summation network. This is important because operating power for the DAC is basically stolen from Q3’s phase control signal.
This (as you probably noticed), nicely avoids the need for a separate power supply, but it provides only microamps of current for U1 and friends. So, a power-thrifty topology was definitely needed.
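The binary-weighted summing network can be sketched numerically. The model below is a hypothetical illustration, not the Design Idea’s actual component values: it assumes an ideal buffer and a multi-megohm LSB resistor, with bit n getting conductance 2^n relative to the LSB, so the unloaded node voltage is simply V_BIT × code / 63.

```python
# Hypothetical model of a 6-bit binary-weighted summing DAC like R4-R9.
# V_BIT and R_LSB are illustrative assumptions, not values from the article.

V_BIT = 15.0      # volts at a logic-high counter output (assumed)
R_LSB = 8.0e6     # ohms, LSB resistor (assumed multi-megohm, per the text)

def dac_out(code):
    """Unloaded summing-node voltage for a 6-bit code (0..63)."""
    g = [(2 ** n) / R_LSB for n in range(6)]      # bit conductances, LSB first
    bits = [(code >> n) & 1 for n in range(6)]    # bit values, LSB first
    return V_BIT * sum(b * gi for b, gi in zip(bits, g)) / sum(g)

if __name__ == "__main__":
    print(dac_out(0), dac_out(63), round(dac_out(32), 3))
```

Full scale (code 63) lands exactly on 15 V, and each code step contributes 15/63 ≈ 0.24 V, which is the point of the binary weighting.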
DAC reference Z1 is remarkably content with its meager share of this starvation diet. It maintains a usefully constant regulation despite only a single-digit microamp bias, which is impressive for an 11-cent (in singles) part. Meanwhile, U1 daintily sips only tens of nanoamps.
R11 and C3 provide an initial reset to OFF when power is first applied.
At this point, you might reasonably ask: Is this scheme any better than a simple pot with a twistable knob? Well, don’t forget the “shiny modern push-button” factor.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- To press on or hold off? This does both.
- To press ON or hold OFF? This does both for AC voltages
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
Hands-on with hobby-grade arc generator modules

Arc generator modules may be small in scale, but they offer big opportunities for hands-on exploration in electronics. Whether you are experimenting with arc simulation, testing circuit behavior under fault conditions, or simply curious about high-voltage phenomena, these minuscule modules provide a safe and accessible way to dive into the fundamentals.
This blog will present hands-on tips and tricks for working with hobby-grade arc generator modules and circuits—ideal for curious minds and budding engineers eager to explore high-voltage experimentation.
There are several methods for generating electric arcs. However, this post will focus on how to achieve extra-high voltage levels using simple electronic circuits. The spotlight is on a widely available, budget-friendly arc generator module kit designed for DIY enthusiasts. It’s an accessible way to dive into high-voltage experimentation without breaking the bank.
Take a look at the kit below, along with its key technical specs to help you understand what it offers.
- Input voltage: 3.7 V to 4.2 V DC
- Input current: < 2 A
- Output voltage: ~15 kV
- Output current: ≤ 0.4 A
- Ignition distance (high voltage bipolar): ≤ 0.5 cm
Figure 1 This compact arc generator kit delivers around 15-kV output using only a handful of components. Source: Author
This is arguably one of the most elementary and accessible kits for electronics enthusiasts looking to explore high-voltage applications. The module requires minimal setup skills, with no circuit-level adjustments needed. While the power output is not exceptionally high, even a minor mishap can result in serious electrical burns. That said, with proper safety precautions in place, the system can produce stunningly high-frequency arcs.
Now, let’s take a look at the schematic diagram to understand how the circuit works.
Figure 2 The schematic diagram demonstrates how the kit produces high voltage through a minimal circuit design. Source: Author
Examining its internal electronics reveals a single-transistor oscillator at the heart of the circuit. This simple yet effective configuration allows high-voltage generation from standard battery cells.
Functionally, it acts as a step-up (booster) transformer system, where a feedback loop controls the switching of a power transistor. The secret to high-voltage output lies in the transformer’s winding setup. It uses two primary coils—main and feedback—alongside a secondary winding that can produce voltages soaring into the kilovolt range.
The diode’s most critical function in this oscillator circuit is to block the reverse voltage pulse generated by the transformer’s collapsing magnetic field. This action is essential for two reasons: it prevents damage to the transistor, and it ensures a clean transition to the “off” state.
Next is another compact high-voltage boost module (sometimes labelled as XKT203-33) capable of generating up to 30 kV. Specifically engineered for pest control applications, it finds use in devices aimed at eliminating mosquitoes, cockroaches, and other small insects. Despite its impressive output, the module operates efficiently with minimal power input, making it ideal for battery-powered or low-power systems.
The image below presents the aforesaid module alongside its internal schematic for reference. A closer look at the available schematic highlights the use of proprietary components, with a Delon voltage doubler circuit strategically placed at the output stage to deliver the required 30 kV.
Figure 3 The 30-kV module achieves high-voltage generation through an elegantly minimal design. Source: Author
Interestingly, a closer look at two seemingly popular kV generator modules shows that even humble jellybean components can handle the task. Still, integrating custom parts might elevate performance and efficiency.
But before jumping to conclusions, consider this alternative design idea for building your own kV generator module, an approach many have explored with intriguing results. Let’s take a quick look.
Figure 4 The blueprint shows how to generate high-voltage output using an automotive ignition coil. Source: Author
This approach simply utilizes a universal automotive ignition coil to produce high-voltage output, as depicted in the self-explanatory diagram above.
At its core, an ignition coil consists of three main components: a primary winding, a secondary winding, and a laminated iron core. The secondary winding contains significantly more turns of wire than the primary, creating a turns ratio that directly influences the voltage increase. A fairly typical range for the ignition-coil turns ratio is 50:1 to 200:1, with 100:1 probably being the most common.
Just to add, in an inductive ignition system, the primary winding is typically energized with 12 V or 24 V. When this current is suddenly interrupted, a high-voltage EMF is induced in the secondary winding—often reaching 20 kV to 40 kV—more than enough to jump across a spark gap.
To break it down further, a single switching action by a transistor (BJT/IGBT/MOSFET) initiates the ignition process by allowing current to flow through the ignition coil’s primary winding. The current charges the primary coil, storing energy in its magnetic field. When the transistor turns off and interrupts the current, the magnetic field begins to collapse.
In response, the coil resists the sudden change, causing a rapid rise in voltage across the secondary winding, enough to ionize the air in the gap and generate the high-voltage spark needed for ignition.
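A rough back-of-the-envelope estimate ties the numbers above together: to first order, the secondary peak is the primary flyback voltage multiplied by the turns ratio. The values below are illustrative assumptions (a few-hundred-volt primary flyback is typical when the switch opens), not measurements.

```python
# Ideal-transformer estimate of ignition-coil secondary peak voltage.
# Input values are illustrative assumptions, not measured data.

def secondary_peak_kv(primary_flyback_v, turns_ratio):
    """Secondary peak voltage in kV, assuming an ideal transformer."""
    return primary_flyback_v * turns_ratio / 1000.0

if __name__ == "__main__":
    # A ~300-V primary flyback with a 100:1 coil suggests roughly 30 kV,
    # squarely within the 20-40 kV range quoted above.
    print(secondary_peak_kv(300.0, 100))
```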
Back to the subject matter, when driving the ignition coil through either an IGBT or a MOSFET, try experimenting with appropriate square wave pulses. Start with low frequencies around 150 to 350 Hz and duty cycles between 25% and 45% (just to get a feel for the response).
Heads up! Touching the high voltage from the ignition coil will definitely sting. It won’t kill you, but it will make you regret it.
That wraps up this post. I have plenty more practical tips and insights lined up, so expect fresh content soon. This is just one piece of a much larger puzzle.
Finally, please note that this article is intended purely for informational and educational purposes. It does not promote or endorse, and is not commercially affiliated with, any product, brand, or service mentioned. No sponsorships, no hidden agendas—just straight-up knowledge for curious minds.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Hyperchip taps ARC cores for peta router offerings
- High-speed pulse generator has programmable levels
- Setting safety standard for arc detection in solar industry
Increasing bit resolution with oversampling

Many electronic designs contain an ADC, or more than one, to read various signals and voltages. Often, these ADCs are included as part of the microcontroller (MCU) being used. This means, once you pick your MCU, you have chosen the maximum resolution (calculated from the number of bits in the ADC and the reference) you will have for taking a reading.
Wow the engineering world with your unique design: Design Ideas Submission Guide
What happens if, later in the design, you find out you need slightly more resolution from the ADC? Not to worry, there are some simple ways to improve the resolution of the sampled data. I discussed one method in a previous EDN Design Idea (DI), “Adaptive resolution for ADCs,” which talked about changing the reference voltage, so I won’t discuss that here. Another way of improving the resolution is through the concept of oversampling.
Let’s first look at a method that is essentially a simplified version of oversampling…averaging. (Most embedded programmers have used averaging to improve their readings, sometimes with the thought of minimizing the effects of bad readings and not thinking about improving resolution.)
So, suppose you’re taking a temperature reading from a sensor once a second. Now, to get a better resolution of the temperature, take the reading every 500 ms and average the two readings together. This will give you another ½-bit of resolution (we’ll show the math later). Let’s go further—take readings every 250 ms and average four readings. This will give you a whole extra bit of resolution.
If you have an 8-bit ADC and it is scaled to read 0 to 255 degrees with 1-degree resolution, you will now have a virtual 9-bit ADC capable of returning readings of 0 to 255.5 degrees with 0.5-degree resolution. If you average 16 readings, you will create a virtual 10-bit ADC from your 8-bit ADC. The 64-averaged reading will create an 11-bit virtual ADC by improving your 8-bit ADC with three extra bits, thereby giving you a resolution of one part in 2048 (or, in the temperature sensor example, a resolution of about 0.12 degrees).
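The virtual-bit arithmetic in the paragraphs above is easy to check numerically; this short sketch (mine, not the author’s) reproduces the ½-bit, 1-bit, 2-bit, and 3-bit gains, and the ~0.12-degree resolution for the 8-bit, 0-to-255-degree sensor example.

```python
# Extra virtual bits gained by averaging M samples: b = log2(M)/2.
import math

def extra_bits(m):
    """Virtual bits gained by averaging m samples (m ideally a power of 4)."""
    return math.log2(m) / 2

def virtual_resolution_deg(adc_bits, full_scale_deg, m):
    """Temperature resolution after averaging m samples of an adc_bits ADC."""
    return full_scale_deg / 2 ** (adc_bits + extra_bits(m))

if __name__ == "__main__":
    for m in (2, 4, 16, 64):
        print(f"{m} samples -> +{extra_bits(m)} bits")
    # 8-bit ADC, 0-255 degree scale, 64 samples averaged: ~0.125 degrees.
    print(virtual_resolution_deg(8, 256, 64))
```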
A formula for averaging
The formula relating the number of extra bits to the number of samples averaged is:
Number of samples averaged = M
Number of virtual bits created = b
M = 4^b
If you want to solve for b given M: b = log4(M)
Or, b = (1/log2(4)) * log2(M) = log2(M)/2
You may be scratching your head, wondering where that formula comes from. First, let’s think about the readings we are averaging. They consist of two parts. The first is the true, clean reading the sensor is trying to give us. The second part is the noise that we pick up from extraneous signals on the wiring, power supplies, components, etc. (These two signal parts combine in an additive way.)
We will assume that this noise is Gaussian (statistically normally distributed; often shown as a bell curve; sometimes referred to as white noise) and uncorrelated with our sample rate. Now, when taking the average, we first sum up the readings. The clean readings from the sensor will obviously sum up in a typical mathematical way. For the noise part, though, the standard deviation of the sum is the square root of the sum of the variances. In other words, the clean part increases linearly, and the noise part increases as the square root of the number of readings.
What this means is that not only is the resolution increased, but the signal-to-noise ratio (SNR) would improve by M/sqrt(M), which mathematically reduces to sqrt(M). In simpler terms, the averaged reading SNR improves by the square root of the number of samples averaged. So, if we take four readings, the average SNR improves by 2, or the equivalent of one more bit in the ADC (an 8-bit ADC performs as a 9-bit ADC).
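The sqrt(M) claim can be demonstrated with a quick Monte Carlo run; this is a standalone illustration (my own, pure standard library), not code from the article. Averaging M = 16 samples of a constant signal plus Gaussian noise should shrink the noise standard deviation by a factor of 4.

```python
# Monte-Carlo check: averaging M Gaussian-noise samples reduces the
# noise standard deviation by sqrt(M).
import random
import statistics

random.seed(0)  # reproducible run
SIGNAL, NOISE_SD, M, TRIALS = 100.0, 4.0, 16, 2000

# Each trial produces one M-sample average of signal + noise.
averaged = [
    statistics.fmean(random.gauss(SIGNAL, NOISE_SD) for _ in range(M))
    for _ in range(TRIALS)
]
sd = statistics.stdev(averaged)
print(f"averaged-noise sigma ~= {sd:.2f} (expect ~{NOISE_SD / M ** 0.5})")
```

With NOISE_SD = 4 and M = 16, the measured sigma of the averaged readings comes out near 1.0, i.e., a 4x (sqrt(16)) SNR improvement, or two extra bits.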
I have used averaging in many pieces of firmware, but it’s not always the best solution. As was said before, your sensor connection passes your ADC a good signal with some noise added to it. One issue with simple averaging is its slow roll-off in the frequency domain. Also, its stopband attenuation is not very good. Both of these issues mean that averaging allows a good portion of the noise to enter your signal. So, we may have increased the resolution of the reading, but we have not removed all the noise from the signal that we can.
Reducing the noise
To reduce this noise, which is spread over the full frequency spectrum coming down the sensor wire, you may want to apply an actual lowpass filter (LPF) to the signal. This can be a hardware LPF applied before the ADC, a digital LPF applied after the ADC, or both. (Oversampling makes the design of these filters easier, as the roll-off can be less steep.)
There are many types of digital filters but the two major ones are the finite impulse response (FIR) and the infinite impulse response (IIR). I won’t go into the details of these filters here, but just say that these can be designed using tradeoffs of bandpass frequency, roll-off rate, ripple, phase shift, etc.
A more advanced approach to oversampling
So, let’s look at a design to create a more advanced oversampling system. Figure 1 shows a typical layout for a more formal, and better, oversampling design.
Figure 1 A typical oversampling block diagram with an antialiasing filter, ADC, digital LPF, and decimation (down-sampling).
We start by filtering the incoming signal with an analog hardware LPF (often referred to as an antialiasing filter). This filter is typically designed to filter the incoming desired signal at just above the frequency of interest.
The ADC then samples the signal at a rate many times (M) the frequency of interest’s Nyquist rate. Then, in the system’s firmware, the incoming sample stream is again low-pass filtered with a digital filter (typically an FIR or IIR) to further remove the signal’s Gaussian noise as well as the quantization noise created during the ADC operation. (Various filter designs can also be useful for other kinds of noise, such as impulse noise, burst noise, etc.) Oversampling gave us the benefit of spreading the noise over the wide oversample bandwidth, and our digital lowpass filter can remove much of this.
Next, we decimate the signal’s data stream. Decimation (also known as down-sampling) is simply the act of now only using every 2nd, or 3rd, or 4th, up to every Mth sample, and tossing the rest. This is safe due to oversampling and the lowpass filters, so we won’t alias much noise into the lower sample rate signal. Decimation essentially reduces the bandwidth as represented by the remaining samples. Further processing now requires less processing power as the number of samples is significantly reduced.
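The filter-then-decimate chain of Figure 1 can be sketched in a few lines. This is a deliberately crude illustration using a boxcar moving average as the digital LPF (a real design would use a properly designed FIR or IIR, as the text notes); the rates and signal are assumptions of mine, not the article’s.

```python
# Minimal sketch of the oversample -> digital LPF -> decimate chain.
# The boxcar filter and the M = 8 factor are illustrative assumptions.

def moving_average(x, n):
    """Length-n boxcar FIR; one output per full window."""
    return [sum(x[i - n + 1 : i + 1]) / n for i in range(n - 1, len(x))]

def decimate(x, m):
    """Keep every m-th sample (safe only after lowpass filtering)."""
    return x[::m]

M = 8  # oversampling/decimation factor
# A slow square wave, heavily oversampled relative to its own rate.
raw = [1.0 if (i // 64) % 2 else 0.0 for i in range(512)]
filtered = moving_average(raw, M)
out = decimate(filtered, M)

if __name__ == "__main__":
    print(len(raw), len(filtered), len(out))  # sample counts at each stage
```

Note the order: the lowpass filter runs at the full oversampled rate, and only then is the stream thinned out, so the discarded samples carry no information the kept ones lack.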
It works
This stuff really works. I once worked on a design that required us to receive very small signals (transmitted at < 1 W) on a power line. The signal was attenuated by capacitors on the lines, various transformers, and all the customer’s devices plugged into the powerline. The signal to be received was around 10 µV riding on the 240-VAC line. We ended up oversampling by around 75 million times the Nyquist rate and were able to successfully receive the transmissions at over 100 miles from the transmitter.
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- Adaptive resolution for ADCs
- Understanding noise, ENOB, and effective resolution in ADCs
- How do ADCs work?
- Understand key ADC specs
Broke MoCA II: This time, the wall wart got zapped, too

Back in 2016, I did a teardown of Actiontec’s ECB2200 MoCA adapter, which had fried in response to an EMP generated by a close-proximity cloud-to-cloud lightning spark (or was it an arc? Or are they the same thing?). As regular readers may recall, this was the second time in as many years that electronics equipment had either required repair or ended up in the landfill for such a reason (although the first time, the lightning bolt had actually hit the ground). And as those same regular readers may already be remembering, last August it happened again.
I’ve already shared images and commentary with you of the hot tub circuitry that subsequently required replacement, as well as the three-drive NAS, the two eight-port GbE switches, and the five-port one (but not two, as originally feared) GbE switch. And next month, I plan to show the insides of the three-for-three CableCard receiver that also met its demise in this latest lightning-related instance. But this time, I’ll dissect Actiontec’s MoCA adapter successor, the ECB2500C:
I’d already mentioned the ECB2500C a decade back, actually:
The ECB2500C is the successor to the ECB2200; both generations are based on MoCA 1.1-supportive silicon, but the ECB2500C moves all external connections to one side of the device and potentially makes other (undocumented) changes.
And as was the case back in 2016, the adapter in the master guest bedroom was the MoCA network chain link that failed again this time. Part of the reason MoCA devices keep dying, I think, is their inherent nature. Since they convert between Ethernet and coax, there are two potential “Achilles heels” for incoming electromagnetic spikes. Plus, the fact that coax routes from room to room via cable runs attached to the exterior of the residence doesn’t help. And then there’s the fact that the guest bedroom’s location is in closest proximity (on that level, at least) to the Continental Divide, whence many (but not all) storms originate.
This time, however, the failure was more systemic than before. The first thing I did was to test the wall wart’s DC output using my multimeter:
Dead! Hey…maybe the adapter itself is still functional? I grabbed the spare ECB2500C’s wall wart, confirmed it was functional, plugged it into this adapter and…nope, nothing lit up on the front panel, so the adapter’s dead, too. Oh well, you’ll get a two-for-one teardown today, then!
Let’s start with the wall wart, then, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
Specs n’ such:
Time to break out the implements of destruction again (vise not shown this time):
Progress…
Success!
No “potting” in this case; the PCB pulls right out:
The more interesting side of the PCB, both in penny-inclusive and closer-up perspectives:
The same goes for the more boring (unless you’re into thick traces, that is) side:
And now for some side views:
I didn’t see anything obviously scorched, bulged, or otherwise mangled; did you? Let me know in the comments if I missed something! Now on to the adapter, measuring 1.3 x 3.8 x 5.5 in. (33 x 97 x 140 mm). I double-checked those dimensions with my tape measure and initially did a double-take until I realized that the published width included the two coax connectors poking out the back. Subtract the connectors’ protrusion for the actual case width:
You may have already noticed the four screw heads, one in each corner, in the earlier underside shot. You know what comes next, right?
That was easy!
The PCB then (easily, again) lifts right out of the remaining top half of the case:
Light pipes for the LEDs, which we’ll presumably see once we flip over the PCB:
Let’s stick with this bottom side for now, though:
The lone component of note is a Realtek RTL8201EL Fast Ethernet PHY. The mess of passives below it is presumably for the system processor at that location on the other side of the PCB:
Let’s see if I’m right:
Yep, it’s Entropic’s EN2510 single-chip MoCA controller, at lower left in the following photo. To its left are the aforementioned LEDs. At upper left is an Atmel (now Microchip Technology) ATMEGA188PA 8-bit AVR microcontroller. And at upper right, conveniently located right next to its companion Ethernet connector, is a Magnetic Communications (MAGCOM) HS9001 LAN transformer:
Switching attention to the other half of the PCB upper half, I bet you’re dying to see what’s underneath those “can” and “cage” lids, aren’t you? Me, too:
Your wish is my command!
As with the wall wart, and unlike last time when a scorched soldered PCB pad pointed us to the likely failure point, I didn’t notice anything obviously amiss with the adapter, either. It makes me wonder, in fact, whether either the coax or Ethernet connector was the failure-mechanism entry point this time, and whether the failure happened in conjunction with last August’s lightning “event” or before. The only times I would ever check the MoCA adapter in the master guest bedroom were when…umm…we were prepping for overnight guests to use that bedroom.
Granted, an extinguished “link active” light at the mated MoCA adapter on the other end, in the furnace room, would also be an indirect tipoff, but I can’t say with certainty that I regularly glanced at that, either. Given that the wall wart is also dead, I wonder if its unknown-cause demise also “zapped” the power regulation portion of the adapter’s circuitry, located at the center of its PCB’s upper side, for example. Or maybe the failure sequence started at the adapter and then traveled back to the wall wart over the conjoined power tether? Let me know your theories, as well as your broader thoughts on what I’ve covered today, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Teardown: MoCA adapter succumbs to lightning strike
- Devices fall victim to lightning strike, again
- Lightning strike becomes EMP weapon
- Lightning strikes…thrice???!!!
- A teardown tale of two not-so-different switches
- Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch
- LAN security for MoCA and powerline
The post Broke MoCA II: This time, the wall wart got zapped, too appeared first on EDN.
Silicon 100: Chiplet work spanning interconnect PHYs to smart substrates

While the Silicon 100 report was being compiled and curated to profile the most promising startups in the semiconductor industry in 2025, two prominent chiplet upstarts were already snapped up. First, Qualcomm announced its acquisition of chiplet interconnect developer Alphawave Semi in the second week of June 2025.
Nearly a month later, Tenstorrent snapped up Blue Cheetah Analog Design, another supplier of die-to-die interconnect IP. These two deals highlight the red-hot nature of the chiplets world and how this new multi-die technology landscape is emerging despite geopolitical headwinds.
In this year’s Silicon 100 report, there are eight startup companies associated with chiplet design and manufacturing work. In the chiplet design realm, DreamBig Semiconductor develops chiplet platforms and high-performance accelerator solutions for 5G, artificial intelligence (AI), automotive, and data center markets. Its core technology includes a chiplet hub with high-bandwidth memory (HBM).
Founded by Sohail Syed in 2019, the San Jose, California-based chiplet designer is using Samsung Foundry’s SF4X 4-nm process technology and is backed by the Samsung Catalyst Fund and the Sutardja family investment.
Eliyan, another well-known chiplet firm, offers PHY interconnect that enables high-bandwidth, low-latency, and power-efficient communication between chiplets on both silicon and organic substrates. The company, co-founded in 2021 by serial entrepreneur Ramin Farjadrad, completed the tapeout of its NuLink PHY in a ×64 UCIe package module on Samsung Foundry’s SF4X 4-nm manufacturing process in November 2024.
Figure 1 The die-to-die PHY solution for chiplet interconnect achieves 64 Gbps/bump. Source: Eliyan
While design startups are mostly engaged in die-to-die interconnect and related aspects, the chiplet manufacturing realm seems far more expansive and exciting. Take AlixLabs, for instance, a 2019 startup spun off from Sweden’s Lund University. It specializes in atomic layer etch (ALE) equipment and has developed a technique called ALE pitch splitting (APS), which enables atomic-scale precision in semiconductor manufacturing at dimensions below 20 nm.
Figure 2 The ALE-based solutions perform atomic-level processing to reduce the number of process steps required to manufacture a chip while increasing throughput. Source: AlixLabs
Then there is Black Semiconductor, developing manufacturing methods for back-end-of-line use of graphene to create optical chip-to-chip connections. The company is currently building a manufacturing facility at its new headquarters in Aachen, Germany. FabONE is expected to be operational in 2026, with pilot production scheduled to start in 2027 and full-volume production by 2029.
Figure 3 FabONE will be the world’s first graphene photonics fab. Source: Black Semiconductor
Next, Chipletz, a fabless substrate startup, is working on chiplet-based packaging. Established in 2016 as an activity within AMD and then spun off in 2021, its smart substrate enables the heterogeneous integration of multiple ICs within a single package. That, in turn, eliminates the need for a silicon interposer by providing die-to-die interconnects and high-speed I/O. It also supports different voltage domains from a single supply, outperforming traditional multichip modules and system-in-package (SIP) solutions.
Silicon Box is another semiconductor packaging upstart featured in the Silicon 100 report; it specializes in the production of multi-die components based on chiplet architecture. It currently operates a factory and R&D facility in Singapore and has raised $3.5 billion to build a semiconductor assembly and testing facility in Piedmont, Italy.
Silicon 100 offers a glimpse into the startup ecosystem of 2025 and beyond, highlighting firms that work on various aspects of chiplet design and manufacturing. And their potential is inherently intertwined with another 2025 star: AI and data center semiconductors. One common factor that both chiplets and AI semiconductors share is their association with advanced packaging technology.
Find out more about upstarts focusing on chiplet design and manufacturing in “Silicon 100: Startups to Watch in 2025” by downloading a copy of the report here.
Related Content
- The Age of Chiplets is Upon Us
- Startup Aims to Improve Chiplet Packaging
- Chiplets diary: Three anecdotes recount design progress
- Silicon Box to Invest €3.2B in a Semiconductor Fab in Italy
- Eliyan Breaks Chiplet Memory Wall With Standard Packaging
The post Silicon 100: Chiplet work spanning interconnect PHYs to smart substrates appeared first on EDN.
Audio amplifiers: How much power (and at what tradeoffs) is really required?

My first proper audio setup, discounting the GE Wildcat record player I had as a kid:
was a JVC PC-11 portable stereo system (thank goodness for Google Image Search to refresh my memory!). My (Catholic) high school chaplain owned one, played it (George Winston tapes, to be precise) in the background during weekly confession sessions, and thereby inspired my own subsequent acquisition, which made it through most of college:
Pretty slick setup: this was the pre-CD era, but the JVC PC-11 included an AM/FM tuner, a five-band equalizer, and a cassette player, all (plus the speakers) detachable, along with turntable inputs:
And for the purposes of today’s discussion, check out these modest specs:
- Output power: 2 x 15 W max.
- Wow and flutter: < 0.05% WRMS
- Speaker chassis diameter: 120 mm “High Ceramic” cone
- Impedance: 6 Ohm
- Efficiency: 90 dB / W / m
Nevertheless, it could fill my dorm-later-fraternity room with tunes discernible even over whatever party might have been going on at the time. Distorted? Mebbe. But discernible still.
Historical precedents

Even more “proper” was a Kenwood KA-4002 integrated amplifier (albeit with switch-selectable separate preamp outputs and main amp inputs, plus a separate mono output, no less!), apparently manufactured from 1970-1973, that I acquired at around that same time. I vaguely recall that my dad might have bought it for me used from a co-worker of his? The JVC PC-11 eventually died: I vaguely recall—again—that the cassette deck locked up, plus the pressboard-construction speaker enclosures were getting beat up from my back-and-forth moves between the university campus in West Lafayette and my co-op sessions at Magnavox in Fort Wayne.
At that point, I pressed the KA-4002 into service, along with speakers and other discretes whose identities I no longer recall (though I remember a 10-band equalizer with a bouncing red multi-LED display!). It eventually also met its demise, complete with an acrid “magic smoke” release if I recall correctly, but only after serving me faithfully for a remarkably long time, including, at the end, acting as a power amplifier for a passive subwoofer. Again, check out the modest specs:
- Continuous power (at THD)
- 8 Ohm: 2×18 W (RMS, 20 Hz-20 kHz, 0.05% THD), 2×24 W (1 kHz)
- 4 Ohm: 2×33 W
I reminisced about both of these past personal case studies when I saw on Reddit last November that audio equipment manufacturer Schiit (who I’ve mentioned before) was doing a $99 (vs $149 MSRP) last-call sellout of its Rekkr 2W/channel amplifier. The Rekkr product page is no longer live on the manufacturer’s website, but here’s a January 2, 2025, snapshot courtesy of the Internet Archive. Stock photos (still active on Schiit’s web server as I type this) to start:
Yes, it really is that small:
and came in both black and—briefly—silver patina options:
Now for a peek at the internals:
For those of you still scratching your head at that earlier 2W/channel power output spec, allow me to reassure you that it’s not a typo. More precisely:
- Stereo, 8 Ohms: 2 W RMS per channel
- Stereo, 4 Ohms: 3 W RMS per channel
- Mono, 8 Ohms: 4 W RMS
That last one’s particularly interesting to me; hold that thought. For now, here’s a visual hint:
More:
- Frequency Response: 20 Hz-20 kHz, ±0.01 dB; 3 Hz-500 kHz, ±3 dB
- THD: <0.001%, 20 Hz-20 kHz, at 1 V RMS into 8 ohms
- IMD: <0.001%, CCIR, at 1 V RMS into 8 ohms
- SNR: >120 dB, A-weighted, referenced to full output
- Damping Factor: >100 into 8 ohms, 20 Hz-20 kHz
- Gain: 4 (12 dB)
- Input Sensitivity: AKA Rated Output (Vrms)/Rated Gain. Or, 4/4. You do the math.
- Input Impedance: 20k ohms SE
- Crosstalk: >80 dB, 20 Hz-20 kHz
- Inputs: L/R RCA jacks for stereo input, switch for mono input on R jack
- Topology: fully discrete, fully complementary current feedback, no capacitors in the signal path
- Oversight: over-current and over-temperature sensors with relay shutdown for faults
- Power Supply: 6 VAC, 2 A wall wart, 12,000 µF filter capacitance, plus boosted, regulated supply to input, voltage gain, and driver stages
- Power Consumption: 12 W maximum
- Size: 5” x 3.5” x 1.25”
- Weight: 1 lb.
And here’s a link to the Audio Precision APx report PDF (also still active as I write these words; if not by the time you read them, you can get to it from the Internet Archive product page cache).
Target usage details

How on earth did Schiit rationalize the development and (even more notably) subsequent productization of such a seemingly underpowered product? Here’s the intro to company co-founder (and chief analog design engineer) Jason Stoddard’s “Less Power, More Better” post at the Head-fi forum, which accompanied the public unveil of Rekkr (and its Gjallarhorn “big brother”, which is still in the product line) on February 23, 2023:
And so now there’s Gjallarhorn and Rekkr, and a whole bunch of people saying, “I don’t care I can get a Class D widget with like 100,000 watts that’s the size of a matchbook for $4, why are you making these crazy low-power antique-technology things?”
Let’s start with the TL;DR:
- Because they don’t hiss like a demon cat, drilling slowly into your synapses and draining your soul.
- Because let’s face it, how much power do you need for desktop speakers?
- Because, reaaaaaally let’s face it, how do your neighbors feel about 100,000 watts if you share walls with them?
- Because these little suckers probably get a lot louder than you think.
- Because they sound really, really good.
In short: less powah. Moar better.
A new idea, yes. But maybe one you can get behind.
These next few paragraphs from his post were especially resonant for me, as you’ll understand now that I’ve shared my own personal low-power audio amplifier heritage with you:
A Billion Years Ago…

I had a compact Realistic receiver that did 10 watts per channel into 8 ohms. Together with some tiny Minimus-7 speakers, it sounded pretty darn good. And it got fairly stupidly loud, enough that my parents really regretted me getting into music.
Think about that a bit: 10 watts into 4” 2-way speakers that were probably, what, 85dB efficient at best (in other words, they don’t make much sound for the watts you put in). Bass cranked almost all the way up…loud enough to piss off your shared-wall neighbors…that 10 watts did fine.
Somehow this antique receiver and speakers burrowed its way into the back of my mind and sat there for, like, 40 years. Because I always enjoyed the way it sounded. And I tried to replicate the experience over and over again.
I commend the full post to your attention, but for now, I’ll dive into detail on a few of Jason’s overview points. First off, what did he mean by “Because they don’t hiss like a demon cat, drilling slowly into your synapses and draining your soul”? He was contrasting Rekkr’s Class AB approach with the noisier behavior that (he believed, at least at the time) alternative Class D amplifiers exhibit. My personal take: he might have been right about Class D a few years ago, especially in the near-field configurations he’s specifically advocating for Rekkr, but no longer. More on that in a follow-up post to come.
Usage requirements

Speaking of near-field, let’s attempt to quantify his comment, “Because let’s face it, how much power do you need for desktop speakers?” Near-field translates to (among other things) “close proximity”, i.e., speakers located 5 feet (1.5 meters) or less from the listener. Why’s this important? It’s because sound intensity follows the inverse square law: doubling the distance from a sound source reduces the intensity to one-quarter of its original value (said another way: the sound level drops by 6 dB). There’s a handy online calculator (along with others) for ascertaining sound level variance versus distance on Crown Audio’s System Design Tools webpage. And to my earlier comments: near-field speaker configurations are conveniently-for-Jason also the most likely to result in listener-discernible amplifier-generated “hiss”.
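The 6-dB-per-doubling figure falls straight out of the math; this short sketch (mine, not Crown's calculator) computes the free-field level change with distance:

```python
import math

def spl_change_db(d_ref: float, d_new: float) -> float:
    """SPL change (dB) when moving from d_ref to d_new meters from a point source.

    Free-field inverse square law: the level falls by 20*log10(d_new/d_ref) dB,
    so doubling the distance costs about 6 dB.
    """
    return -20.0 * math.log10(d_new / d_ref)

print(spl_change_db(1.0, 2.0))  # about -6.02 dB for a doubled distance
```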
Directly above that calculator is another one we’re going to focus on most today, titled “Amplifier Power Required”. Note that sound level is a function of multiple factors:
- Distance (already discussed)
- Speaker sensitivity, which indicates how efficiently a speaker converts electrical power into sound. It’s measured in decibels at a specific distance (usually 1 meter) from the speaker when 1 watt of power is applied. The higher the sensitivity (which for any single- or multi-transducer setup also varies with frequency; the spec’d value is an average), the louder the speaker will sound for a given amplifier power output. Or, said another way, the higher the sensitivity, the less power is needed to hit a given sound level.
- And, of course, the amplifier’s per-channel power output capability (optionally also allowing for headroom to prevent clipping caused by sound level “peaking”). This is in part dependent on the speaker impedance load that the amplifier is connected to (lower impedance = higher output power). Reference, for example, the earlier Rekker specs.
Crown Audio’s “How Much Amplifier Power Do I Need?” essay provides an excellent review of these factors, along with their relevance to different kinds of music and listening environments. I’ll only offer one caution: the company’s business model particularly focuses on live sound venue setups, so although the essay concepts remain relevant for the home, you’ll need to tweak the specifics a bit. For now, let’s plug the following values into the online calculator:
- Distance: 1.5 meters
- Desired sound level: 85 dBSPL
- Speaker sensitivity: 85 dB (at 8 ohms)
- Headroom: 3 dB
The calculated result? The required amplifier power is only 4 W per channel for a stereo setup. Decrease the speaker-to-listener distance, and/or the peak sound level (and/or associated headroom), and/or increase the speaker sensitivity, and the per-channel amplifier power requirement plummets further.
To wit, trust me when I tell you that I didn’t proactively twiddle with the input values to come up with any particular calculator output end result. In fact, they actually overshoot those of the setup I’m planning on hooking up shortly after completing this preparatory write-up. It’s based on a set of Audioengine P4 passive speakers:
rated at 88-dB sensitivity and 4-ohm nominal impedance. Their likely normal distance to me will be more like 3 feet, not 5 feet, i.e., 1.5 meters (but we’ll stick with 5 feet for now). And when Jason postulates about how “neighbors feel about 100,000 watts if you share walls with them”, in my case, that’s my wife, who I doubt will long tolerate 85 dB (or even close) sound levels spreading from my office throughout the rest of the house. That said, try this data set:
- Distance: 1.5 meters
- Desired sound level: 85 dBSPL
- Speaker sensitivity: 88 dB (at 4 ohms)
- Headroom: 3 dB
And you’ll discover that the result, 2W/channel, is less than the 3W/channel (4 ohm) output power capabilities of a single Rekkr. To confirm or deny the calculator claim, I’m going to try out this single-amp configuration first. And then, since I happen to have a pair (two pairs, actually, both black and silver sets) and it’s so easy to configure them in monoblock mode (the user manual is here on Schiit’s site, for all the implementation details), I’ll try ‘em that way, too. Stand by for results to come in a follow-up post (or a few) soon.
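For reference, the arithmetic behind such calculators boils down to the standard sensitivity equation; here's a sketch (my reconstruction of the textbook formula, not Crown's actual code) that reproduces both calculator results quoted earlier:

```python
import math

def amp_power_w(distance_m, target_spl_db, sensitivity_db, headroom_db):
    """Per-channel amplifier power needed to hit target_spl_db at distance_m.

    sensitivity_db is the speaker's output at 1 W / 1 m. The distance term
    adds 20*log10(distance) dB of inverse-square loss; headroom covers peaks.
    """
    required_at_1m = target_spl_db + headroom_db + 20.0 * math.log10(distance_m)
    return 10.0 ** ((required_at_1m - sensitivity_db) / 10.0)

print(amp_power_w(1.5, 85, 85, 3))  # ~4.5 W, i.e., the "4 W" per-channel case
print(amp_power_w(1.5, 85, 88, 3))  # ~2.3 W, i.e., the "2 W" per-channel case
```

Note how hard the sensitivity term works: a 3-dB-more-sensitive speaker halves the power requirement, which is why the 88-dB Audioengine P4s ask so little of the amplifier.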
Is bigger always better?

As is often the case with my writeups, my motivation here wasn’t just to tell you about a diminutive audio amplifier that seemingly punches above its weight. And it also wasn’t just to quantify for you that, in fact, at least for its target usage scenarios, Rekkr’s weight was exactly right. It was to use this case study as an example of a bigger-picture situation: that in the absence of in-depth understanding to the contrary, consumers are always going to assume that “bigger numbers are always better”…even if those bigger (power output, in this case) numbers come with bigger price tags, and/or require housing the product in bigger (and heavier) form factors, and/or have bigger associated distortion specs, and/or…you get my drift.
I strongly suspect that many of you, whether you’re in the audio industry or another, regularly struggle with “crazy numbers-for-sake-of-numbers, power-nervosa specsmanship” (see below for the verbiage reference) demands from your target customers and/or your own company’s marketing, sales, corporate execs and others, often motivated by your competitors’ statements and new-product actions. I’d love to hear more about the specifics of your various situations and how you deal with them. Please sound off with your thoughts in the comments; your fellow readers and I look forward to reading and responding to them. Thank you in advance!
Jason delves into this understandably frustrating dichotomy at the tail end of his February 2023 post, which I’ll reproduce in full in closing, preceded by a reality-check from a subsequent post he made last November, when word of the Rekkr discontinuation became known to the community: “Rule 1 of all business: don’t make what people don’t buy.”
The “less power, more better” manifesto

Now, some people are still not convinced. They want more power. And that’s fine. Maybe I’ll stack two Vidar transformers in a Tyr chassis and do a 300WPC stereo amp. Probably not, because it would also require a panic fan, and I hate fans, but after last year’s ordering debacle, we got lots of Vidar transformers to play with.
But I’d say, keep an open mind. You might be surprised.
We’re sooooooo conditioned to want more, more, mooore, moaaarrr! that I think sometimes we lose perspective, like I did when I started this amp adventure. And that can quickly devolve into venerating something that can produce huge power above everything else—even if we don’t need that power.
I mean, here’s the thing. I’ve had desktop systems for ages. A lot of them used a 60W integrated amp—first the Sumo Antares prototype, then the Ragnarok, then the Rag 2. And each of them had a common denominator: I never used even a fraction of those amp’s output power.
And yeah, I’ve also had desktop systems based on powered monitors. Including ones that like to brag they have like 1000W for the woofer and 50,000W for the tweeter and that sounds like 10,000,000W and etc. (Well, maybe a bit of hyperbole there, but you know what I mean: powered monitors with a bunch of watts and claims of hitting 1XXdB at 1 meter and other silly numbers.)
And each time, those mega-powered systems were used once for that circus trick of huge output—then turned down for use at regular listening levels.
Because, you know, yeah, they go loud, but Rina’s yelling at me from the other room.
And each time, those mega-powered systems didn’t last long on the desktop—their infernal hisssssssssssssssssssssss drove me bonkers, and I went back to passive.
So, yeah, something used for a party trick once (but then annoys the neighbors) with the added bonus of the unrelenting hiss of a demonic cat drilling its way into your ears…yeah, no thanks, not for me.
Aside: and yes, I know, there are pros that have legit uses for such monitors, and people who don’t have to worry about neighbors. Not dissing those. Just asking: do you really need it? Can you use it? Or is it just crazy numbers-for-sake-of-numbers, power-nervosa specsmanship?
Sooooo…maybe it’s time to recalibrate.
To sit back, and think, “Do I really need power for the sake of power?”
Yes. I know. It’s a challenging idea.
But maybe, just maybe, it’s time for something less power, more better.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- How to stop worrying and love compressed audio
- High quality and lossy: Audio upgrades don’t need to be costly
- An update on music codecs
- Hands-on review: Is a premium digital audio player worth the price?
The post Audio amplifiers: How much power (and at what tradeoffs) is really required? appeared first on EDN.
Regulator delivers clean power to sensitive circuits

Diodes’ AP7372 low-dropout (LDO) regulator offers high power supply ripple rejection (PSRR) and low output noise for precision signal chains. It powers ADCs, DACs, VCOs, and PLLs, helping meet stringent ripple and noise targets in test and measurement, communication, industrial automation, and medical applications.
The AP7372 maintains just 8 µVRMS output noise, independent of fixed output voltage, and delivers PSRR of 90 dB at 10 kHz, 70 dB at 100 kHz, and 52 dB at 1 MHz. Operating from 2.7 V to 20 V, it provides up to 200 mA of output current with a typical 120-mV dropout voltage at 200 mA. The wide input range covers common rails (19.5 V, 12 V, 5 V) and single-cell Li-ion sources, while outputs from 1.2 V to 5.0 V suit analog and mixed-signal loads.
Four fixed-output voltages are available—1.8 V, 2.5 V, 3.3 V, and 5.0 V—along with an adjustable output down to 1.2 V. A dedicated resistor-divider pin allows fine adjustments above the nominal fixed-output values. The LDO also provides an enable pin for system-level control, such as power-up sequencing or shutdown when the regulator is unused. Shutdown current is approximately 3 µA, with a quiescent current of 66 µA.
The AP7372 LDO regulator costs $0.30 each in 1000-piece quantities.
The post Regulator delivers clean power to sensitive circuits appeared first on EDN.
eFuse limits current on server input rails

A 60-A eFuse from Alpha & Omega, the AOZ17517QI, is optimized for 12-V input power rails in servers, data centers, and telecom infrastructure. Operating from 4.5 V to 20 V (27 V absolute maximum), it safeguards the main input bus from interruptions caused by abnormal loads or fault conditions.
The eFuse co-packages a high-performance IC with protection features and a high-SOA trench MOSFET, which serves as the device’s controllable power switch. It continuously monitors current through the MOSFET, limiting it if it exceeds the set threshold. If the overcurrent persists, the switch turns off, safeguarding downstream devices much like a conventional fuse.
The MOSFET’s low on-resistance of 0.65 mΩ minimizes conduction losses, while the eFuse isolates the load from the input bus when switched off. Built-in startup SOA management and additional protections enable glitch-free system power-up and safe hot-plug operation.
The AOZ17517QI offers auto-restart or latch-off options. Prices start at $1.80 each in 1,000‑unit quantities. It is available now in production quantities, with a lead time of 14 weeks.
The post eFuse limits current on server input rails appeared first on EDN.
DC-link capacitors endure heat and moisture

Knowles’ Cornell Dubilier Type BLS DC-link capacitors operate in harsh environments with temperatures up to 125°C. The company reports the capacitors exceed industry standards in temperature-humidity bias (THB) testing, achieving a 100% longer lifespan than comparable devices.
The series undergoes 2000 hours of THB testing at 85°C and 85% relative humidity at rated voltage. Type BLS capacitors also meet automotive-grade electrical and mechanical requirements per AEC-Q200, ensuring reliable operation under high temperature and moisture conditions.
Type BLS DC-link capacitors operate across a wide temperature range, from –55°C to 125°C, maintaining stable capacitance and low ESR even under the thermal stress of high-speed SiC switching. They offer capacitance values from 1 µF to 220 µF and voltage ratings between 450 VDC and 1100 VDC.
Encased in UL94-V0 rated plastic with thermosetting resin potting, the capacitors resist solvents and mechanical stress for long-term durability. Versatile mounting options allow horizontal or vertical board placement with 2- or 4-pin configurations.
Check availability, request samples, or get a quote on the product page linked below.
The post DC-link capacitors endure heat and moisture appeared first on EDN.
TVS diodes shield consumer electronics interfaces

TDK has added three new models to its SD0201 series of TVS diodes for USB Type-C, HDMI, DisplayPort, and Thunderbolt connections. They protect sensitive circuits in smartphones, laptops, wearables, and networking devices from ESD and transient surges.
Each component comes in a 0201 chip-scale package with dimensions of 0.58×0.28×0.15 mm, suited for space-constrained designs. With working voltages of ±1 V, ±2 V, and ±3.6 V, the TVS diodes meet the IEC 61000-4-2 standard for ESD robustness up to ±15 kV and handle surge currents up to 7 A, depending on the variant. Their symmetrical design enables bidirectional protection for I/O interfaces, and they feature low leakage current with dynamic resistance down to 0.16 Ω.
The three new devices in the SD0201 series differ in their DC working voltages and parasitic capacitances:
- SD0201SL-S1-ULC101 (B74121U1036M060): ±3.6 V, 0.65 pF
- SD0201-S2-ULC105 (B74121U1020M060): ±2 V, 0.7 pF
- SD0201SL-S1-ULC104 (B74121U2010M060): ±1 V, 0.15 pF
Datasheets for the TVS diodes are available for downloading on the product page linked below.
The post TVS diodes shield consumer electronics interfaces appeared first on EDN.
Montage broadens timing device lineup

Fabless semiconductor company Montage Technology is now sampling its clock buffers and spread-spectrum oscillators, following the mass production of its clock generators. Designed for precise, low-jitter performance, these devices deliver reliable timing for AI servers, communication infrastructure, industrial control systems, and automotive electronics.
Clock chips generate the reference signals that maintain synchronization across system components. Leveraging expertise in mixed-signal IC design, core I/O technology, and PLL integration, Montage’s portfolio supports complete clock tree implementations, enhancing timing accuracy and system efficiency in a range of applications.
The timing portfolio includes clock generators with up to four independent differential outputs and clock buffers with four to ten scalable outputs for lossless signal distribution. It also features spread-spectrum oscillators that suppress EMI to enhance system stability. The devices deliver low output phase noise and offer flexible per-channel configuration—covering I/O type, drive strength, voltage, frequency, and spread spectrum—for precise receiver alignment.
Montage offers six clock generator models now in mass production, along with twenty clock buffer models and four spread-spectrum oscillators available for sampling. For more information, visit the product page linked below or email globalsales@montage-tech.com.
The post Montage broadens timing device lineup appeared first on EDN.
Analysis of large data acquisitions

Digital acquisition instruments like oscilloscopes and digitizers are incorporating increasingly large acquisition memories; lengths in the gigasample (GS) range are commonly available. The advantage of long acquisition memories is that they can capture longer time records. They also support higher sampling rates at any given record duration, providing better time resolution.
The downside of these long records is the time required to analyze them. Most users couple the instrument to a host computer and transfer the data records to it for post-acquisition analysis.
The longer the record, the longer the transfer time and the slower the testing. Many instruments have added tools to allow internal analysis within the instrument, allowing only the results of the analysis to be transferred instead of all the raw data. This can save a great deal of time during testing. This article will investigate the use of several of those analysis tools.
Case study: The startup of an SMPS
Testing the startup of a switched-mode power supply (SMPS) provides an example of a relatively long acquisition. Figure 1 shows an example of a 10-ms acquisition covering the startup of an SMPS.
Figure 1 A 10-ms acquisition showing the startup of an SMPS, including the drain-to-source voltage (channel 1-yellow), drain current (channel 2-red), and gate drive voltage (channel 3-blue) of the FET switch. Source: Art Pini
This record, sampled at 250 megasamples per second (MS/s), has 2.5 million samples per channel. That is a lot of data to render on a screen with 1920 × 1080 pixel resolution. The oscilloscope acquires and stores all the data, but when more than 1920 samples are being displayed, it compacts the displayed data. Rather than simply decimating the signal records, which might cause the loss of significant data points, it detects the significant peaks and valleys and includes those values on the display. This enables users to find significant events within the compacted displays.
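This peak-preserving compaction is straightforward to approximate offline. The following is a minimal sketch, assuming a simple per-column min/max reduction (the `compact_minmax` helper and the record contents are invented for illustration, not any vendor's actual algorithm):

```python
import numpy as np

def compact_minmax(samples, columns):
    """Peak-preserving compaction: map a long record onto `columns`
    display columns, keeping the min and max of each bucket so that
    narrow glitches survive the reduction."""
    samples = np.asarray(samples, dtype=float)
    n = (len(samples) // columns) * columns   # trim to a whole number of buckets
    buckets = samples[:n].reshape(columns, -1)
    return buckets.min(axis=1), buckets.max(axis=1)

# A 2.5M-sample record with one narrow spike; taking every Nth sample
# would very likely miss it, but min/max compaction keeps it.
record = np.zeros(2_500_000)
record[1_234_567] = 5.0
lo, hi = compact_minmax(record, 1920)
print(hi.max())   # the spike is still visible after compaction
```

A simple stride-based decimation of the same record (`record[::1302]`) returns all zeros, which is why compacted displays keep peaks and valleys rather than sparse samples.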
Basic measurements
There are three acquired waveforms: the drain-to-source voltage (VDS), drain current (ID), and gate-to-source voltage (VGS) of the primary FET switch. The test will look at the variation in these signals as the power supply controller powers up the supply. Some basic measurements of the signal amplitudes are made and displayed. The peak-to-peak amplitudes of VDS and ID, as well as the amplitude of VGS, are shown as parameters P1, P2, and P3, respectively.
The frequency of the VGS signal and the number of edges contained in the acquisition appear as parameters P4 and P5. Amplitude measurements are taken once per acquisition. Time measurements such as frequency, period, width, and duty cycle are made once per waveform cycle. So, the frequency measurement includes all 1163 cycles acquired. This is an example of “all instance measurements.” This feature ensures that every cycle in the signal is captured in the measurement.
Zoom in on the details
All the acquired data is stored in the instrument’s acquisition memory and can be expanded using zoom traces to see the details, as shown in Figure 2.
Figure 2 Zoom traces provide horizontally or vertically expanded views of the acquired traces, allowing detailed study of the elements of each acquired waveform. Source: Art Pini
In the figure, the zoom traces of the acquired waveforms are horizontally expanded and displayed at 5 µs per division, a horizontal expansion of 200:1. The zoom traces are taken from the area of the acquired waveform highlighted with higher intensity. The SMPS uses pulse width modulation (PWM) to control its output power.
The zoom traces show the variations in the amplitude and duty cycle of the waveforms just after the gap at 456 µs in the acquired waveforms. The zoom traces are locked together to keep the displayed waveforms time synchronous. They can be scrolled horizontally or vertically to show the details in any part of the source waveforms.
Finding desired events in long records
The question of locating areas of interest in these long acquisitions has several answers. Histograms of measured parameters can display the range of values and the number of measurements made by the instrument. A measurement track displays any measurement value versus time. The track can be aligned with the source waveform to show where in the acquisition that value occurs. Some instruments offer scanning functions to map where, in the long record, specific values of a measured parameter occur. These features are extremely useful in analyzing long records.
Histograms
A histogram plots the number of measurements falling within each small range of values (known as a bin) against the nominal value of the bin. Figure 3 shows a histogram measuring the duty cycle of the VGS waveform.
Figure 3 The histogram of the duty cycle measurement of the VGS waveform shows the distribution of measured values with a mean value of 28.4%, a maximum value of 38.9%, and a minimum value of 0.3%. Source: Art Pini
The histogram shows the range of values encountered in a measurement. This example shows that the most commonly occurring value of the duty cycle is 31.6%. This is read from the X@peak parameter (P4). The range of duty cycle values is from 0.3 to 38.9%. This data is based on 1164 measurements shown in the total population measurement (totp – P8). How is the location of the maximum value of the duty cycle found? A measurement track matches measurements to a specific cycle in the acquired waveform.
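The binning behind such a histogram is easy to reproduce offline. The sketch below bins a synthetic set of per-cycle duty-cycle values (the distribution itself is invented for illustration; only the population of 1164 comes from the text) and recovers an X@peak-style statistic:

```python
import numpy as np

# Hypothetical per-cycle duty-cycle measurements (percent), standing in
# for the 1164 values the scope accumulated during SMPS startup.
rng = np.random.default_rng(0)
duty = np.clip(rng.normal(28.4, 6.0, 1164), 0.3, 38.9)

counts, edges = np.histogram(duty, bins=50)

# X@peak: the nominal value of the most heavily populated bin.
peak_bin = np.argmax(counts)
x_at_peak = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
print(f"population={counts.sum()}, X@peak={x_at_peak:.1f}%, "
      f"range={duty.min():.1f}..{duty.max():.1f}%")
```

The total population, peak bin, and min/max range correspond to the totp, X@peak, and range readouts described above.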
Measurement track
A measurement or parameter track is a waveform composed of a series of measured parameter values plotted against time at the same sample rate as the source waveform on which the measurement was made. It is time synchronous with the source waveform.
Figure 4 is an example of a measurement track based on the duty cycle at level (duty@lv) measurement of the VGS waveform.
Figure 4 The trace F1 is the track of duty@lv parameter (P1) values over the entire acquisition. It is time synchronous with the trace of channel 3. Source: Art Pini
The track function, located beneath the source waveform, illustrates how the duty cycle of the gate drive signal changes over time during SMPS startup. After a brief gap, it rises steadily until it reaches a plateau, then drops to a relatively stable value.
The parameter maximum (max-P2) reads the maximum value of the duty cycle as 38.86%. The parameter horizontal location of the maximum (x@max-P3) locates the maximum at 3.12 ms after the trigger (zero time). The parameter markers (blue dashed lines) mark these values on the track display. The center of the zoom traces can be set to 3.12 ms, and the zoom traces are expanded about that point to show the specific cycle of each waveform with the maximum duty cycle.
The VGS voltage appears in zoom trace Z3. The duty cycle at level is read for that specific cycle of the VGS signal in parameter P4, confirming that it is the cycle with the maximum duty cycle value. The track function helps locate specific waveform events within the long record without manually scrolling through the whole waveform to find them.
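A duty-cycle track can be emulated from a raw record: detect rising edges, compute duty cycle per cycle, and timestamp each value at its cycle’s start so the result stays time synchronous with the source. The sketch below does this on a synthetic gate-drive waveform (the sample rate comes from the article; the 116-kHz switching frequency and the linear duty-cycle ramp are assumptions for illustration):

```python
import numpy as np

def duty_track(wave, t, level=0.5):
    """One (time, duty%) point per cycle, timestamped at each cycle's
    rising edge so the track stays time synchronous with the source.
    A real scope interpolates crossings between samples; this doesn't."""
    above = wave > level
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1
    times, duties = [], []
    for r0, r1 in zip(rising[:-1], rising[1:]):
        times.append(t[r0])
        duties.append(100.0 * np.count_nonzero(above[r0:r1]) / (r1 - r0))
    return np.array(times), np.array(duties)

fs, f0 = 250e6, 116e3                     # sample rate (from text), assumed PWM frequency
t = np.arange(0, 2e-3, 1 / fs)
duty_cmd = np.clip(5 + 17e3 * t, 0, 39)   # duty ramps up during startup, in %
wave = ((t * f0) % 1.0 < duty_cmd / 100.0).astype(float)

tt, dd = duty_track(wave, t)
i = int(np.argmax(dd))                    # max and x@max, like parameters P2/P3
print(f"max duty {dd[i]:.1f}% at {tt[i] * 1e3:.2f} ms")
```

The argmax step mirrors the max/x@max parameter pair: once the track exists, locating the cycle with the extreme value is a single lookup rather than a manual scroll.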
Tracks can show a variety of characteristics, such as peaks, valleys, periodicity, or rate of change. Periodicity in a track of frequency or phase provides information about frequency or phase modulation, respectively. In this example, the track has a nearly linear slope as the controller adjusts the duty cycle. The rate of change is of interest and can be easily measured, as shown in Figure 5.
Figure 5 Using the relative horizontal cursors to determine the rate of change of the duty cycle of the VGS waveform over the linearly changing portion of the track. Source: Art Pini
The relative horizontal cursors read the slope of the waveform between the cursor lines. This value is displayed and highlighted by an orange box in the waveform annotation box for the math trace F1 as 7.03 k% per second (7% per millisecond).
WaveScan—automatic scan and search of long waveforms
The oscilloscope used in this example has a scan and search engine called WaveScan that can locate unusual events in a single capture or scan for a specific measurement event in multiple acquisitions over a long period. WaveScan has over twenty search modes for analog or digital channel acquisition events. Figure 6 shows an example of a search using WaveScan to find all instances of a duty cycle measurement greater than 38%.
Figure 6 Using WaveScan, an automatic search tool, to search the VGS waveform for duty cycle values greater than 38%. Source: Art Pini
The WaveScan setup dialog box shows the search criteria set up to find duty cycle values greater than 38%. WaveScan can search based on measurements, waveform edges, non-monotonic edges, and serial data patterns. A numeric search, such as one on measurements, can be based on values greater than or less than a limit, within or outside of a range, or on the rarest events.
In the example, the search is based on measuring the duty cycle at level with values greater than 38%. The search results are marked with red lines on the source trace and appear in the table in the upper left corner. Each event matching the search criteria is listed in the table in the order of occurrence. The maximum duty cycle value of 38.859%, located previously, is item 3. The table entries are hyperlinked to the Zoom trace, and if one is selected, it will center that event in the Zoom trace. In the example, event six is selected. The zoom trace Z1 has been centered on the sixth cycle with a duty cycle greater than 38%, highlighting its location at 3.2295 ms.
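At its core, this kind of numeric search reduces to a threshold scan over a measurement track. Here is a minimal sketch, assuming the per-cycle duty-cycle values are already available as a track (the `scan_greater_than` helper and the synthetic track are hypothetical, not the instrument's actual implementation):

```python
import numpy as np

def scan_greater_than(times, values, limit):
    """Return (index, time, value) for every measurement above `limit`,
    in order of occurrence, like a table of search hits."""
    hits = np.flatnonzero(values > limit)
    return [(int(i), float(times[i]), float(values[i])) for i in hits]

# Hypothetical per-cycle duty-cycle track (time in ms, value in %).
t_ms = np.linspace(0.0, 10.0, 1163)
duty = 30.0 + 9.0 * np.sin(2 * np.pi * t_ms / 10.0)

events = scan_greater_than(t_ms, duty, 38.0)
idx, when, value = events[0]
print(f"{len(events)} events > 38%; first at {when:.3f} ms ({value:.2f}%)")
```

Each tuple in `events` plays the role of a hyperlinked table row: its timestamp is exactly what a zoom trace would be centered on.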
Post-acquisition analysis tools
Modern instruments offer longer acquisition times and include a host of tools to aid in analyzing the data generated. Features such as compaction, zoom, histograms, track, and WaveScan enable various analyses and measurements. The tools also augment the measurements by annotating them numerically or graphically on the display. These features enable local analysis, which accelerates testing and reduces the amount of data that needs to be transferred to external computers.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Why deep memory matters
- Understanding the importance of acquisition memory
- Oscilloscope special acquisition modes
- FFTs and oscilloscopes: A practical guide
- Basic oscilloscope operation
The post Analysis of large data acquisitions appeared first on EDN.
Vibration motors: The key to compact haptic solutions

Vibration motors are the silent workhorses behind tactile feedback in wearables and handheld devices. These compact actuators convert electrical signals into physical cues, enriching user interaction. Whether you are prototyping or troubleshooting, understanding their behavior and integration is key to designing responsive, reliable hardware.
Let’s start with the basics: How they generate vibration, and what sets different types apart.
ERM and coin vibration motors
Design engineers often categorize vibration motors by form factor to simplify selection and integration. The two primary types are eccentric rotating mass (ERM) vibration motors and coin vibration motors.
ERM vibration motors generate vibration by spinning a mass that is offset from the center of rotation. This off-center mass creates an imbalance, producing the desired vibration effect. These motors typically have a cylindrical form factor, with the rotating shaft and eccentric mass often exposed. Their design is straightforward and well-suited for applications where space constraints are less critical.
Coin vibration motors, sometimes referred to as “pancake” motors, also rely on an offset rotating mass to produce vibration. However, they feature a flat, compact, and fully enclosed form factor. Internally, they contain a short shaft and a flat mass that is offset from the center of rotation, allowing the mechanism to fit within the coin-shaped housing.
Although coin motors operate on the same ERM principle, industry convention typically distinguishes them by form: the exposed cylindrical type is commonly referred to as an ERM vibration motor, while the flat, enclosed type is known as a coin or pancake vibration motor.
Figure 1 ERM and pancake vibration motors generate haptic feedback via eccentric rotating mass. Source: Author
LRA vibration motors
While our primary focus has been on vibration motor form factors, there is another important category worth highlighting: linear resonant actuator (LRA) vibration motors. In terms of external appearance, LRAs often resemble coin vibration motors, sharing the same flat, compact form factor. This visual similarity can be misleading, as the underlying mechanism is fundamentally different.
Unlike ERM motors, which rely on a rotating offset mass driven by a unidirectional current, LRAs operate using a linearly oscillating mass. This mass moves back and forth in a controlled manner, following the principles of simple harmonic motion. Because the direction of movement continuously changes, LRAs require an alternating current (AC) signal with a specific frequency that matches the resonant frequency of the actuator.
This distinction in operating principle allows LRAs to deliver more precise and efficient haptic feedback, making them well-suited for applications where responsiveness and control are critical. Despite their similar form factor to ERM and coin motors, LRAs represent a distinct class of vibration technology.
Figure 2 LRA vibration motor generates haptic feedback via resonant linear actuation. Source: Author
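Why the drive frequency must match the actuator's resonance can be seen from a second-order model: treat the moving mass as a damped mass-spring system and sweep the drive frequency. This is an illustrative sketch only; the 150-Hz resonance matches the typical figure cited later in this article, while the Q of 25 is an assumed value, not from any datasheet:

```python
import numpy as np

def lra_response(f_drive, f0=150.0, q=25.0):
    """Steady-state amplitude of a second-order (mass-spring) LRA model,
    normalized to its low-frequency response. f0 is the resonant
    frequency; the Q factor of 25 is an assumption for illustration."""
    r = f_drive / f0
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (r / q) ** 2)

for f in (100, 140, 150, 160, 200):
    print(f"{f} Hz drive -> relative amplitude {lra_response(f):.1f}x")
```

Even a 10-Hz offset from resonance costs a large fraction of the vibration amplitude in this model, which is why dedicated LRA drivers track the resonant frequency rather than driving open loop.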
Note that there is also a growing category of brushless vibration motors, typically based on brushless DC (BLDC) technology. These motors offer improved durability and efficiency compared to traditional brushed ERM designs, thanks to the absence of mechanical brushes.
While they may share similar cylindrical or coin-like form factors, their internal construction and control requirements differ. Brushless vibration motors are especially useful in high-reliability applications where long mean time between failures (MTBF) and low maintenance are priorities.
Figure 3 BLDC motors often feature additional wires that enable functions like speed regulation and directional control. Source: Author
How to use actuators
Up next, we take a closer look at how to use these tiny actuators effectively in your designs.
To start with, ERM and coin vibration motors that run on DC can be powered directly from a suitable DC source. But when it comes to haptics—where you want the motor to respond to input—you will probably want to hook it up to a microcontroller. That way, you can control not just the on/off state but also tweak the amplitude and define vibration profiles.
For those seeking integrated driver solutions, ICs such as the NCP5426 offer a reliable and efficient alternative to using a simple BJT or MOSFET.
LRAs, on the other hand, operate on an AC signal and are tuned to a specific resonant frequency. Driving them properly usually means using a dedicated LRA driver to ensure optimal performance.
At this point, it’s worth noting that the DRV2605/DRV2605L from Texas Instruments is a popular motor driver designed for haptic feedback applications. Unlike basic motor drivers, it can generate nuanced vibration patterns, making it ideal for creating tactile feedback that feels responsive and intentional. Thus, it offloads waveform generation from the host processor, simplifying design and saving resources.
Quick note: After reviewing numerous datasheets, a few general trends emerge. Most ERM vibration motors are rated around 3 V, with a starting voltage near 2.5 V and a rated current close to 100 mA at full voltage.
In contrast, most LRAs tend to have a rated voltage of approximately 2 V RMS, a nominal operating current around 150 mA, and a resonant frequency of 150 Hz ±5 Hz. That said, consider these figures as ballpark estimates rather than absolutes. Always double-check with the specific datasheet!
Other design considerations
When it comes to mounting vibration motors, they are typically placed within an enclosure or directly onto a PCB. For enclosure-based setups, custom 3D-printed housing can be a convenient way to fasten the motor. If you are mounting the motor to a PCB, many models offer through-hole pins for straightforward soldering. For coin and LRA types, the adhesive backing is usually sufficient for reliable attachment.
As a little extra, here is a handy blueprint for testing/driving a 6-wire vibration motor with an integrated driver (Model NFP-BLV3650-FS, for example).
Figure 4 This handy little circuit tests and runs most vibration motors with internal drivers. Source: Author
Just to round things off, there are numerous ways to integrate haptic feedback into your devices, with vibration motors being one of the most accessible options. Whether you opt for a simple implementation or a more sophisticated approach, adding haptics can significantly elevate your device’s user experience and overall effectiveness.
The insights shared here are intended to serve as a springboard, hopefully helping you incorporate haptic feedback into your designs with confidence and creativity.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Baby, You Can Drive My BLDC
- Haptic or Vibration motors – A Quick Look
- Single-phase BLDC motor minimizes noise, vibration
- Vibration motor driver IC applies the Freewheel principle
- Motor-driver IC features Hall-element commutation for driving vibration motors
The post Vibration motors: The key to compact haptic solutions appeared first on EDN.
Improve the accuracy of programmable LM317 and LM337-based power sources

Several Design Ideas (DIs) have employed the subject ICs to implement programmable current sources in an innovative manner [Editor’s note: DIs referenced in “Related Content” below]. Figure 1 shows the idea.
Figure 1 Two independent current sources, one for loads referenced to the more negative voltage, and the other for those to the more positive one. The Isub current sources control the magnitudes of the currents delivered to the loads.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Each of the ICs works by enforcing Vref = 1.25 V (±50 mV over load current, supply voltage, and operating temperature) between the OUT and ADJ terminals. The Isubs are programmable current sources (PWM-implemented or otherwise) which produce voltage drops Vsubs across the Rsubs.
Given that there are ADJ terminal currents IADJ (typically 50 µA, maxing out at 100 µA), the load currents can be seen to be:
Iload = [Vref + (IADJ − Isub) · Rsub] / Rsns
When Isub is 0, the load current is at its maximum, Imax, and its uncertainty is a mere ±50 mV / 1250 mV = ±4%. But when Isub rises to yield a desired current of Imax/10, the uncertainty rises to ±40%; the intended fraction of 1.25 V is subtracted, but the unknown portion of the ±50 mV remains. If Imax/25 is desired, the actual load current could be anywhere from 0 to twice that value. Things are actually slightly worse, since the uncertainty in IADJ is a not-insignificant portion of the typically few-milliamp maximum value of Isub.
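The way this uncertainty scales can be tabulated with a short worked calculation of the numbers above (a sketch that ignores the IADJ term, as the ±4% figure does):

```python
def current_uncertainty(fraction_of_imax, vref_tol_mv=50.0, vref_mv=1250.0):
    """Relative uncertainty of the programmed load current when Isub
    cancels (1 - fraction) of Vref but the full +/-50-mV tolerance
    remains. The IADJ contribution is ignored here."""
    return vref_tol_mv / (vref_mv * fraction_of_imax)

for frac in (1.0, 0.1, 1.0 / 25.0):
    print(f"Imax x {frac:g}: +/-{100.0 * current_uncertainty(frac):.0f}%")
```

At Imax/25 the result is ±100%: the load current could indeed be anywhere from zero to twice the intended value.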
Circumnavigating the accuracy limitations of reference voltages
Despite the modest accuracy of their reference voltages, these ICs have the advantage of built-in over-temperature limiting. So it’s desirable to find a way around their accuracy limitations. Figure 2 shows just such a method.
Figure 2 Two independent current regulators. The Isub magnitudes are programmable and are often implemented with PWMs. Diodes connected to the ADJ terminals protect the LM ICs during startup. The 0.1-µF supply decoupling capacitors for U1 and U3 are not shown.
The idea of the three-diode string was borrowed from the referenced DIs [Editor’s note: in “Related Content” below]. It ensures that even for the lowest load currents (the LM ICs’ minimum operating current is spec’d at 10 mA max.), the ADJ terminal voltages needn’t be beyond the supply rails.
The OPA186 op-amp’s input operating range extends beyond both supply rails (a maximum of 24 V between them is recommended), and its outputs swing to within 120 mV of the rails for loads of less than 1 mA.
The maximum input offset voltage, including temperature drift and supply voltage variations, is less than ±20 µV. An input current of less than ±5 nA maximum means that for Rsubs of 1 kΩ or less, the total input offset voltage is 2000 times better than the LMs’ ±50 mV.
Placing the LM ICs in this op-amp’s feedback loop improves output current accuracy by a similar factor (but see addendum).
Adapting Jim Williams’ design for a current regulator
Jim Williams of analog design fame published an application note placing the LM317 in an LT1001-based feedback loop to produce a voltage regulator. Nothing prevents the adaptation of this idea to a current regulator. The LT1001’s typical gain-bandwidth (GBW) product is 800 kHz, close to the 750 kHz of the OPA186, so no stability problems are expected. And there were none when the LM317 circuit was bench-tested with an LM358 op amp (GBW typically 700 kHz), which I had handy.
Just as you would with the Figure 1 designs, make sure the LM ICs are heatsinked for intended operation. Enclosing them in a feedback loop won’t help if their over-temperature circuitry kicks in. But under the temperature limit, this circuit increases not only load current accuracy, but also the IN-terminal impedances and the rejection of both the power supply and the LM’s references’ noises.
Note that some of the reduction in reference voltage error can be traded off to reduce power dissipation by making the Rsns resistors small. You can also convert the design to a precision voltage regulator by replacing the three-diode strings with a resistor and moving the load to between the OUT terminal and its Rsns resistor’s supply terminal.
Addendum
There’s a missing term in the equation given for load current. In Figure 2, the unknown and unaccounted-for amount of ADJ terminal current is added to the load current.
Considering that the LMs’ minimum specified operating current (see the LM317 3-Pin Adjustable Regulator datasheet and LMx37 3-Terminal Adjustable Regulators datasheet)—and therefore the minimum current through the load—is 10 mA at 25°C, the ADJ maximum of 100 µA is small potatoes. Still, there might be applications where it would be desirable to account for it. Figure 3 is a possible solution, although I’ve not bench-tested it.
Figure 3 Replacing the ADJ terminal-connected diodes with JFETs preserves startup protection for the LM ICs.
The ‘201 and ‘270 JFETS route the ADJ terminal current through the Rsns resistors where it can be recognized and accounted for as part of the current that passes through the load. Cheaper bipolar transistors (which would reroute almost all IADJ) could be used in place of the JFETS, but that would require an additional diode in series with the three-diode string.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Cross connect complementary current sources to reduce self-heating error
- A negative current source with PWM input and LM337 output
- PWM-programmed LM317 constant current source
The post Improve the accuracy of programmable LM317 and LM337-based power sources appeared first on EDN.
Matchmaker

Precision-matched resistors, diode pairs, and bridges are generic items. But sometimes an extra critical application with extra tight tolerances (or an extra tight budget) can dictate a little (or a lot) of DIY.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1’s matchmaker circuit can help make the otherwise odious chore of searching through a batch of parts for accurately matching pairs of resistors (or diodes) quicker and a bit less taxing. Best of all, it does precision (potentially to the ppm level) matchmaking with no need for pricey precision reference components.
Here’s how it works.
Figure 1 A1a, U1b, and U1c generate precisely symmetrical excitation of the A and B parts being matched. The asterisked resistors are ordinary 1% parts; their accuracy isn’t critical. The A>B output is positive relative to B>A if resistor/diode A is greater than B, and vice versa.
Matchmaker’s A1a and U1bc generate a precisely symmetrical square-wave excitation (timed by the 100-Hz multivibrator A1b) to measure the mismatch between test parts A and B. The resulting difference signal is boosted by preamp A1d in switchable gains of 1, 10, or 100, synchronously demodulated by U1a, then filtered to DC with a final calibrating gain of 16x by A1c.
The key to Matchmaker’s precision is the Kelvin-style connection topology of the CMOS switches U1b and U1c. U1b, because it carries no significant load current (nothing but the pA-level input bias current of A1a), introduces only nanovolts of error. Meanwhile, the resulting sensing of excitation voltage level at the parts being matched, and the cancellation of U1c’s max 200-Ω on-resistance, is therefore exact, limited only by A1a’s gain-bandwidth at 100 Hz. Since the op-amp’s gain bandwidth (GBW) is ~10 MHz, the effective residual resistance is only 200 Ω/10⁵ = 2 mΩ. Meanwhile, the 10-Ω max differential between the MAX4053 switches (the most critical parameter for excitation symmetry) is reduced to a usually insignificant 10 Ω/10⁵ = 100 µΩ. The component lead wire and PWB trace resistance will contribute (much) more unless the layout is carefully done.
Matching resistors to better than ±1 ppm = 0.0001% is therefore possible. No ppm level references (voltage or resistance) need apply.
Output voltage as a function of Ra/Rb % mismatch is maximized when load resistor R1 is (at least approximately) equal to the nominal resistance of the resistances being matched. But because of the inflection maximum at R1/Rab = 1, that equality isn’t at all critical, as shown in Figure 2.
Figure 2 The output level (mV per 1% mismatch at Gain = 1) is not sensitive to the exact value of R1/Rab.
R1/Rab thus can vary from 1.0 by ±20% without disturbing mismatch gain by much more than 1%. However, R1 should not be less than ~50 Ω in order to stay within A1 and U1 current ratings.
Matchmaker also works to match diodes. In that case, R1 should be chosen to mirror the current levels expected in the application, R1 = 2 V / Iapp.
Due to what I attribute to an absolute freak of nature (for which I take no credit whatsoever), the output mV per 1% mismatch of forward diode voltage drop is (nearly) the same as for resistors, at least for silicon junction diodes.
Actually, there’s a simple explanation for the “freak of nature.” It’s just that a 1% differential between legs of the 2:1 Ra/Rb/R1 voltage divider is attenuated by 50% to become 1.25 V/100/2 = 6.25 mV, and 6.25 mV just happens to be very close to 1% of a silicon junction diode’s ~600 mV forward drop.
So, the freakiness really isn’t all that freaky, but it is serendipitous! Matchmaker also works with Schottky diodes, but due to their smaller forward drop, it will underreport their percent mismatch by about a factor of 3.
Due to the temperature sensitivity of diodes, it’s a good idea to handle them with thermally insulating gloves. This will save time and frustration waiting for them to equilibrate, not to mention possible, outright erroneous results. In fact, considering the possibility of misleading thermal effects (accidental dissimilar metal thermocouple junctions, etc.), it’s probably not a bad idea to wear gloves when handling resistors, too!
Happy ppm match making!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Circuits help get or verify matched resistors
- Design Notes: Matched Resistor Networks for Precision Amplifier Applications
- The Effects of Resistor Matching on Common Mode Rejection
- Peculiar precision full-wave rectifier needs no matched resistors
The post Matchmaker appeared first on EDN.
RISC-V basics: The truth about custom extensions

The era of universal processor architectures is giving way to workload-specific designs optimized for performance, power, and scalability. As data-centric applications in artificial intelligence (AI), edge computing, automotive, and industrial markets continue to expand, they are driving a fundamental shift in processor design.
Arguably, chipmakers can no longer rely on generalized architectures to meet the demands of these specialized markets. Open ecosystems like RISC-V empower silicon developers to craft custom solutions that deliver both innovation and design efficiency, unlocking new opportunities across diverse applications.
RISC-V, an open-source instruction set architecture (ISA), is rapidly gaining momentum for its extensibility and royalty-free licensing. According to Rich Wawrzyniak, principal analyst at The SHD Group, “RISC-V SoC shipments are projected to grow at nearly 47% CAGR, capturing close to 35% of the global market by 2030.” This growth highlights why SoC designers are increasingly embracing architectures that offer greater flexibility and specialization.
RISC-V ISA customization trade-offs
The open nature of the RISC-V ISA has sparked widespread interest across the semiconductor industry, especially for its promise of customization. Unlike fixed-function ISAs, RISC-V enables designers to tailor processors to specific workloads. For companies building domain-specific chips for AI, automotive, or edge computing, this level of control can unlock significant competitive advantages in optimizing performance, power efficiency, and silicon area.
But customization is not a free lunch.
Adding custom extensions means taking ownership of both hardware design and the corresponding software toolchain. This includes compiler and simulation support, debug infrastructure, and potentially even operating system integration. While RISC-V’s modular structure makes customization easier than legacy ISAs, it still demands architectural consideration and robust development and verification workflows to ensure consistency and correctness.
In many cases, customization involves additional considerations. When general-purpose processing and compatibility with existing software libraries, security frameworks, and third-party ecosystems are paramount, excessive or non-standard extensions can introduce fragmentation. Design teams can mitigate this risk by aligning with RISC-V’s ratified extensions and profiles, for instance RVA23, and then applying targeted customizations where appropriate.
When applied strategically, RISC-V customization becomes a powerful lever that yields substantial ROI by rewarding thoughtful architecture, disciplined engineering, and clear product objectives. Some companies devote full design and software teams to developing strategic extensions, while others leverage automated toolchains and hardware-software co-design methodologies to mitigate risks, accelerate time to market, and capture most of the benefits.
For teams that can navigate the trade-offs well, RISC-V customization opens the door to processors truly optimized for their workloads and to massive product differentiation.
Real-world use cases
Customized RISC-V cores are already deployed across the industry. For example, Nvidia’s VP of Multimedia Arch/ASIC, Frans Sijstermans, described the replacement of their internal Falcon MCU with customized RISC-V hardware and software developed in-house, now being deployed across a variety of applications.
One notable customization is support for 2-KB pages in addition to the standard 4-KB pages, which yielded a 50% performance improvement for legacy code. Page-size changes like this are a clear example of modifications with system-level impact, from processor hardware to operating-system memory management.
Figure 1 The view of Nvidia’s RISC-V cores and extensions taken from the keynote “RISC-V at Nvidia: One Architecture, Dozens of Applications, Billions of Processors.”
Another commercial example is Meta’s MTIA accelerator, which extends a RISC-V core with application-specific instructions, custom interfaces, and specialized register files. While Meta has not published the full toolchain flow, the scope of integration implies an internally managed co-design methodology with tightly coupled hardware and software development.
Given the complexity of the modifications, the design likely leveraged automated flows capable of regenerating RTL, compiler backends, simulators, and intrinsics to maintain toolchain consistency. This reflects a broader trend of engineering teams adopting user-driven, in-house customization workflows that support rapid iteration and domain-specific optimization.
Figure 2 Meta’s MTIA accelerator integrates Andes RISC-V cores for optimized AI performance. Source: MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems, A. Firoozshahian, et al.
Startup company Rain.ai illustrates that even small teams can benefit from RISC-V customization via automated flows. Their process begins with input files that define operands, vector register inputs and outputs, vector unit behavior, and a C-language semantic description. These instructions are pipelined, multi-cycle, and designed to align with the stylistic and semantic properties of standard vector extensions.
The input files are extended with a minimal hardware implementation and processed through a flow that generates updated core RTL, simulation models, compiler support, and intrinsic functions. This enables developers to quickly update kernels, compile and run them on simulation models, and gather feedback on performance, utilization, and cycle count.
By lowering the barrier to custom instruction development, this process supports a hardware-software co-design methodology, making it easier to explore and refine different usage models. This approach was used to integrate their matrix multiply, sigmoid, and SiLU acceleration in the hardware and software flows, yielding an 80% reduction in power and a 7x–10x increase in throughput compared to the standard vector processing unit.
Figure 3 Here is an example of a hardware/software co‑design flow for developing and optimizing custom instructions. Source: Andes Technology
Tools supporting RISC-V customization
To support these holistic workflows, automation tools are emerging to streamline customization and integration. For example, Andes Technology provides silicon-proven IP and a comprehensive suite of design tools to accelerate development.
Figure 4 ACE and CoPilot simplify the development and integration of custom instructions. Source: Andes Technology
Andes Custom Extension (ACE) framework and CoPilot toolchain offer a streamlined path to RISC-V customization. ACE enables developers to define custom instructions optimized for specific workloads, supporting advanced features such as pipelining, background execution, custom registers, and memory structures.
CoPilot automates the integration process by regenerating the entire hardware and software stack, including RTL, compiler, debugger, and simulator, based on the defined extensions. This reduces manual effort, ensures alignment between hardware and software, and accelerates development cycles, making custom RISC-V design practical for a broad range of teams and applications.
RISC-V’s open ISA broke down barriers to processor innovation, enabling developers to move beyond the constraints of proprietary architectures. Today, advanced frameworks and automation tools empower even lean teams to take advantage of hardware-software co-design with RISC-V.
For design teams that approach customization with discipline, RISC-V offers a rare opportunity: to shape processors around the needs of the application, not the other way around. The companies that succeed in mastering this co-design approach won’t just keep pace, they’ll define the next era of processor innovation.
Marc Evans, director of Business Development & Marketing at Andes Technology, brings deep expertise in IP, SoC architecture, CPU/DSP design, and the RISC-V ecosystem. His career spans hands-on processor and memory system architecture to strategic leadership roles driving the adoption of new IP for emerging applications at leading semiconductor companies.
Related Content
- Top five fallacies about RISC-V
- Startups Help RISC-V Reshape Computer Architecture
- Accelerating RISC-V development with network-on-chip IP
- Why RISC-V is a viable option for safety-critical applications
- Codasip: Toward Custom, Safe, Secure RISC-V Compute Cores
The post RISC-V basics: The truth about custom extensions appeared first on EDN.
Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization

As long-time readers may already realize from my repeat case study coverage of the topic, one aspect of the tech industry that I find particularly interesting is how suppliers react to the inevitable maturation of a given technology. Seeing all the cool new stuff get launched each year—and forecasting whether any of it will get to the “peak of inflated expectations” region of Gartner’s hype cycle, far from the “trough of disillusionment” beyond—is all well and good:
But what happens when a technology (and products based on it) makes it through the “slope of enlightenment” and ends up at the “plateau of productivity”? A sizeable mature market inevitably attracts additional market participants: often great news for consumers, not so much for suppliers. How do the new entrants differentiate themselves from existing “players” with already established brand names, and without just dropping prices, resulting in a “race to the bottom” that fiscally benefits no one? And how do those existing “players” combat these new entrants, leveraging (hopefully positive) existing consumer awareness and sustained innovation to ensure that ongoing profits counterbalance upfront R&D and market-cultivation expenses?
The vinyl example

I’ve discussed such situations in the past, for example, with Bluetooth audio adapters and LED-based illumination sources. The situation I’m covering today, however, is if anything even more complicated. It involves a technology—the phonograph record—that in the not-too-distant past was well past the “plateau of productivity” and in a “death spiral”, the victim of more modern music-delivery alternatives such as optical discs and, later, online downloads and streams. But today? Just last night I was reading the latest issue of Stereophile Magazine (in print, by the way, speaking of “left for dead” technologies with recent resurgences), which included analysis of both Goldman Sachs’ most recent 2025 “Music In the Air” market report (as noted elsewhere, the most recent report available online as I write this is from 2024) and others’ reaction to it:
Analyses of the latest Goldman Sachs “Music in the Air” report show how the same news can be interpreted in different ways. Billboard sees it in a negative light: “Goldman Sachs Lowers Global Music Industry Growth Forecast, Wiping Out $2.5 Billion.” Music Business Worldwide is more measured, writing, “Despite revising some forecasts downward following a slower-than-expected 2024, Goldman Sachs maintains a positive long-term outlook for the music industry.”
The Billboard article is good, but the headline is clickbait. The Goldman Sachs report didn’t wipe out $2.5 billion. Rather, it reported a less optimistic forecast, projecting lower future revenues than last year’s report projected: The value wiped out was never real.
Stereophile editor Jim Austin continues:
Most of this [2024] growth was from streaming. Worldwide streaming revenue exceeded $20 billion for the first time, reaching $20.4 billion. Music Business Worldwide points out that that’s a bigger number than total worldwide music revenue, from all sources, for all years 2003–2020. Streaming subscription revenue was the main source of growth, rising by 9.5% year over year. That reflects a 10.6% increase in worldwide subscribers, to 752 million.
But here’s the key takeaway (bolded emphasis mine):
Meanwhile, following an excellent 2023 for physical media—it was up that year by 14.5%—trade revenue from physical media fell by 3.1% last year. Physical media represented just 16% of trade revenues in 2024, down 2% from the previous year. Physical-media revenue in Asia—a last stronghold of music you can touch—also fell. What about vinyl records? Trade revenue from vinyl records rose by 4.4% year over year.
Now combine this factoid with another one I recently came across, from a presentation made by market research firm Luminate Data at the 2023 SXSW conference:
The resurgence of vinyl sales among music fans has been going on for some time now, but the trend marked a major milestone in 2022. According to data recently released by the Recording Industry Association of America (RIAA), annual vinyl sales exceeded CD sales in the US last year for the first time since 1987.
Consumers bought 41.3 million vinyl records in the States in 2022, compared to 33.4 million compact discs…Revenues from vinyl jumped 17.2% YoY, to USD $1.2 billion in 2022, while revenues from CDs fell 17.6%, to $483 million.
Now, again, the “money quote” (bolded emphasis again mine):
In the company’s [Luminate Data’s] recent “Top Entertainment Trends for 2023” report, Luminate found that “50% of consumers who have bought vinyl in the past 12 months own a record player, compared to 15% among music listeners overall.” Naturally, this also means that 50% of vinyl buyers don’t own a record player.
Note that this isn’t saying that half of the records sold went to non-turntable-owners. I suspect (and admittedly exemplify) that turntable owners represent a significant percentage of total record unit sales (and profits, for that matter). But it’s mind-boggling to me that half the people who bought at least one record don’t even own a turntable to play it on. What’s going on?
Not owning a turntable negates most (at least) of the rationale I proffered in one of last month’s posts for the other half of us:
There’s something fundamentally tactile-titillating and otherwise sensory-pleasing (at least to a memory-filled “old timer” like me) to carefully pulling an LP out of its sleeve, running a fluid-augmented antistatic velvet brush over it, lowering the stylus onto the disc and then sitting back to audition the results while perusing the album cover’s contents.
And of course, some of the acquisition activity for non-turntable-owners ends up turning into gifts for the other half of us. But there’s still that “perusing the album cover’s content” angle, more generally representative of “collector” activity. It’s one of the factors that I’ve lumped into the following broad characteristic categories, curated after my reconnection with vinyl and my ensuing observations of how musicians and the record labels that represent (most of) them differentiate an otherwise-generic product to maximize buyer acquisition, variant selection and (for multi-variant collection purposes) repeat-purchase iteration motivations.
Media deviations

Standard LPs (long-play records) weigh between 100 and 140 grams. Pricier “audiophile grade” pressings are thicker, therefore heavier, ranging between 180 and 220 grams. Does the added heft make any difference, aside from the subtractive impact on your bank account balance? The answer’s at-best debatable; that said, I admittedly “go thick” whenever I have a choice. Then again, I also use a stabilizer even with new LPs, so any skepticism on your part is understandable:
Thicker vinyl, one could reasonably (IMHO, at least) argue, is more immune to warping effects. Also, as with a beefier plinth (turntable base), there’s decreased likelihood of vibration originating elsewhere (the turntable’s own motor, for example, or your feet striking the floor as you walk by) transferring to and being picked up by the stylus (aka “needle”), although the turntable’s platter mat material and thickness are probably more of a factor in this regard.
That all said, “audiophile grade” discs generally are not only thicker and heavier but also more likely to be made from “virgin” versus “noisier” recycled vinyl, a grade-of-materials differential which presumably has an even greater effect on sonic quality-of-results. Don’t underestimate the perceived quality differential between two products with different hefts, either.
And speaking of perceptions versus reality, when I recently started shopping for records again, I kept coming across mentions of “Pitman”, “Terre Haute” and various locales in Germany, for example. It turns out that these refer to record pressing plant locations (New Jersey and Indiana, in the first two cases), which some folks claim deliver(ed) differing quality of results, whether in general or specifically in certain timeframes. True or false? I’m not touching this one with a ten-foot pole, aside from reiterating a repeated past observation that one’s ears and brain are prone to rationalizing decisions and transactions that one’s wallet has already made.
Content optimization

One of the first LPs I (re-)bought when I reconnected with the vinyl infatuation of my youth was a popular classic choice, Fleetwood Mac’s Rumours. As I shopped online, I came across both traditional one- and more expensive two-disc variants, the latter of which I initially assumed was a “deluxe edition” also including studio outtakes, alternate versions, live concert recordings, and the like. But, as it turned out, both options list the same 11 tracks. So, what was the difference?
Playback speed, it turned out. Supposedly, since a 45 rpm disc devotes more groove-length “real estate” to a given playback duration than its conventional 33 1/3 rpm counterpart, it’s able to encode a “richer” presentation of the music. The tradeoff, of course, is that the 45 rpm version more quickly uses up the available space on each side of an LP. Ergo, two discs instead of one.
More generally, a conventional 33 1/3 rpm pressing generally contains between 18 and 22 minutes of music per side. It’s possible to fit up to ~30 minutes of audio, both by leveraging “dead wax” space usually devoted solely to the lead-in and lead-out groove regions and by compressing the per-revolution groove spacing. That said, audio quality can suffer as a result, particularly with wide dynamic range and bass-rich source material.
The playing-time contrast between a ~40-minute max LP and a 74-80-minute max Red Book Audio CD is obvious, particularly when you also factor in the added complications of keeping the original track order intact and preventing a given track from straddling both sides (i.e., not requiring that the listener flip the record over mid-song). The original pressing of Dire Straits’ Brothers in Arms, for example, shortened two songs in comparison to their audio CD forms to enable the album to fit on one LP. Subsequent remastered and reissued versions switched to a two-LP arrangement, enabling the representation of all songs in full. Radiohead’s Hail to the Thief, another example, was single-CD but dual-LP from the start, so as not to shorten and/or drop any tracks (the band’s existing success presumably gave it added leverage in this regard).
Remastering (speaking of which) is a common approach (often in conjunction with digitization of the original studio tape content, ironic given how “analog-preferential” many audiophiles are) used to encourage consumers to both select higher-priced album variants and to upgrade their existing collections. Jimmy Page did this, for example, with the Led Zeppelin songs found on the various “greatest hits” compilations and box sets released after the band’s discontinuation, along with reissues of the original albums. Even more substantial examples of the trend are the various to-stereo remixes of original mono content from bands like the Beach Boys and Beatles.
Half-speed mastering, done for some later versions of the aforementioned Brothers in Arms, is:
A technique occasionally used when cutting the acetate lacquers from which phonograph records are produced. The cutting machine platter is run at half of the usual speed (16 2⁄3 rpm for 33 1⁄3 rpm records) while the signal to be recorded is fed to the cutting head at half of its regular playback speed. The reasons for using this technique vary, but it is generally used for improving the high-frequency response of the finished record. By halving the speed during cutting, very high frequencies that are difficult to cut become much easier to cut since they are now mid-range frequencies.
And then there’s direct metal mastering, used (for example) with my copy of Rush’s Moving Pictures. Here’s the Google AI Overview summary:
An analog audio disc mastering technique where the audio signal is directly engraved onto a metal disc, typically copper, instead of a lacquer disc used in traditional mastering. This method bypasses the need for a lacquer master and its associated plating process, allowing for the creation of stampers directly from the metal master. This results in a potentially clearer, more detailed, and brighter sound with less surface noise compared to lacquer mastering.
Packaging and other aspects of presentation

Last, but definitely not least, let’s discuss the various means by which the music content encoded on the vinyl media is presented to potential purchasers as irresistibly as possible. I’ve already mentioned the increasingly common deluxe editions and other expanded versions of albums (I’m not speaking here of multi-album box sets). Take, for example, the 25th anniversary edition of R.E.M.’s Monster, which “contains the original Monster album on the first LP, along with a second LP containing Monster, completely remixed by original producer, Scott Litt, both pressed on 180 gram vinyl. Packaging features reimagined artwork by the original cover artist, Chris Bilheimer, and new liner notes, featuring interviews from members of the band.”
The 40th anniversary remaster of Rush’s previously mentioned Moving Pictures is even more elaborate, coming in multiple “bundle” options including a 5-LP version described as follows:
The third Moving Pictures configuration will be offered as a five-LP Deluxe Edition, all of it housed in a slipcase including a single-pocket jacket for the remastered original Moving Pictures on LP 1, and two gatefold jackets for LPs 2-5 that comprise all 19 tracks from the complete, unreleased Live In YYZ 1981 concert. As noted above, all vinyl has been cut for the first time ever via half-speed Direct to Metal Mastering (DMM) on 180-gram black audiophile vinyl. Extras include a 24-page booklet with unreleased photos, [Hugh] Syme’s reimagined artwork and new illustrations, and the complete liner notes.
Both Target and Walmart also sell “exclusive vinyl” versions of albums, bundled with posters and other extras. Walmart’s “exclusive” variant of Led Zeppelin’s Physical Graffiti, for example, includes a backstage pass replica:
More generally, although records traditionally used black-color vinyl media, alternate-palette and -pattern variants are becoming increasingly popular. Take a look, for example, at Walmart’s appropriately tinted version of Amy Winehouse’s Back to Black:
You’ve gotta admit, that looks pretty cool, right?
I’m also quite taken with Target’s take on the Grateful Dead’s American Beauty:
Countless other examples exist, some attractive and others garish (IMHO, although you know the saying: “beauty is in the eye of the beholder”), eye-candy tailored for spinning on your turntable or, if you don’t have one (to my earlier factoid), displaying on your wall. That said, why Lorde and her record label extended the concept to cover a completely clear CD of her just-introduced latest album, seemingly fundamentally incompatible with the need for a reflective media substrate for laser pickup purposes, is beyond me…
This write-up admittedly ended up being much longer than I’d originally intended! To some degree, it reflects the diversity of record-centric examples that my research uncovered. But as with the Bluetooth audio adapter and LED-based illumination case studies that preceded it, I think it effectively exemplifies one industry’s attempts to remain relevant (twice, in this case!) and maximize its ROI in response to market evolutions. What do you think of the record industry’s efforts to redefine itself for the modern consumer era? And what lessons can you derive for your company’s target markets? Sound off with your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Bluetooth audio adapters and their creative developers
- LED light bulb manufacturers diversify in search of sustainable profits
- Hardware alterations: Unintended, apparent advantageous adaptations
- Vinyl vs. CD: Readers respond
- Audio myth: Vinyl better than CD?
- Vinyl vs. CD myths refuse to die
The post Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization appeared first on EDN.