Matchmaker

Precision-matched resistors, diode pairs, and bridges are generic items. But sometimes an extra critical application with extra tight tolerances (or an extra tight budget) can dictate a little (or a lot) of DIY.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1’s matchmaker circuit can help make the otherwise odious chore of searching through a batch of parts for accurately matching pairs of resistors (or diodes) quicker and a bit less taxing. Best of all, it does precision (potentially to the ppm level) matchmaking with no need for pricey precision reference components.
Here’s how it works.
Figure 1 A1a, U1b, and U1c generate precisely symmetrical excitation of the A and B parts being matched. The asterisked resistors are ordinary 1% parts; their accuracy isn’t critical. The A>B output is positive relative to B>A if resistor/diode A is greater than B, and vice versa.
Matchmaker’s A1a and U1bc generate a precisely symmetrical square-wave excitation (timed by the 100-Hz multivibrator A1b) to measure the mismatch between test parts A and B. The resulting difference signal is boosted by preamp A1d in switchable gains of 1, 10, or 100, synchronously demodulated by U1a, then filtered to DC with a final calibrating gain of 16x by A1c.
The key to Matchmaker’s precision is the Kelvin-style connection topology of the CMOS switches U1b and U1c. U1b, because it carries no significant load current (nothing but the pA-level input bias current of A1a), introduces only nanovolts of error. The excitation voltage is therefore sensed directly at the parts being matched, and the cancellation of U1c’s max 200-Ω on-resistance is essentially exact, limited only by A1a’s gain-bandwidth at 100 Hz. Since the op-amp’s gain-bandwidth product (GBW) is ~10 MHz, the loop gain at 100 Hz is ~10⁵ and the effective residual resistance is only 200 Ω/10⁵ = 2 mΩ. Meanwhile, the 10-Ω max differential between the MAX4053 switches (the most critical parameter for excitation symmetry) is reduced to a usually insignificant 10 Ω/10⁵ = 100 µΩ. Component lead-wire and PWB trace resistance will contribute (much) more unless the layout is carefully done.
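The error-suppression arithmetic can be sanity-checked in a few lines. This is a sketch that approximates loop gain as GBW divided by the excitation frequency, using the figures quoted above:

```python
# Residual switch-resistance error after feedback correction (sketch).
# Loop gain at the excitation frequency is approximated as GBW / f.
gbw_hz = 10e6     # A1a gain-bandwidth product, ~10 MHz
f_hz = 100.0      # excitation frequency, Hz

loop_gain = gbw_hz / f_hz            # ~1e5

r_on_max = 200.0     # U1c max on-resistance, ohms
r_diff_max = 10.0    # max on-resistance mismatch between switches, ohms

residual_r = r_on_max / loop_gain       # effective residual resistance
residual_diff = r_diff_max / loop_gain  # residual excitation asymmetry

print(f"{residual_r * 1e3:.1f} mOhm, {residual_diff * 1e6:.0f} uOhm")
# -> 2.0 mOhm, 100 uOhm
```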
Matching resistors to better than ±1 ppm (0.0001%) is therefore possible. No ppm-level references (voltage or resistance) need apply.
Output voltage as a function of Ra/Rb % mismatch is maximized when load resistor R1 is (at least approximately) equal to the nominal resistance of the parts being matched. But because the maximum at R1/Rab = 1 is broad and flat, that equality isn’t at all critical, as shown in Figure 2.
Figure 2 The output level (mV per 1% mismatch at Gain = 1) is not sensitive to the exact value of R1/Rab.
R1/Rab thus can vary from 1.0 by ±20% without disturbing mismatch gain by much more than 1%. However, R1 should not be less than ~50 Ω in order to stay within A1 and U1 current ratings.
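The flatness of that maximum is easy to model. For a divider whose output tracks R1/(Ra+R1), the response to a small fractional change in Ra is proportional to S(k) = k/(1+k)², with k = R1/Rab — a sketch of the proportionality, not the circuit’s exact transfer function:

```python
# Mismatch-gain sensitivity of the Ra/R1 divider vs. the R1/Rab ratio.
# S(k) = k / (1 + k)^2 peaks at k = 1, and the peak is broad.
def sensitivity(k):
    return k / (1 + k) ** 2

peak = sensitivity(1.0)   # 0.25 at R1/Rab = 1
for k in (0.8, 1.0, 1.2):
    loss_pct = 100 * (1 - sensitivity(k) / peak)
    print(f"R1/Rab = {k:.1f}: mismatch gain down {loss_pct:.2f}%")
# A +/-20% error in R1 costs only about 1% of mismatch gain.
```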
Matchmaker also works to match diodes. In that case, R1 should be chosen to mirror the current levels expected in the application: R1 = 2 V/Iapp.
Due to what I attribute to an absolute freak of nature (for which I take no credit whatsoever), the output mV per 1% mismatch of forward diode voltage drop is (nearly) the same as for resistors, at least for silicon junction diodes.
Actually, there’s a simple explanation for the “freak of nature.” It’s just that a 1% differential between legs of the 2:1 Ra/Rb/R1 voltage divider is attenuated by 50% to become 1.25 V/100/2 = 6.25 mV, and 6.25 mV just happens to be very close to 1% of a silicon junction diode’s ~600 mV forward drop.
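That arithmetic is easily checked (a sketch using the 1.25 V figure above and a ~600 mV silicon forward drop):

```python
# Why 1% resistor mismatch and 1% diode mismatch read almost alike.
v_exc = 1.25   # divider excitation, volts (per the article)

resistor_mv = v_exc / 100 / 2 * 1e3   # 1% mismatch, attenuated 50%
diode_mv = 0.600 * 0.01 * 1e3         # 1% of a ~600 mV forward drop

print(f"{resistor_mv:.2f} mV vs {diode_mv:.2f} mV")  # 6.25 mV vs 6.00 mV
```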
So, the freakiness really isn’t all that freaky, but it is serendipitous! Matchmaker also works with Schottky diodes, but due to their smaller forward drop, it will underreport their percent mismatch by about a factor of 3.
Due to the temperature sensitivity of diodes, it’s a good idea to handle them with thermally insulating gloves. This will save time and frustration waiting for them to equilibrate, not to mention possible, outright erroneous results. In fact, considering the possibility of misleading thermal effects (accidental dissimilar metal thermocouple junctions, etc.), it’s probably not a bad idea to wear gloves when handling resistors, too!
Happy ppm matchmaking!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Circuits help get or verify matched resistors
- Design Notes: Matched Resistor Networks for Precision Amplifier Applications
- The Effects of Resistor Matching on Common Mode Rejection
- Peculiar precision full-wave rectifier needs no matched resistors
The post Matchmaker appeared first on EDN.
RISC-V basics: The truth about custom extensions

The era of universal processor architectures is giving way to workload-specific designs optimized for performance, power, and scalability. As data-centric applications in artificial intelligence (AI), edge computing, automotive, and industrial markets continue to expand, they are driving a fundamental shift in processor design.
Arguably, chipmakers can no longer rely on generalized architectures to meet the demands of these specialized markets. Open ecosystems like RISC-V empower silicon developers to craft custom solutions that deliver both innovation and design efficiency, unlocking new opportunities across diverse applications.
RISC-V, an open-source instruction set architecture (ISA), is rapidly gaining momentum for its extensibility and royalty-free licensing. According to Rich Wawrzyniak, principal analyst at The SHD Group, “RISC-V SoC shipments are projected to grow at nearly 47% CAGR, capturing close to 35% of the global market by 2030.” This growth highlights why SoC designers are increasingly embracing architectures that offer greater flexibility and specialization.
RISC-V ISA customization trade-offs
The open nature of the RISC-V ISA has sparked widespread interest across the semiconductor industry, especially for its promise of customization. Unlike fixed-function ISAs, RISC-V enables designers to tailor processors to specific workloads. For companies building domain-specific chips for AI, automotive, or edge computing, this level of control can unlock significant competitive advantages in optimizing performance, power efficiency, and silicon area.
But customization is not a free lunch.
Adding custom extensions means taking ownership of both hardware design and the corresponding software toolchain. This includes compiler and simulation support, debug infrastructure, and potentially even operating system integration. While RISC-V’s modular structure makes customization easier than legacy ISAs, it still demands architectural consideration and robust development and verification workflows to ensure consistency and correctness.
In many cases, customization involves additional considerations. When general-purpose processing and compatibility with existing software libraries, security frameworks, and third-party ecosystems are paramount, excessive or non-standard extensions can introduce fragmentation. Design teams can mitigate this risk by aligning with RISC-V’s ratified extensions and profiles, for instance RVA23, and then applying targeted customizations where appropriate.
When applied strategically, RISC-V customization becomes a powerful lever that yields substantial ROI by rewarding thoughtful architecture, disciplined engineering, and clear product objectives. Some companies devote full design and software teams to developing strategic extensions, while others leverage automated toolchains and hardware-software co-design methodologies to mitigate risks, accelerate time to market, and capture most of the benefits.
For teams that can navigate the trade-offs well, RISC-V customization opens the door to processors truly optimized for their workloads and to massive product differentiation.
Real-world use cases
Customized RISC-V cores are already deployed across the industry. For example, Nvidia’s VP of Multimedia Arch/ASIC, Frans Sijstermans, described the replacement of their internal Falcon MCU with customized RISC-V hardware and software developed in-house, now being deployed across a variety of applications.
One notable customization is support for 2-KB pages alongside the standard 4-KB pages, which yielded a 50% performance improvement for legacy code. Page-size changes like this are a clear example of modifications with system-level impact, from processor hardware to operating-system memory management.
Figure 1 The view of Nvidia’s RISC-V cores and extensions taken from the keynote “RISC-V at Nvidia: One Architecture, Dozens of Applications, Billions of Processors.”
Another commercial example is Meta’s MTIA accelerator, which extends a RISC-V core with application-specific instructions, custom interfaces, and specialized register files. While Meta has not published the full toolchain flow, the scope of integration implies an internally managed co-design methodology with tightly coupled hardware and software development.
Given the complexity of the modifications, the design likely leveraged automated flows capable of regenerating RTL, compiler backends, simulators, and intrinsics to maintain toolchain consistency. This reflects a broader trend of engineering teams adopting user-driven, in-house customization workflows that support rapid iteration and domain-specific optimization.
Figure 2 Meta’s MTIA accelerator integrates Andes RISC-V cores for optimized AI performance. Source: MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems, A. Firoozshahian, et al.
Startup company Rain.ai illustrates that even small teams can benefit from RISC-V customization via automated flows. Their process begins with input files that define operands, vector register inputs and outputs, vector unit behavior, and a C-language semantic description. These instructions are pipelined, multi-cycle, and designed to align with the stylistic and semantic properties of standard vector extensions.
The input files are extended with a minimal hardware implementation and processed through a flow that generates updated core RTL, simulation models, compiler support, and intrinsic functions. This enables developers to quickly update kernels, compile and run them on simulation models, and gather feedback on performance, utilization, and cycle count.
By lowering the barrier to custom instruction development, this process supports a hardware-software co-design methodology, making it easier to explore and refine different usage models. This approach was used to integrate their matrix multiply, sigmoid, and SiLU acceleration in the hardware and software flows, yielding an 80% reduction in power and a 7x–10x increase in throughput compared to the standard vector processing unit.
Figure 3 Here is an example of a hardware/software co‑design flow for developing and optimizing custom instructions. Source: Andes Technology
Tools supporting RISC-V customization
To support these holistic workflows, automation tools are emerging to streamline customization and integration. For example, Andes Technology provides silicon-proven IP and a comprehensive suite of design tools to accelerate development.
Figure 4 ACE and CoPilot simplify the development and integration of custom instructions. Source: Andes Technology
Andes Custom Extension (ACE) framework and CoPilot toolchain offer a streamlined path to RISC-V customization. ACE enables developers to define custom instructions optimized for specific workloads, supporting advanced features such as pipelining, background execution, custom registers, and memory structures.
CoPilot automates the integration process by regenerating the entire hardware and software stack, including RTL, compiler, debugger, and simulator, based on the defined extensions. This reduces manual effort, ensures alignment between hardware and software, and accelerates development cycles, making custom RISC-V design practical for a broad range of teams and applications.
RISC-V’s open ISA broke down barriers to processor innovation, enabling developers to move beyond the constraints of proprietary architectures. Today, advanced frameworks and automation tools empower even lean teams to take advantage of hardware-software co-design with RISC-V.
For design teams that approach customization with discipline, RISC-V offers a rare opportunity: to shape processors around the needs of the application, not the other way around. The companies that succeed in mastering this co-design approach won’t just keep pace, they’ll define the next era of processor innovation.
Marc Evans, director of Business Development & Marketing at Andes Technology, brings deep expertise in IP, SoC architecture, CPU/DSP design, and the RISC-V ecosystem. His career spans hands-on processor and memory system architecture to strategic leadership roles driving the adoption of new IP for emerging applications at leading semiconductor companies.
Related Content
- Top five fallacies about RISC-V
- Startups Help RISC-V Reshape Computer Architecture
- Accelerating RISC-V development with network-on-chip IP
- Why RISC-V is a viable option for safety-critical applications
- Codasip: Toward Custom, Safe, Secure RISC-V Compute Cores
The post RISC-V basics: The truth about custom extensions appeared first on EDN.
Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization

As long-time readers may already realize from my repeat case study coverage of the topic, one aspect of the tech industry that I find particularly interesting is how suppliers react to the inevitable maturation of a given technology. Seeing all the cool new stuff get launched each year—and forecasting whether any of it will get to the “peak of inflated expectations” region of Gartner’s hype cycle, far from the “trough of disillusionment” beyond—is all well and good:
But what happens when a technology (and products based on it) makes it through the “slope of enlightenment” and ends up at the “plateau of productivity”? A sizeable mature market inevitably attracts additional participants: often great news for consumers, not so much for suppliers. How do new entrants differentiate themselves from existing “players” with already established brand names, without just dropping prices and triggering a “race to the bottom” that fiscally benefits no one? And how do those existing “players” combat the new entrants, leveraging (hopefully positive) existing consumer awareness and sustaining innovation to ensure that ongoing profits counterbalance upfront R&D and market-cultivation expenses?
The vinyl example
I’ve discussed such situations in the past, for example, with Bluetooth audio adapters and LED-based illumination sources. The situation I’m covering today, however, is if anything even more complicated. It involves a technology—the phonograph record—that in the not-too-distant past was well past the “plateau of productivity” and in a “death spiral”, the victim of more modern music-delivery alternatives such as optical discs and, later, online downloads and streams. But today? Just last night I was reading the latest issue of Stereophile Magazine (in print, by the way, speaking of “left for dead” technologies with recent resurgences), which included analysis of both Goldman Sachs’ most recent 2025 “Music In the Air” market report (as noted elsewhere, the most recent report available online as I write this is from 2024) and others’ reaction to it:
Analyses of the latest Goldman Sachs “Music in the Air” report show how the same news can be interpreted in different ways. Billboard sees it in a negative light: “Goldman Sachs Lowers Global Music Industry Growth Forecast, Wiping Out $2.5 Billion.” Music Business Worldwide is more measured, writing, “Despite revising some forecasts downward following a slower-than-expected 2024, Goldman Sachs maintains a positive long-term outlook for the music industry.”
The Billboard article is good, but the headline is clickbait. The Goldman Sachs report didn’t wipe out $2.5 billion. Rather, it reported a less optimistic forecast, projecting lower future revenues than last year’s report projected: The value wiped out was never real.
Stereophile editor Jim Austin continues:
Most of this [2024] growth was from streaming. Worldwide streaming revenue exceeded $20 billion for the first time, reaching $20.4 billion. Music Business Worldwide points out that that’s a bigger number than total worldwide music revenue, from all sources, for all years 2003–2020. Streaming subscription revenue was the main source of growth, rising by 9.5% year over year. That reflects a 10.6% increase in worldwide subscribers, to 752 million.
But here’s the key takeaway (bolded emphasis mine):
Meanwhile, following an excellent 2023 for physical media—it was up that year by 14.5%—trade revenue from physical media fell by 3.1% last year. Physical media represented just 16% of trade revenues in 2024, down 2% from the previous year. Physical-media revenue in Asia—a last stronghold of music you can touch—also fell. What about vinyl records? Trade revenue from vinyl records rose by 4.4% year over year.
Now combine this factoid with another one I recently came across, from a presentation made by market research firm Luminate Data at the 2023 SXSW conference:
The resurgence of vinyl sales among music fans has been going on for some time now, but the trend marked a major milestone in 2022. According to data recently released by the Recording Industry Association of America (RIAA), annual vinyl sales exceeded CD sales in the US last year for the first time since 1987.
Consumers bought 41.3 million vinyl records in the States in 2022, compared to 33.4 million compact discs…Revenues from vinyl jumped 17.2% YoY, to USD $1.2 billion in 2022, while revenues from CDs fell 17.6%, to $483 million.
Now, again, the “money quote” (bolded emphasis again mine):
In the company’s [Luminate Data’s] recent “Top Entertainment Trends for 2023” report, Luminate found that “50% of consumers who have bought vinyl in the past 12 months own a record player, compared to 15% among music listeners overall.” Naturally, this also means that 50% of vinyl buyers don’t own a record player.
Note that this isn’t saying that half of the records sold went to non-turntable-owners. I suspect (and admittedly exemplify) that turntable owners represent a significant percentage of total record unit sales (and profits, for that matter). But it’s mind-boggling to me that half the people who bought at least one record don’t even own a turntable to play it on. What’s going on?
Not owning a turntable obviates most (at least) of the motivation I proffered in one of last month’s posts for the other half of us:
There’s something fundamentally tactile-titillating and otherwise sensory-pleasing (at least to a memory-filled “old timer” like me) to carefully pulling an LP out of its sleeve, running a fluid-augmented antistatic velvet brush over it, lowering the stylus onto the disc and then sitting back to audition the results while perusing the album cover’s contents.
And of course, some of the acquisition activity for non-turntable-owners ends up turning into gifts for the other half of us. But there’s still that “perusing the album cover’s content” angle, more generally representative of “collector” activity. It’s one of the factors that I’ve lumped into the following broad characteristic categories, curated after my reconnection with vinyl and my ensuing observations of how musicians and the record labels that represent (most of) them differentiate an otherwise-generic product to maximize buyer acquisition, variant selection and (for multi-variant collection purposes) repeat-purchase iteration motivations.
Media deviations
Standard LPs (long-play records) weigh between 100 and 140 grams. Pricier “audiophile grade” pressings are thicker, therefore heavier, ranging between 180 and 220 grams. Does the added heft make any difference, aside from the subtractive impact on your bank account balance? The answer’s at-best debatable; that said, I admittedly “go thick” whenever I have a choice. Then again, I also use a stabilizer even with new LPs, so any skepticism on your part is understandable:
Thicker vinyl, one could reasonably (IMHO, at least) argue, is more immune to warping effects. Also, as with a beefier plinth (turntable base), there’s decreased likelihood of vibration originating elsewhere (the turntable’s own motor, for example, or your feet striking the floor as you walk by) transferring to and being picked up by the stylus (aka “needle”), although the turntable’s platter mat material and thickness are probably more of a factor in this regard.
That all said, “audiophile grade” discs generally are not only thicker and heavier but also more likely to be made from “virgin” versus “noisier” recycled vinyl, a grade-of-materials differential which presumably has an even greater effect on sonic quality-of-results. Don’t underestimate the perceived quality differential between two products with different hefts, either.
And speaking of perceptions versus reality, when I recently started shopping for records again, I kept coming across mentions of “Pitman”, “Terre Haute” and various locales in Germany, for example. It turns out that these refer to record pressing plant locations (New Jersey and Indiana, in the first two cases), which some folks claim deliver(ed) differing quality of results, whether in general or specifically in certain timeframes. True or false? I’m not touching this one with a ten-foot pole, aside from reiterating a repeated past observation that one’s ears and brain are prone to rationalizing decisions and transactions that one’s wallet has already made.
Content optimization
One of the first LPs I (re-)bought when I reconnected with the vinyl infatuation of my youth was a popular classic choice, Fleetwood Mac’s Rumours. As I shopped online, I came across both the traditional one-disc and a more expensive two-disc variant, the latter of which I initially assumed was a “deluxe edition” also including studio outtakes, alternate versions, live concert recordings, and the like. But, as it turned out, both options list the same 11 tracks. So, what was the difference?
Playback speed, it turned out. Supposedly, since a 45 rpm disc devotes more groove-length “real estate” to a given playback duration than its conventional 33 1/3 rpm counterpart, it’s able to encode a “richer” presentation of the music. The tradeoff, of course, is that the 45 rpm version more quickly uses up the available space on each side of an LP. Ergo, two discs instead of one.
More generally, a conventional 33 1/3 rpm pressing generally contains between 18 and 22 minutes of music per side. It’s possible to fit up to ~30 minutes of audio, both by leveraging “dead wax” space usually devoted solely to the lead-in and lead-out groove regions and by compressing the per-revolution groove spacing. That said, audio quality can suffer as a result, particularly with wide dynamic range and bass-rich source material.
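The speed/capacity trade-off is simple proportionality: for a given groove pitch and recorded band area, playing time scales inversely with rotational speed. A sketch with illustrative numbers:

```python
# LP side capacity vs. rotational speed (illustrative, not a spec).
BASELINE_RPM = 100.0 / 3.0    # 33 1/3 rpm

def side_minutes(rpm, minutes_at_baseline=22.0):
    """Playing time scales inversely with speed for a fixed groove pitch."""
    return minutes_at_baseline * BASELINE_RPM / rpm

print(f"33 1/3 rpm: {side_minutes(BASELINE_RPM):.1f} min/side")
print(f"45 rpm:     {side_minutes(45.0):.1f} min/side")
# A ~40-minute album no longer fits on one disc at 45 rpm.
```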
The playing-time contrast between a ~40-minute max LP and a 74–80-minute Red Book audio CD is obvious, particularly when you also factor in the added complications of keeping the original track order intact and preventing a given track from straddling both sides (i.e., not requiring the listener to flip the record over mid-song). The original pressing of Dire Straits’ Brothers in Arms, for example, shortened two songs relative to their audio-CD forms so the album would fit on one LP. Subsequent remastered reissues switched to a two-LP arrangement, presenting all songs in full. Radiohead’s Hail to the Thief, another example, was single-CD but dual-LP from the start, so as not to shorten and/or drop any tracks (the band’s existing success presumably gave it added leverage in this regard).
Remastering (speaking of which) is a common approach (often in conjunction with digitization of the original studio tape content, ironic given how “analog-preferential” many audiophiles are) used to encourage consumers to both select higher-priced album variants and to upgrade their existing collections. Jimmy Page did this, for example, with the Led Zeppelin songs found on the various “greatest hits” compilations and box sets released after the band’s discontinuation, along with reissues of the original albums. Even more substantial examples of the trend are the various to-stereo remixes of original mono content from bands like the Beach Boys and Beatles.
Half-speed mastering, done for some later versions of the aforementioned Brothers in Arms, is:
A technique occasionally used when cutting the acetate lacquers from which phonograph records are produced. The cutting machine platter is run at half of the usual speed (16 2⁄3 rpm for 33 1⁄3 rpm records) while the signal to be recorded is fed to the cutting head at half of its regular playback speed. The reasons for using this technique vary, but it is generally used for improving the high-frequency response of the finished record. By halving the speed during cutting, very high frequencies that are difficult to cut become much easier to cut since they are now mid-range frequencies.
And then there’s direct metal mastering, used (for example) with my copy of Rush’s Moving Pictures. Here’s the Google AI Overview summary:
An analog audio disc mastering technique where the audio signal is directly engraved onto a metal disc, typically copper, instead of a lacquer disc used in traditional mastering. This method bypasses the need for a lacquer master and its associated plating process, allowing for the creation of stampers directly from the metal master. This results in a potentially clearer, more detailed, and brighter sound with less surface noise compared to lacquer mastering.
Packaging and other aspects of presentation
Last, but definitely not least, let’s discuss the various means by which the music content encoded on the vinyl media is presented to potential purchasers as irresistibly as possible. I’ve already mentioned the increasingly common deluxe editions and other expanded versions of albums (I’m not speaking here of multi-album box sets). Take, for example, the 25th anniversary edition of R.E.M.’s Monster, which “contains the original Monster album on the first LP, along with a second LP containing Monster, completely remixed by original producer, Scott Litt, both pressed on 180 gram vinyl. Packaging features reimagined artwork by the original cover artist, Chris Bilheimer, and new liner notes, featuring interviews from members of the band.”
The 40th anniversary remaster of Rush’s previously mentioned Moving Pictures is even more elaborate, coming in multiple “bundle” options including a 5-LP version described as follows:
The third Moving Pictures configuration will be offered as a five-LP Deluxe Edition, all of it housed in a slipcase including a single-pocket jacket for the remastered original Moving Pictures on LP 1, and two gatefold jackets for LPs 2-5 that comprise all 19 tracks from the complete, unreleased Live In YYZ 1981 concert. As noted above, all vinyl has been cut for the first time ever via half-speed Direct to Metal Mastering (DMM) on 180-gram black audiophile vinyl. Extras include a 24-page booklet with unreleased photos, [Hugh] Syme’s reimagined artwork and new illustrations, and the complete liner notes.
Both Target and Walmart also sell “exclusive vinyl” versions of albums, bundled with posters and other extras. Walmart’s “exclusive” variant of Led Zeppelin’s Physical Graffiti, for example, includes a backstage pass replica:
More generally, although records traditionally used black-color vinyl media, alternate-palette and -pattern variants are becoming increasingly popular. Take a look, for example, at Walmart’s appropriately tinted version of Amy Winehouse’s Back to Black:
You’ve gotta admit, that looks pretty cool, right?
I’m also quite taken with Target’s take on the Grateful Dead’s American Beauty:
Countless other examples exist, some attractive and others garish (IMHO, although you know the saying: “beauty is in the eye of the beholder”), eye-candy tailored for spinning on your turntable or, if you don’t have one (to my earlier factoid), displaying on your wall. That said, why Lorde and her record label extended the concept to cover a completely clear CD of her most recent just-introduced album, seemingly fundamentally incompatible with the need for a reflective media substrate for laser pickup purposes, is beyond me…
This write-up admittedly ended up being much longer than I’d originally intended! To some degree, it reflects the diversity of record-centric examples that my research uncovered. But as with the Bluetooth audio adapter and LED-based illumination case studies that preceded it, I think it effectively exemplifies one industry’s attempts to remain relevant (twice, in this case!) and maximize its ROI in response to market evolutions. What do you think of the records’ efforts to redefine themselves for the modern consumer era? And what lessons can you derive for your company’s target markets? Sound off with your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Bluetooth audio adapters and their creative developers
- LED light bulb manufacturers diversify in search of sustainable profits
- Hardware alterations: Unintended, apparent advantageous adaptations
- Vinyl vs. CD: Readers respond
- Audio myth: Vinyl better than CD?
- Vinyl vs. CD myths refuse to die
The post Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization appeared first on EDN.
Turn pedals into power: A practical guide to human-powered energy

With a pedal generator, you can turn human effort into usable energy—ideal for off-grid setups, emergency backups, or just a fun DIY project. This guide gives you a fast-track look at how pedal generators work and how to build one on your own. Let’s turn motion into power!
Pedal generators, also known as pedal power generators, convert human kinetic energy into usable electrical power through a straightforward electromechanical process. As the user pedals, a rotating shaft drives a DC generator or alternator, producing voltage proportional to the speed and torque applied. A flywheel may be integrated to smooth out fluctuations, while a rectifier and voltage regulator ensure stable output for charging batteries or powering devices.
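To put rough numbers on “voltage proportional to the speed and torque applied”: mechanical power at the crank is torque times angular velocity, and only part of it survives the conversion. The figures below are illustrative assumptions, not measurements:

```python
import math

# Ballpark pedal-generator output (illustrative assumptions).
cadence_rpm = 70.0    # comfortable sustained cadence
torque_nm = 15.0      # sustained crank torque, newton-meters
efficiency = 0.70     # assumed generator + drivetrain efficiency

omega = cadence_rpm * 2.0 * math.pi / 60.0   # crank speed, rad/s
p_mech = torque_nm * omega                   # mechanical watts
p_elec = p_mech * efficiency                 # electrical watts out

print(f"{p_mech:.0f} W mechanical -> {p_elec:.0f} W electrical")
```

A fit adult can sustain figures in this range for a while, which is why pedal power suits device charging rather than whole-house loads.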
Figure 1 A commercial pedal generator delivers power through a standard 12-V automotive outlet. Source: K-Tor
Below is the blueprint of a basic pedal-powered generator built around a standard bicycle dynamo (bottle dynamo). It produces electricity as you pedal—using either your legs or arms—which can be used to charge small batteries or power portable electronics.
Figure 2 This blueprint illustrates how a basic pedal-powered generator works. Source: Author
It’s worth noting that a quick test was performed using the L-7113ID-5V LED as the test lamp/minimal load. Although overall efficiency varies with load and pedaling cadence, the system provides a hands-on demonstration of energy conversion ideal for educational setups.
Chances are you have already spotted that a DC motor can also function as a generator, and that DC motors specifically designed for that purpose are now readily available. Below is a slightly enhanced version of the pedal generator built around a compact three-phase brushless DC (BLDC) motor.
Figure 3 A modestly upgraded pedal generator built around a three-phase brushless DC motor supplies unfiltered DC voltage for further conditioning. Source: Author
Just a quick note: If you are using a linear regulator, the small forward voltage drop you get from a Schottky diode (usually just a few tenths of a volt) does not really move the needle on efficiency. That’s because the regulator itself is dropping a lot more voltage across its control element. Where it does matter is when you are working with a low-dropout (LDO) regulator and trying to keep the output voltage as close as possible to the raw DC input. In that case, every little bit helps.
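The point about diode drops can be made concrete: a linear regulator’s best-case efficiency is roughly Vout over the total supply voltage, so a series diode’s few tenths of a volt barely moves the ratio. The voltages below are illustrative, not taken from the build:

```python
# Linear-regulator efficiency with a series rectifier diode (sketch).
def lin_reg_efficiency(v_reg_in, v_out, v_diode):
    """Best case: P_out / P_in, ignoring quiescent current."""
    return v_out / (v_reg_in + v_diode)

v_reg_in, v_out = 9.0, 5.0   # regulator input and output (illustrative)
for name, vd in (("Schottky", 0.3), ("silicon", 0.7)):
    eff = lin_reg_efficiency(v_reg_in, v_out, vd)
    print(f"{name} diode ({vd} V): {100 * eff:.1f}% efficient")
# The regulator's own (v_reg_in - v_out) drop dominates either diode.
```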
Also, it’s worth noting that readily available three-phase AC micro-generators can serve as viable substitutes, assuming they match your system’s specs. A typical example is the CrocSee Micro 3-phase brushless AC generator (Figure 4).
Figure 4 The micro generator’s internal view shows how elegant engineering simplifies complexity. Source: Author
To set expectations, pedal power is not sufficient to run an entire house, but it can be surprisingly useful. You can generate electricity for powering small devices and recharging batteries, all while using them. Pedal-powered generators can also work in tandem with other renewable sources, such as solar, to create a more versatile and sustainable setup.
On a related note, a pedal-powered bicycle generator (bike generator) is a practical solution that doubles as both an energy source and an exercise machine for household use. There are many ways to build a household bicycle generator, each offering its own set of advantages and trade-offs. Fortunately, even with basic tools and skills, constructing a functional bicycle generator is relatively straightforward.
Figure 5 A simple drawing shows how a household bicycle generator turns pedaling into electricity using a PMDC motor and a friction roller. Source: Author
Keep in mind that a flywheel can be a crucial component in this setup, as the dynamics of pedaling a stationary bicycle differ markedly from those of riding on the road. The flywheel helps smooth out the mechanical input, making the energy conversion process more consistent.
To convert this mechanical energy into electricity, a brushed permanent-magnet DC (PMDC) motor serves well as a generator, offering reliable performance and simplicity. Alternatively, you can use a bicycle hub dynamo instead of the PMDC motor, but this demands some expertise.
Since the flywheel contributes to maintaining a relatively steady voltage output, it’s often feasible to run certain appliances directly from the generator, especially those that can tolerate raw, unregulated voltage. However, electronic devices and batteries are more sensitive to voltage fluctuations. Without proper regulation, they may malfunction or suffer damage, making a voltage regulator or controller a crucial addition to the system.
For a DC output pedal generator, such as the bicycle generator discussed here, a shunt regulator is the more suitable choice. Its ability to clamp excess voltage and safely dissipate surplus energy provides a critical layer of protection that a series regulator simply does not offer. Given the variable and often unpredictable nature of human-powered generation, overvoltage is a real concern, and the shunt regulator is specifically designed to handle this risk.
While a series regulator may offer slightly better efficiency under full load, its inability to manage voltage spikes or operate reliably without a constant load makes it less appropriate for this kind of setup. In contrast, the shunt regulator delivers consistent performance and robust overcharge protection, making it the safer and more practical option for a simple pedal generator system.
Additionally, in certain low-voltage, low-current systems that harvest energy from kinetic sources, pulse frequency modulation (PFM) modules can efficiently manage both power storage and delivery. These modules are particularly useful when energy input is sporadic or minimal, helping to optimize performance in compact or intermittent-generation setups.
Many folks working with motors might be surprised to learn that both brushed DC motors and brushless DC motors can actually function as generators. A brushed DC motor is a solid choice when you need a DC voltage output, while a BLDC motor is better suited for generating AC. If you are using a brushless DC motor to get DC output, you will need a rectifier circuit. On the flip side, if you are trying to get AC from a brushed DC motor, you will need DC-to-AC conversion electronics.
Moreover, it’s often assumed that a brushed DC motor running in generator mode is far less efficient than when it is driving a load as a motor. But with the right motor selection, load matching, and operating speed, you can achieve surprisingly good efficiency. Just be sure to consider both electrical and mechanical factors when dialing in your operating conditions.
Below is a simplified system diagram of a practical pedal power generator.
Figure 6 Here is a system diagram of a pedal generator that helps you build your own version. Source: Author
The core principle is straightforward: the raw input voltage (VT) is continuously monitored and compared against a stable reference voltage (VR). When VT exceeds VR, a power MOSFET activates the dump load, which must be capable of safely dissipating the excess energy.
Conversely, when VT falls below the reference, the dump load is deactivated. To prevent rapid switching near the threshold, it’s advisable to incorporate a small amount of hysteresis into the comparator circuit.
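The dump-load decision described above can be sketched as a simple state function. The threshold and hysteresis-band values here are assumptions for illustration, not from the article:

```python
# Sketch of the dump-load control with hysteresis. The MOSFET turns the
# dump load on above (V_REF + HYST/2), off below (V_REF - HYST/2), and
# holds its state inside the band to avoid rapid switching ("chatter").
V_REF = 14.0    # reference/threshold voltage (V) -- assumed value
HYST = 0.4      # hysteresis band width (V) -- assumed value

def dump_load_state(v_t, currently_on):
    """Return the new on/off state of the dump load for raw input v_t."""
    if v_t > V_REF + HYST / 2:
        return True          # excess energy: activate dump load
    if v_t < V_REF - HYST / 2:
        return False         # voltage sagged: deactivate dump load
    return currently_on      # inside the band: no change, no chatter

# A rising-then-falling sweep shows the band suppressing chatter:
state = False
trace = []
for v in [13.5, 14.1, 14.3, 14.1, 13.9, 13.7]:
    state = dump_load_state(v, state)
    trace.append(state)
# trace -> [False, False, True, True, True, False]
```

Note how 14.1 V produces a different result on the way up (load still off) than on the way down (load still on); that memory is exactly what the comparator hysteresis provides.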
Now it’s over to you; review it, experiment, and bring your own version to life. Keep pedaling forward!
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Energy harvesting
- TEG energy harvesting: hype or hope?
- Bike2: A Novel Powertrain for Electric Bikes
- An Energy-Harvesting Scheme That Is Nearly Useless?
- Energy Harvesting: Maybe Electricity Can Grow on Trees?
The post Turn pedals into power: A practical guide to human-powered energy appeared first on EDN.
DMM Plug-In Test Resistor with temperature sensing

Proper precision calibration resistors are expensive and usually bulky, often in a large box or can. These are overkill for low-cost handheld digital multimeters (DMMs) and LCR Meters, especially when used as a “sanity check” before making critical measurements.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Most of the time, I would use a precision axial-leaded resistor for this purpose. Still, I thought about making something more convenient that provided the precision resistor with some mechanical protection to directly plug into a DMM or LCR meter. If using a high-resolution bench DMM like a Keysight KS34465A or Keithley DMM6500, often you want to know the resistor temperature as well as the resistance value, and the thought came to thermally link the precision resistor with a precision thermistor for this purpose. In the spirit of low-cost DIY, the idea to link the axial-lead precision resistor with an axial-lead precision thermistor with a short section of heat shrink tubing seemed reasonable, and you can’t get much simpler or cheaper than the concept shown in Figure 1!
Figure 1 Linking an axial-lead precision resistor and an axial-lead precision thermistor with a short section of heat shrink tubing.
I still needed mechanical protection for the resistor/thermistor combo and a way to make direct DMM connections. A standard dual banana plug is a good host for the combo, but only has two connection terminals. Using a 3D printer, I created a custom 3D printed “plug” to support the thermistor leads as shown in Figure 2.
Figure 2 A custom, 3D-printed “plug” supports the thermistor leads, allowing for a resistor/thermistor combo with mechanical protection.
Figure 3 shows the resistor/thermistor combo mounting scheme, where the precision axial resistor leads are inserted into the dual banana plug holes and secured by the plug’s internal screws (leave some slack in the resistor leads to help reduce mechanical stress on the precision resistor). Note how the resistor lead ends are looped over, creating small terminals for external “clip lead” or “Kelvin clips” measurements. The axial thermistor leads are threaded through the custom 3D printed plug and loop at the top.
Figure 3 The resistor/thermistor combo mounting scheme, where the precision axial resistor leads are inserted into the dual banana plug holes and secured by the plug’s internal screws.
Overall, the concept creates a small compact holder for the precision resistor and thermistor combo with convenient connections to the resistor measurement instrument directly by the dual banana plug. Temperature measurements use small clip leads to the thermistor leads, which protrude from the top of the dual banana plug and a custom 3D printed plug top, as shown in Figure 4.
Figure 4 A resistor temperature reading showing the small clip leads to the thermistor leads, which protrude from the top of the dual banana plug and custom 3D printed plug.
When using this setup, I found the bench-type DMMs like the DMM6500 have banana input terminals 3~3.5 °C warmer than ambient (the KS34465A was 2~3 °C warmer). This helps explain the long settling time for bench DMM new banana connections to stabilize, where differential thermal EMFs can corrupt sensitive measurements. Handheld DMMs seem to stabilize much quicker since the internal handheld temperature is slightly above ambient, whereas the bench DMMs are much warmer.
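Once the thermistor resistance is read, converting it to a temperature is a one-liner with the Beta model. The article doesn't specify the thermistor part, so the 10-kΩ/B = 3950 values below are assumptions for illustration:

```python
import math

# Convert a measured NTC thermistor resistance to temperature using the
# simple Beta model: 1/T = 1/T25 + ln(R/R25)/B. The 10-kOhm, B = 3950
# part values are assumptions; the article does not name the thermistor.
R25 = 10_000.0   # resistance at 25 degC (ohms) -- assumed
B = 3950.0       # Beta constant (kelvin) -- assumed
T25 = 298.15     # 25 degC expressed in kelvin

def ntc_temp_c(r_measured):
    """Return temperature in degC for a measured NTC resistance in ohms."""
    t_kelvin = 1.0 / (1.0 / T25 + math.log(r_measured / R25) / B)
    return t_kelvin - 273.15

t_at_nominal = ntc_temp_c(10_000.0)   # reads ~25.0 degC at nominal R
t_warmed = ntc_temp_c(5_000.0)        # lower R means a warmer resistor
```

With a calibrated thermistor, the same reading can also feed the precision resistor's temperature-coefficient correction when comparing against its certified value.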
Anyway, I hope some folks find this DIY precision resistor with built-in thermistor concept convenient and useful, although I wouldn’t recommend it for precision resistors below ~100 Ω; that is 4-wire Kelvin territory, and this setup is certainly not considered “Metrology Qualified.”
Michael A Wyatt is a Life Member of IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, ViaSat and retiring (semi) with Wyatt Labs. During his career he accumulated 32 US Patents and in the past published a few EDN articles including Best Idea of the Year in 1989.
Related Content
- Simple 5-component oscillator works below 0.8V
- Injection locking acts as a frequency divider and improves oscillator performance
- Investigating injection locking with DSO Bode function
- DIY custom Tektronix 576 & 577 curve tracer adapters
The post DMM Plug-In Test Resistor with temperature sensing appeared first on EDN.
Accelerator improves RAID array management

Microchip’s Adaptec SmartRAID 4300 series of NVMe RAID storage accelerators speeds access to NVMe storage in AI data centers. It achieves this through a disaggregated architecture that separates storage software from the hardware layer, leveraging dedicated PCIe controllers to offload CPU processing and accelerate RAID operations. Microchip reports the SmartRAID 4300 achieves up to 7× higher I/O performance compared to the previous generation in internal testing.
In the SmartRAID 4300 architecture, storage software runs on the host CPU while the accelerator offloads parity-based redundancy tasks, such as XOR operations. This allows write operations to bypass the accelerator and go directly from the host CPU to NVMe drives at native PCIe speeds. By avoiding in-line bottlenecks, the design supports high-throughput configurations with up to 32 CPU-attached x4 NVMe devices and 64 logical drives or RAID arrays. It is compatible with both PCIe Gen 4 and Gen 5 host systems.
The SmartRAID 4300 accommodates NVMe and cloud-capable SSDs for versatile enterprise deployments. It uses architectural techniques like automatic core idling and autonomous power reduction to optimize efficiency. The accelerator also provides security features, including hardware root of trust, secure boot, attestation, and Self-Encrypting Drive (SED) support to ensure data protection.
For information on production integration, contact Microchip sales or an authorized distributor here.
The post Accelerator improves RAID array management appeared first on EDN.
Rugged film capacitors offer high pulse strength

EPCOS B3264xH double-sided metallized polypropylene film capacitors from TDK withstand pulse stress up to 6500 V/µs. Suited for resonant topologies—particularly LLC designs—these compact, AEC-Q200-compliant capacitors operate continuously from -55°C to +125°C, ensuring reliable performance in harsh environments.
The capacitors cover a rated DC voltage range of 630 V to 2000 V with capacitance values from 2.2 nF to 470 nF. Their specialized dielectric system, combining polypropylene with double-sided metallized PET film electrodes, enables both high pulse strength and current handling. These characteristics make them well-suited for onboard chargers and DC/DC converters in xEVs, as well as uninterruptible power supplies, industrial switch-mode power supplies, and electronic ballasts.
TDK reports that B3264xH capacitors offer high insulation resistance, low dissipation factor, and strong self-healing properties, resulting in a 200,000-hour service life at +85°C and full rated voltage. They are available in three lead spacings—10 mm, 15 mm, and 22.5 mm—to allow integration in space-constrained circuit layouts.
The post Rugged film capacitors offer high pulse strength appeared first on EDN.
Hybrid redrivers aid high-speed HDMI links

With integrated display data channel (DDC) listeners, Diodes’ 3.3-V, quad-channel hybrid ReDrivers preserve HDMI signal integrity for high-resolution video transmission. The PI3HDX12311 supports HDMI 2.1 fixed rate link (FRL) signaling up to 12 Gbps and transition-minimized differential signaling (TMDS) up to 6 Gbps. The PI3HDX6311 supports HDMI 2.0 at up to 6 Gbps.
Both devices operate in either limited or linear mode. In HDMI 1.4 applications, they function as limited redrivers, using a predefined differential output swing—set via swing control—to maintain HDMI-compliant levels at the receptacle. For HDMI 2.0 and 2.1, they switch to linear mode, where the output swing scales with the input signal, effectively acting as a trace canceller. This mode remains transparent to link training signals and, in the PI3HDX12311, supports 8K video resolution and data rates up to 48 Gbps (12 Gbps per channel).
The PI3HDX12311 and PI3HDX6311 provide Dual-Mode DisplayPort (DP++) V1.1 level shifting and offer flexible coupling options, allowing AC, DC, or mixed coupling on both input and output signals. To reduce power consumption, the devices monitor the hot-plug-detect (HPD) pin and enter a low-power state if HPD remains low for more than 2 ms.
In 3500-unit quantities, the PI3HDX12311 costs $0.99 each, and the PI3HDX6311 costs $0.77 each.
The post Hybrid redrivers aid high-speed HDMI links appeared first on EDN.
Bluetooth 6.0 modules target varied applications

KAGA FEI is expanding its Bluetooth LE portfolio with two Bluetooth 6.0 modules that offer different memory configurations. Like the existing EC4L15BA1, the new EC4L10BA1 and EC4L05BA1 are based on Nordic Semiconductor’s nRF54L series of wireless SoCs and integrate a built-in antenna.
The EC4L15BA1 offers the highest memory capacity, with 1.5 MB of NVM and 256 KB of RAM. For applications with lighter requirements, the EC4L10BA1 includes 1.0 MB of NVM and 192 KB of RAM, while the EC4L05BA1 provides 0.5 MB of NVM and 96 KB of RAM. This range enables use cases from industrial IoT and healthcare to smart home devices and cost-sensitive, high-volume designs.
The post Bluetooth 6.0 modules target varied applications appeared first on EDN.
CCD sensor lowers noise for clearer inspections

Toshiba’s TCD2728DG CCD linear image sensor uses lens reduction to cut random noise, enhancing image quality in semiconductor inspection equipment and A3 multifunction printers. As a lens-reduction type sensor, it optically compresses the image before projection onto the sensor. According to Toshiba, the TCD2728DG has lower output amplifier gain than the earlier TCD2726DG and reduces random noise by about 40%, down to 1.9 mV.
The color CCD image sensor features 7500 image-sensing elements across three lines, with a pixel size of 4.7×4.7 µm. It supports a 100-MHz data rate (50-MHz × 2 channels), enabling high-speed processing of large image volumes. This makes it well-suited for line-scan cameras in inspection systems that require real-time decision-making. A built-in timing generator and CCD driver simplify integration and help streamline system development.
The sensor’s input clocks accept a CMOS-level 3.3-V drive. It operates with 3.1-V to 3.5-V analog, digital, and clock driver supplies (VAVDD, VDVDD, VCKDVDD), plus a 9.5-V to 10.5-V supply for VVDD10. Typical RGB sensitivity values are 6.7 V/lx·s, 8.5 V/lx·s, and 3.1 V/lx·s, respectively.
Toshiba has begun volume shipments of the TCD2728DG CCD image sensor in 32-pin WDIPs.
Toshiba Electronic Devices & Storage
The post CCD sensor lowers noise for clearer inspections appeared first on EDN.
Car speed and radar guns

The following would apply to any moving vehicle, but just for the sake of clear thought, we will use the word “car”.
Imagine a car coming toward a radar antenna that is transmitting a microwave pulse which goes out toward that car and then comes back from that car in a time interval called “T1”. Then that same radar antenna transmits a second microwave pulse that also goes out toward that still oncoming car and then comes back from that car, but in a time interval called “T2”. This concept is illustrated in Figure 1.
Figure 1 Car radar timing where T1 is the time it takes for a first pulse to go out toward a vehicle get reflected back to the radiating source, and T2 is the time it takes for a second pulse to go out toward the same vehicle and get reflected back to the radiating source.
The further away the car is, the longer T1 and T2 will be, but if a car is moving toward the antenna, then there will be a time difference between T1 and T2 for which the distance the car has moved will be proportional to that time difference. In air, that scale factor comes to 1.017 nanoseconds per foot (ns/ft) of distance (see Figure 2).
Figure 2 Calculating roundtrip time for a radar signal required to catch a vehicle traveling at 55 mph and 65 mph.
Since we are interested in the time it takes to traverse the distance from the antenna to the car twice (round trip), we would measure a time difference of 2.034 ns/ft of car travel.
A speed radar measures the positional change of an oncoming or outgoing car. Since 60 mph equals 88 ft/s, we know that 55 mph comes to (80+2/3) ft/s. If the interval between transmitted radar pulses were one second, a distance of (80+2/3) feet would correspond to an ABS(T1-T2) time difference of 164.076 ns. A difference in time intervals of more than that many nanoseconds would then be indicative of a driver exceeding a speed limit of 55 mph.
For example, a speed of 65 mph would yield 193.908 ns, and on most Long Island roadways, it ought to yield a speeding ticket.
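The timing numbers above can be reproduced with a few lines of arithmetic, using the article's 1.017 ns/ft one-way scale factor:

```python
# Reproduce the article's round-trip timing numbers for a closing car.
NS_PER_FT_ONE_WAY = 1.017                       # ~1.017 ns/ft in air
NS_PER_FT_ROUND_TRIP = 2 * NS_PER_FT_ONE_WAY    # 2.034 ns/ft, out and back

def delta_t_ns(speed_mph, pulse_interval_s=1.0):
    """|T1 - T2| in ns for a car closing at speed_mph, one pulse per interval."""
    ft_per_s = speed_mph * 88.0 / 60.0           # 60 mph = 88 ft/s
    distance_ft = ft_per_s * pulse_interval_s    # ground covered between pulses
    return distance_ft * NS_PER_FT_ROUND_TRIP

dt_55 = delta_t_ns(55.0)   # ~164.08 ns: at the 55-mph limit
dt_65 = delta_t_ns(65.0)   # ~193.91 ns: ticket territory
```

In practice radar guns fire pulses far more often than once per second, so the per-pulse time differences are proportionally smaller, but the scaling is the same.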
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Mattel makes a real radar gun, on the cheap
- Simple Optical Speed Trap
- Whistler’s DE-7188: Radar (And Laser) Detection Works Great
- Accidental engineering: 10 mistakes turned into innovation
The post Car speed and radar guns appeared first on EDN.
Impedance mask in power delivery network (PDN) optimization

In telecommunication applications, target impedance serves as a crucial benchmark for power distribution network (PDN) design. It ensures that the die operates within an acceptable level of rail voltage noise, even under the worst-case transient current scenarios, by defining the maximum allowable PDN impedance for the power rail on the die.
This article will focus on the optimization techniques to meet the target impedance using a point-of-load (PoL) device, while providing valuable insights and practical guidance for designers seeking to optimize their PDNs for reliable and efficient power delivery.
Defining target impedance
With the rise of high-frequency signals and escalating power demands on boards, power designers are prioritizing noise-free power distribution that can efficiently supply power to the IC. Controlling the power delivery network’s impedance across a certain frequency range is one approach to guarantee proper operation of high-speed systems and meet performance demands.
This impedance can generally be estimated by dividing the maximum allowed ripple voltage by the maximum expected step in load current. The power delivery network’s target impedance (ZTARGET) can be calculated with the following equation:
ZTARGET = VRIPPLE(MAX) / ΔILOAD(MAX)
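As a numeric sketch of that division (the rail numbers here are illustrative assumptions, not values from the article):

```python
# Target impedance from allowed ripple and worst-case load step.
# Illustrative assumptions: a 0.8-V core rail, 3% allowed ripple,
# and a 40-A worst-case transient current step.
V_RAIL = 0.8         # rail voltage (V) -- assumed
RIPPLE_PCT = 0.03    # allowed ripple as a fraction of the rail -- assumed
I_STEP = 40.0        # worst-case current step (A) -- assumed

v_ripple_max = V_RAIL * RIPPLE_PCT          # 24 mV of allowed deviation
z_target_ohms = v_ripple_max / I_STEP       # Z_TARGET = V_ripple / dI
z_target_mohms = z_target_ohms * 1e3        # 0.6 mOhm
```

Sub-milliohm targets like this are why modern core rails need both a capable regulator and a carefully staged decoupling network rather than either one alone.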
Achieving ZTARGET across a wide frequency spectrum requires the power supply to hold the impedance low at lower frequencies, combined with strategically placed decoupling capacitors covering the middle and higher frequencies. Figure 1 shows the impedance frequency characteristics of multi-layer ceramic capacitors (MLCCs).
Figure 1 The impedance frequency characteristics of MLCCs are shown across a wide frequency spectrum. Source: Monolithic Power Systems
Maintaining the impedance below the calculated threshold ensures that even the most severe transient currents generated by the IC, as well as induced voltage noise, remain within acceptable operational boundaries.
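The MLCC impedance-versus-frequency behavior in Figure 1 follows from the standard series R-L-C capacitor model. The component values below are illustrative assumptions for one MLCC, not data from the figure:

```python
import math

# |Z(f)| of an MLCC modeled as a series R-L-C: ESR + ESL + C.
# Values are assumptions chosen to illustrate the V-shaped curve.
ESR = 0.005      # equivalent series resistance: 5 mOhm -- assumed
ESL = 0.5e-9     # equivalent series inductance: 0.5 nH -- assumed
C = 10e-6        # capacitance: 10 uF -- assumed

def z_mag(f_hz):
    """Magnitude of the capacitor's impedance at frequency f_hz."""
    w = 2 * math.pi * f_hz
    x = w * ESL - 1.0 / (w * C)   # net reactance: inductive minus capacitive
    return math.hypot(ESR, x)

# Below resonance the capacitor dominates, above it the ESL dominates,
# and at the self-resonant frequency the impedance bottoms out at ~ESR.
f_res = 1.0 / (2 * math.pi * math.sqrt(ESL * C))   # ~2.25 MHz here
z_at_res = z_mag(f_res)                             # ~= ESR
```

This is why each capacitor value only helps in a band around its self-resonant frequency, and why the PDN needs a mix of values rather than many copies of one part.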
Figure 2 shows the varying target impedance across different frequency ranges, based on data from the Qualcomm website. This means every element in the power distribution network must be optimized at different frequencies.
Figure 2 Here is a target impedance example for different frequency ranges. Source: Qualcomm
Understanding PDN impedance
In theory, a power rail aims for the lowest possible PDN impedance. However, it’s unrealistic to achieve an ideal zero-impedance state. A widely adopted strategy to minimize PDN impedance is placing various decoupling capacitors beneath the system-on-chip (SoC), which flattens the PDN impedance across all frequencies. This prevents voltage fluctuations and signal jitter on output signals, but it’s not necessarily the most effective method to optimize power rail design.
Three-stage low-pass filter approach
To further explore optimizing power rail design, the fundamentals of PDN design must be re-examined in addition to considering new approaches to achieve optimal performance. Figure 3 shows the PDN conceptualized as a three-stage low-pass filter, where each stage of this network plays a specific role in filtering and stabilizing the current drawn from the SoC die.
Figure 3 The PDN is conceptualized as a three-stage low-pass filter. Source: Monolithic Power Systems
The three-stage low-pass filter is described below:
- Current drawn from the SoC die: The process begins with current being drawn from the SoC die. Any current drawn is filtered by the package, which interacts with die-side capacitors (DSCs). This initial filtering stage reduces the current’s slew rate before it reaches the PCB socket.
- PCB layout considerations and MLCCs: Once the current passes through the PCB ball grid arrays (BGAs), the second stage of filtering occurs as the current flows through the power planes on the PCB and encounters the MLCCs. During this stage, it’s crucial to focus on selecting capacitors that effectively operate at specific frequencies. High-frequency capacitors placed beneath the SoC do not significantly influence lower frequency regulation.
- Voltage regulator (VR) with power planes and bulk capacitors: The final stage involves the VR and bulk capacitors, which work together to stabilize the power supply by addressing lower-frequency noise.
The PDN’s three-stage approach ensures that each component contributes to minimizing impedance across different frequency bands. This structured methodology is vital for achieving reliable and efficient power delivery in modern electronic systems.
Case study: Telecom evaluation board analysis
This in-depth examination uses a telecommunications-specific evaluation board from MPS, which demonstrates the capabilities of the MPQ8785, a high-frequency, synchronous buck converter, in a real-world setting. Moreover, this case study underlines the importance of capacitor selection and placement to meet the target impedance.
To initiate the process, PCB parasitic extraction is performed on the MPS evaluation board. Figure 4 shows a top view of the MPQ8785 evaluation board layout, where two ports are selected for analysis. Port 1 is positioned after the inductor, while Port 2 is connected to the SoC BGA.
Figure 4 PCB parasitic extraction is performed on the telecom evaluation board. Source: Monolithic Power Systems
Capacitor models from vendor websites are also included in this layout, with their equivalent series inductance (ESL) and equivalent series resistance (ESR) parasitics. As many capacitors as possible are placed beneath the SoC on the bottom of the PCB to maintain a flat impedance profile.
Table 1 Here is the initial capacitor selection for different quantities of capacitors targeting different frequencies. Source: Monolithic Power Systems
Figure 5 shows a comparison of the target impedance profile defined by the PDN mask for the core rails to the actual initial impedance measured on the MPQ8785 evaluation board using the initially selected capacitors. This graphical comparison enables a direct assessment of the impedance characteristics, facilitating the evaluation of the PDN performance.
Figure 5 Here is a comparison between the target impedance profile and initial impedance using the initially selected capacitors. Source: Monolithic Power Systems
Based on the data from Figure 5, the impedance exceeds the specified limit within the 300-kHz to 600-kHz frequency range, indicating that additional capacitance is required to mitigate this issue. Introducing additional capacitors effectively reduces the impedance in this frequency band, ensuring compliance with the specification.
Notably, high-frequency capacitors are also observed to have a negligible impact on the impedance at higher frequencies, suggesting that their contribution is limited to specific frequency ranges. This insight informs optimizing capacitor selection to achieve the desired impedance profile.
Through an extensive series of simulations that systematically evaluate various capacitor configurations, the optimal combination of capacitors required to satisfy the impedance mask requirements was successfully identified.
Table 2 The results of this iterative process outline the optimal quantity of capacitors and total capacitance. Source: Monolithic Power Systems
The final capacitor selection ensures that the PDN impedance profile meets the specified mask, thereby ensuring reliable power delivery and performance. Figure 6 shows the final impedance with optimized capacitance.
Figure 6 The final impedance with optimized capacitance meets the specified mask. Source: Monolithic Power Systems
With a sufficient margin at frequencies above 10 MHz, capacitors that primarily affect higher frequencies can be eliminated. This strategic reduction minimizes the occupied area and decreases costs while maintaining compliance with all specifications. Performance, cost, and space considerations are effectively balanced by using the optimal combination of capacitors required to satisfy the impedance mask requirements, enabling robust PDN functionality across the operational frequency range.
To extend the case study, the impedance mask was tightened within the 10-MHz to 40-MHz frequency range, decreasing its limit to 10 mΩ. Adding 10 more 0.1-µF capacitors to the evaluation board effectively reduced the impedance in this frequency range.
Figure 7 shows the decreased impedance mask as well as the evaluation board’s impedance response. The added capacitance successfully reduces the impedance within the specified frequency range.
Figure 7 The decreased PDN mask with optimized capacitance reduces impedance within the specified frequency range. Source: Monolithic Power Systems
Compliance with impedance mask
This article used the MPQ8785 evaluation board to optimize PDN performance, ensuring compliance with the specified impedance mask. Through this optimization process, models were developed to predict the impact of various capacitor types on impedance across different frequencies, which facilitates the selection of suitable components.
Capacitor selection for optimized power rail design depends on the specific impedance mask and frequency range of interest. A random selection of capacitors for a wide variety of frequencies is insufficient for optimizing PDN performance. Furthermore, the physical layout must minimize parasitic effects that influence overall impedance characteristics, where special attention must be given to optimizing the layout of capacitors to mitigate these effects.
Marisol Cabrera is application engineer manager at Monolithic Power Systems (MPS).
Albert Arnau is application engineer at Monolithic Power Systems (MPS).
Robert Torrent is application engineer at Monolithic Power Systems (MPS).
Related Content
- SoC PDN challenges and solutions
- Power 107: Power Delivery Networks
- Debugging approach for resolving noise issues in a PDN
- Optimizing capacitance in power delivery network (PDN) for 5G
- Power delivery network design requires chip-package-system co-design approach
The post Impedance mask in power delivery network (PDN) optimization appeared first on EDN.
Flip ON Flop OFF: high(ish) voltages from the positive supply rail

We’ve seen lots of interesting conversations and Design Idea (DI) collaboration devising circuits for power switching using inexpensive (and cute!) momentary-contact SPST pushbuttons. A recent and interesting extension of this theme by frequent contributor R Jayapal addresses control of relatively high DC voltages: 48 volts in his chosen case.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In the course of implementing its high voltage feature, Jayapal’s design switches the negative (Vss a.k.a. “ground”) rail of the incoming supply instead of the (more conventional) positive (Vdd) rail. Of course, there’s absolutely nothing physically wrong with this choice (certainly the electrons don’t know the difference!). But because it’s a bit unconventional, I worry that it might create possibilities for the unwary to make accidental, and potentially destructive, misconnections.
Figure 1’s circuit takes a different tack to avoid that.
Figure 1 Flip ON/Flop OFF referenced to the V+ rail. If V+ < 15 V, then set R4 = 0 and omit C2 and Z1. Ensure that C2’s voltage rating is > (V+ – 15 V), and if V+ > 80 V, make R4 > 4(V+)² Ω.
Figure 1 returns to an earlier theme of using a PFET to switch the positive rail for power control, and a pair of unbuffered CMOS inverters to create a toggling latch to control the FET. The basic circuit is described in “Flip ON Flop OFF without a Flip/Flop.”
What’s different here is that all circuit nodes are referenced to V+ instead of ground, and Zener Z1 is used to synthesize a local bias reference. Consequently, any V+ rail up to the limit of Q1’s Vds rating can be accommodated. Of course, if even that’s not good enough, higher rated FETs are available.
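One plausible reading of the caption's R4 sizing rule (assuming it means R4 > 4(V+)² in ohms) is that it keeps R4's dissipation under a ¼-W rating: R4 drops roughly (V+ − 15 V), and approximating that drop as V+ gives P ≈ (V+)²/R4 < 0.25 W. A sketch for an assumed 48-V rail:

```python
# Sizing the Zener bias resistor R4 against a 1/4-W dissipation limit.
# Interpretation of the caption rule and the 48-V rail are assumptions.
V_PLUS = 48.0    # supply rail (V) -- assumed example
V_ZENER = 15.0   # Z1 synthesizes a 15-V local bias below V+

r4_min = 4 * V_PLUS ** 2                        # conservative bound: 9216 ohms
p_actual = (V_PLUS - V_ZENER) ** 2 / r4_min     # true dissipation with that R4
i_bias_ma = (V_PLUS - V_ZENER) / r4_min * 1e3   # resulting Zener bias current, mA

# Because the bound uses V+ rather than (V+ - 15 V), the real dissipation
# lands comfortably below 0.25 W while still leaving a few mA of Zener bias.
```

At 48 V this puts R4 near 10 kΩ with a few milliamps through Z1, comfortably inside a ¼-W resistor's rating.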
Be sure to tie the inputs of any unused U1 gates to V+.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Flip ON flop OFF
- Flip ON Flop OFF for 48-VDC systems
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Latching D-type CMOS power switch: A “Flip ON Flop OFF” alternative
The post Flip ON Flop OFF: high(ish) voltages from the positive supply rail appeared first on EDN.
The next AI frontier: AI inference for less than $0.002 per query

Inference is rapidly emerging as the next major frontier in artificial intelligence (AI). Historically, the AI development and deployment focus has been overwhelmingly on training, with approximately 80% of compute resources dedicated to it and only 20% to inference.
That balance is shifting fast. Within the next two years, the ratio is expected to reverse to 80% of AI compute devoted to inference and just 20% to training. This transition is opening a massive market opportunity with staggering revenue potential.
Inference has a fundamentally different profile—it requires lower latency, greater energy efficiency, and more predictable real-time responsiveness than training does. Training-optimized hardware, pressed into inference service, brings excessive power consumption, underutilized compute, and inflated costs.
When deployed for inference, training-optimized computing resources yield a cost per query one or even two orders of magnitude higher than the $0.002-per-query benchmark established by a 2023 McKinsey analysis, which was based on Google’s 2022 search activity, estimated at an average of 100,000 queries per second.
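To get a feel for where such numbers come from, here is a toy cost-per-query model. All inputs below are illustrative placeholders of my own choosing, not figures from the McKinsey analysis or any vendor:

```python
# Back-of-envelope cost-per-query model: amortized hardware cost plus
# energy cost, divided by sustained query throughput. All numbers in
# the example call are made up for illustration.

def cost_per_query(capex_usd: float, lifetime_s: float, power_w: float,
                   usd_per_kwh: float, queries_per_s: float) -> float:
    """Dollars per query from amortized capex and electricity."""
    capex_per_s = capex_usd / lifetime_s
    energy_per_s = (power_w / 1000.0) * (usd_per_kwh / 3600.0)
    return (capex_per_s + energy_per_s) / queries_per_s

# Example: $30k accelerator amortized over 3 years, 1-kW draw,
# $0.10/kWh, 50 sustained queries per second.
c = cost_per_query(30_000, 3 * 365 * 24 * 3600, 1000, 0.10, 50)
print(f"${c:.6f} per query")
```

The model makes the article’s point concrete: halving power or doubling sustained throughput directly cuts the per-query figure, which is why inference-optimized silicon competes on exactly those axes. (Real deployments add facility overhead, cooling, and utilization factors this sketch omits.)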
Today, the market is dominated by a single player whose quarterly results reflect its stronghold. While a competitor has made some inroads and is performing respectably, it has yet to gain meaningful market share.
One reason is architectural similarity; by taking a similar approach to the main player, rather than offering a differentiated, inference-optimized alternative, the competitor faces the same limitations. To lead in the inference era, a fundamentally new processor architecture is required. The most effective approach is to build dedicated, inference-optimized infrastructure, an architecture specifically tailored to the operational realities of processing generative AI models like large language models (LLMs).
This means rethinking everything from compute units and data movement to compiler design and LLM-driven architectures. By focusing on inference-first design, it’s possible to achieve significant gains in performance-per-watt, cost-per-query, time-to-first-token, output-token-per-second, and overall scalability, especially for edge and real-time applications where responsiveness is critical.
This is where the next wave of innovation lies—not in scaling training further, but in making inference practical, sustainable, and ubiquitous.
The inference trinity
AI inference hinges on three critical pillars: low latency, high throughput and constrained power consumption, each essential for scalable, real-world deployment.
First, low latency is paramount. Unlike training, where latency is relatively inconsequential—a job taking an extra day or costing an additional million dollars is still acceptable as long as the model is successfully trained—inference operates under entirely different constraints.
Inference must happen in real time or near real time, with extremely low latency per query. Whether it’s powering a voice assistant, an autonomous vehicle or a recommendation engine, the user experience and system effectiveness hinge on sub-millisecond response times. The lower the latency, the more responsive and viable the application.
Second, high throughput at low cost is essential. AI workloads involve processing massive volumes of data, often in parallel. To support real-world usage—especially for generative AI and LLMs—AI accelerators must deliver high throughput per query while maintaining cost-efficiency.
Vendor-specified throughput often falls short of its peak targets in real AI workloads, a consequence of low-efficiency architectures like GPUs, and the shortfall matters all the more now that the economics of inference are under intense scrutiny. These are high-stakes battles, where cost per query is not just a technical metric—it’s a competitive differentiator.
Third, power efficiency shapes everything. Inference performance cannot come at the expense of runaway power consumption. This is not only a sustainability concern but also a fundamental limitation in data center design. Lower-power devices reduce the energy required for compute, and they ease the burden on the supporting infrastructure—particularly cooling, which is a major operational cost.
The trade-off can be viewed from the following two perspectives:
- A new inference device that delivers the same performance at half the energy consumption can dramatically reduce a data center’s total power draw.
- Alternatively, maintaining the same power envelope while doubling compute efficiency effectively doubles the data center’s performance capacity.
Bringing inference to where users are
A defining trend in AI deployment today is the shift toward moving inference closer to the user. Unlike training, inference is inherently latency-sensitive and often needs to occur in real time. This makes routing inference workloads through distant cloud data centers increasingly impractical—from both a technical and economic perspective.
To address this, organizations are prioritizing edge-based inference: processing data locally or near the point of generation. Shortening the network path between the user and the inference engine significantly improves responsiveness, reduces bandwidth costs, enhances data privacy, and ensures greater reliability, particularly in environments with limited or unstable connectivity.
This decentralized model is gaining traction across industry. Even AI giants are embracing the edge, as seen in their development of high-performance AI workstations and compact data center solutions. These innovations reflect a clear strategic shift: enabling real-time AI capabilities at the edge without compromising on compute power.
Inference acceleration from the ground up
One high-tech company, for example, is setting the engineering pace with a novel architecture designed specifically to meet the stringent demands of AI inference in data centers and at the edge. The architecture breaks away from legacy designs optimized for training workloads with near-theoretical performance in latency, throughput, and energy efficiency. More entrants are certain to follow.
Below are some of the highlights of this inference technology revolution in the making.
Breaking the memory wall
The “memory wall” has challenged chip designers since the late 1980s. Traditional architectures attempt to mitigate the impact on performance introduced by data movement between external memory and processing units by layering memory hierarchies, such as multi-layer caches, scratchpads and tightly coupled memory, each offering tradeoffs between speed and capacity.
In AI acceleration, this bottleneck becomes even more pronounced. Generative AI models, especially those based on incremental transformers, must constantly reprocess massive amounts of intermediate state data. Conventional architectures struggle here. Every cache miss—or any operation requiring access outside in-memory compute—can severely degrade performance.
One approach collapses the traditional memory hierarchy into a single, unified memory stage: a massive SRAM array that behaves like a flat register file. From the perspective of the processing units, any register can be accessed anywhere, at any time, within a single clock. This eliminates costly data transfers and removes the bottlenecks that hamper other designs.
Flexible computational tiles, each with 16 high-performance processing cores dynamically reconfigurable at run-time, execute either AI operations, like multi-dimensional matrix operations (ranging from 2D to N-dimensional), or advanced digital signal processing (DSP) functions.
Precision is also adjustable on-the-fly, supporting formats from 8 bits to 32 bits in both floating point and integer. Both dense and sparse computation modes are supported, and sparsity can be applied on the fly to either weights or data—offering fine-grained control for optimizing inference workloads.
Each core features 16 million registers. While such a vast register file presents challenges for traditional compilers, two key innovations come to the rescue:
- Native tensor processing, which handles vectors, tensors, and matrices directly in hardware, eliminates the need to reduce them to scalar operations and manually implement nested loops, as required in GPU environments like CUDA.
- High-level abstraction lets developers interact with the system through PyTorch and ONNX for AI, and Matlab-like functions for DSP, without writing low-level code or managing registers manually. This simplifies development and significantly boosts productivity and hardware utilization.
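The first of those two points is easiest to appreciate side by side. Purely as an analogy (plain Python loops and a NumPy expression standing in for scalar-reduced versus tensor-native execution; this is not the vendor’s toolchain):

```python
import numpy as np

# Analogy only: the "scalar" path a kernel author hand-writes in a
# CUDA-style environment, versus a single tensor-native operation
# (NumPy here standing in for hardware that consumes matrices directly).

def matmul_scalar(a, b):
    """Matrix multiply reduced to explicit nested scalar loops."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i][j] += a[i][p] * b[p][j]
    return out

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]

looped = matmul_scalar(a, b)
native = np.array(a) @ np.array(b)   # one tensor-native expression
print(looped, native.tolist())
```

Both paths compute the same product; the difference is that the tensor-native form leaves loop ordering, tiling, and scheduling to the compiler and hardware rather than the programmer.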
Chiplet-based scalability
A physical implementation leverages a chiplet architecture, with each chiplet comprising two computational cores. By combining chiplets with high-bandwidth memory (HBM) chiplet stacks, the architecture enables highly efficient scaling for both cloud and edge inference scenarios.
- Data center-grade inference configurations pair eight VSORA chiplets with eight HBM3e chiplet stacks, delivering 3,200 TFLOPS of compute performance in FP8 dense mode, optimized for large-scale inference workloads in data centers.
- Edge AI configurations allow efficient tailoring of compute resources and lower memory requirements to suit edge constraints. Here, two chiplets + one HBM chiplet = 800 TFLOPS and four chiplets + one HBM chiplet = 1,600 TFLOPS.
Power efficiency as a side effect
The performance gains are clear, and so is the power efficiency. The architecture delivers twice the performance-per-watt of comparable solutions. In practical terms, the chip draws just 500 watts, compared to over one kilowatt for many competitors.
When combined, these innovations provide multiple times the actual performance at less than half the power—offering an overall advantage of 8 to 10 times compared to conventional implementations.
CUDA-free compilation
One often-overlooked advantage of the architecture lies in its streamlined and flexible software stack. From a compilation perspective, the flow is simplified compared to traditional GPU environments like CUDA.
The process begins with a minimal configuration file—just a few lines—that defines the target hardware environment. This file enables the same codebase to execute across a wide range of hardware configurations, whether that means distributing workloads across multiple cores, chiplets, full chips, boards, or even across nodes in a local or remote cloud. The only variable is execution speed; the functional behavior remains unchanged. This makes on-premises and localized cloud deployments seamless and scalable.
A familiar flow without complexity
Unlike CUDA-based compilation processes, the flow has no layers of manual tuning and complexity; it takes a more automated, hardware-agnostic approach.
The flow begins by ingesting standard AI inputs, such as models defined in PyTorch. These are processed by a proprietary graph compiler that automatically performs essential transformations such as layer reordering or slicing for optimal execution. It extracts weights and model structure and then outputs an intermediate C++ representation.
This C++ code is then fed into an LLVM-based backend, which identifies the compute-intensive portions of the code and maps them to the architecture. At this stage, the system becomes hardware-aware, assigning compute operations to the appropriate configuration—whether it’s a single A tile, an edge device, a full data center accelerator, a server, a rack or even multiple racks in different locations.
Invisible acceleration for developers
From a developer’s point of view, the accelerator is invisible. Code is written as if it targets the main processor. During compilation, the flow identifies the code segments best suited for acceleration and transparently handles their transformation and mapping to hardware. This lowers the barrier to adoption: no low-level register manipulation or specialized programming knowledge is required.
The instruction set is high-level and intuitive, carrying over capabilities from its origins in digital signal processing. The architecture supports AI-specific formats such as FP8 and FP16, as well as traditional DSP operations like FP16 arithmetic, all handled automatically on a per-layer basis. Switching between modes is instantaneous and requires no manual intervention.
Pipeline-independent execution and intelligent data retention
A key architectural advantage is pipeline independence—the ability to dynamically insert or remove pipeline stages based on workload needs. This gives the system a unique capacity to “look ahead and behind” within a data stream, identifying which information must be retained for reuse. As a result, data traffic is minimized, and memory access patterns are optimized for maximum performance and efficiency, reaching levels unachievable in conventional AI or DSP systems.
Built-in functional safety
To support mission-critical applications such as autonomous driving, functional safety features are integrated at the architectural level. Cores can be configured to operate in lockstep mode or in redundant configurations, enabling compliance with strict safety and reliability requirements.
In the final analysis, a memory architecture that eliminates traditional bottlenecks, compute units tailored for tensor operations, and unmatched power efficiency set a new standard for AI inference.
Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation.
Related Content
- AI at the edge: It’s just getting started
- Custom AI Inference Has Platform Vendor Living on the Edge
- Partitioning to optimize AI inference for multi-core platforms
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
The post The next AI frontier: AI inference for less than $0.002 per query appeared first on EDN.
Why modulate a power amplifier?—and how to do it

We recently saw how certain audio power amplifiers can be used as oscillators. This Design Idea shows how those same parts can be used for simple amplitude modulation, which is trickier than it might seem.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The relevant device is the TDA7052A, which we explored in some detail while making it oscillate. It has a so-called logarithmic gain-control input, the gain in dBs being roughly proportional to the voltage on that pin over a limited range.
However, we may want a reasonably linear response, which would mean undoing some of the chip designers’ careful work.
First question: why—what’s the application?
The original purpose of this circuit was to amplitude-modulate the power output stage of an infrasonic microphone. That gadget generated both the sub-10‑Hz baseband signal and an audio tone whose pitch varied linearly with it, allowing one to hear at least a proxy for the infrasonics. The idea was to keep the volume low during relatively inactive periods and only increase it during the peaks, whether those were positive or negative, so that frequency and amplitude modulation would work hand in hand.
The two basic options are to use the device’s inherent “log” law (more like antilog), so that the perceived loudness was modulated, or to feed the control pin with a logarithmically-squashed signal—the inverse of the gain-control curve—to linearize the modulation. The former is simpler but sounded rather aggressive; the latter, more complicated but smoother, so we’ll concentrate on that. The gain-control curve from the datasheet, overlaid with real-life measurements, is shown in Figure 1. Because we need gain to drive the speaker, we can only use the upper, more bendy, part of the curve, with around 26 dB of gain variation available.
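To see why the inverse (“log-squashed”) drive linearizes a dB-linear control pin, a small numeric sketch helps. The slope and offset below are made-up placeholders of mine, not TDA7052A datasheet values:

```python
import math

# Sketch of a dB-linear ("log") gain-control pin and its inverse drive.
# K_DB_PER_V and V0 are hypothetical placeholders, not datasheet figures.

K_DB_PER_V = 60.0   # assumed control slope, dB of gain per volt on the pin
V0 = 0.6            # assumed control voltage giving 0 dB (unity) gain

def gain_linear(vcon: float) -> float:
    """Linear (amplitude) gain of a dB-linear control characteristic."""
    return 10 ** (K_DB_PER_V * (vcon - V0) / 20.0)

def vcon_for_gain(g: float) -> float:
    """Inverse: the log-squashed control voltage for a wanted linear gain g."""
    return V0 + 20.0 * math.log10(g) / K_DB_PER_V

# Driving the pin through the inverse mapping makes gain track the target:
for target in (0.25, 0.5, 1.0, 2.0):
    print(target, gain_linear(vcon_for_gain(target)))
```

Feed the pin the raw modulating signal and you get the exponential (“antilog”) loudness response; feed it through the inverse mapping, approximated in hardware by the diode network described later, and amplitude tracks the modulation linearly.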
Figure 1 The TDA7052A’s control voltage versus its gain, with the theoretical curve and practical readings.
For accurate linear performance, an LM13700 OTA configured as an amplitude modulator worked excellently, but needed a separate power output stage and at least ±6-V supplies rather than the single, split 5-V rail used for the rest of the circuitry. An OTA’s accuracy and even precision are not needed here; we just want the result to sound right, and can cut some corners. (The LM13700’s datasheet is full of interesting applications.)
Next question: how?
At the heart of this DI is an interesting form of full-wave rectifier. We’ll look at it in detail, and then pull it to pieces.
If we take a paralleled pair of current sources, one inverting and the other not, we can derive a current proportional to the absolute value of the input: see Figure 2.
Figure 2 A pair of current sources can make a novel full-wave rectifier.
The upper, inverting, section sources current towards ground when the input is positive (with respect to the half-rail point), and the lower, non-inverting part does so for negative half-cycles. R1 sets the transconductance for both stages. Thus, the output current is a function of the absolute value of the input voltage. It’s shown as driving R4 to produce a voltage with respect to 0 V, which sounds more useful than it really is.
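The behavior just described can be captured in a few lines. This is an idealized behavioral model only (real-device offsets make the measured peak slightly lower than the ideal value):

```python
# Idealized behavioral model of Figure 2's rectifier: two paralleled
# transconductance stages, one conducting on each half-cycle, whose
# summed output current is abs(Vin) / R1.

R1 = 100e3  # ohms; sets the transconductance of both stages

def i_out(v_in: float) -> float:
    """Output current (A) for v_in (V, measured from the half-rail point)."""
    i_inverting = max(v_in, 0.0) / R1       # upper stage: positive half-cycles
    i_noninverting = max(-v_in, 0.0) / R1   # lower stage: negative half-cycles
    return i_inverting + i_noninverting     # equals abs(v_in) / R1

print(i_out(2.5) * 1e6, i_out(-2.5) * 1e6)  # about 25 uA, either polarity
```

With R1 = 100k, the ideal peak for a ±2.5-V input is 25 µA; the 23 µA measured in the actual circuit sits just under that, as the small real-device errors would predict.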
Conventional full-wave rectifiers usually have a voltage output, stored on a capacitor, and representing the peak levels. This circuit can’t do that: connecting a capacitor across R4 merely averages the signal. To extract the peaks, another stage would be needed: pointless. By the way, the original thoughts for this stage were standard precision rectifiers with incorporated or added current sources, but they proved to be more complicated while performing no better—except for inputs below ~5 mV, where they had less “crossover distortion.”
The maximum output voltage swing is limited by the ratios of R4 to R2 (or R3). Excessive positive inputs will tend to saturate Q1, so VOUT can approach Vs/2. (The transistor’s emitter is servoed to Vs/2.) With R4 = R2 = R3, negative swings saturate Q2, but the ratio of R3 and R4 means that VOUT can only approach Vs/4. Q1 and Q2 respond differently to overloads, with Q2’s circuit folding back much sooner. If R2, R3, and R4 are all equal, the maximum unclipped voltage swing across R4 is just less than a quarter of the supply rail voltage.
Increasing R1 and making R4 much greater than R2 or R3 allows a greater swing for those negative inputs, but at the expense of increased offset errors. Adding an extra gain stage would give those same problems while needing more parts.
Applying the current source to the power amp
Conclusion: This circuit is great for sourcing a current to ground, but if you need a linear voltage output, it’s less useful. We don’t want linearity but something close to a logarithmic response, or the inverse of the power amp’s control voltage. Feeding the current through a network containing a diode can do just that, and the resulting circuit is shown in Figure 3.
Figure 3 Schematic of a power amplifier that is amplitude-modulated using the dual current source.
The current source is just as described above. With R1 = 100k, the output peaks at 23 µA for ±2.5 V inputs. That current feeds the network R4/R5/D3, which suitably squashes the signal, ready for buffering into A2’s Vcon input. The gain characteristic is now much more linear, as the waveforms in Figure 4 indicate. The TDA7052A’s Vcon pin normally either sinks or sources current, but emitter follower Q3 overrides that as well as buffering the output from the network.
Figure 4 Some waveforms from Figure 3, showing its operation.
To show the operation more cleanly, the plots were made using a 10-Hz tri-wave to modulate a 700-Hz sine wave. (The target application would have an infrasonic signal—from, say, 300 mHz to 10 Hz—modulating a pitch-linear audio tone ranging from about 250 to 1000 Hz depending on the signal’s absolute level.)
Some further notes on the circuitry
The values for R4/R5/D3 were optimized by a process of heuristic iteration, which is fancy-speak for lots of fiddling with trimmers until things looked right on the ’scope. These worked for me with the devices to hand. Others gave similar results; the absolute values are less important than the overall combination.
R7 and R8 may seem puzzling: there’s nothing like them on the PA’s datasheet. I found that applying a little bias to the audio input pin helps minimize the chip’s internal offsets, which otherwise cause some (distorted) feedthrough from the control voltage to the outputs. With a modulating input but no audio present, trim R7 for minimum signal at the output(s). The difference is barely audible, but it shows up clearly on a ’scope as traces that are badly slewed.
The audio feed needs to come from a volume-control pot. While it might seem more obvious to incorporate gain control in the network driving A2.4—after all, that’s the primary function of that pin—that proved over-complicated, and introduced yet more temperature effects.
Temperature effects! The current source is (largely) free of them, but D3, Q3, and A2 aren’t, and I have made no attempt to compensate for their contributions. The practical solution is to make R6 variable: a large, user-friendly knob labeled “Effect”, thus turning the problem into A Feature.
A2’s Vcon pin sinks/sources some (temperature-dependent) current, so varying R6 allows reasonable, if manual, temperature compensation. Because its setting affects both the gain and the part of the gain curve that we are using, the effective baseline is shifted, allowing more or less of the audio corresponding to low-level modulating signals to pass through. Figure 5 shows its effect on the output at around 20°C.
Figure 5 Varying R6 helps compensate for temperature problems and allows different audible effects.
Don’t confuse this circuit with a “proper” amplitude modulator! But for taking an audio signal, modulating it reasonably linearly, and driving the result directly into a speaker, it works well. The actual result can be seen in Figure 6, which shows both the detected infrasonic signal resulting from a gusty day and the audio output, whose frequency changes are invisible with the timebase used, but whose amplitude can be seen to track the modulating signal quite nicely.
Figure 6 A real-life infrasonic signal with the resulting audio modulated in both frequency (too fast to show up here) and amplitude.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Power amplifiers that oscillate— Part 1: A simple start.
- Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.
- Revealing the infrasonic underworld cheaply, Part 1
- Revealing the infrasonic underworld cheaply, Part 2
- Ultra-low distortion oscillator, part 1: how not to do it.
- Ultra-low distortion oscillator, part 2: the real deal
The post Why modulate a power amplifier?—and how to do it appeared first on EDN.
Disassembling a LED-based light that’s not acting quite right…right?

A few months back, I came across an LED-based desk lamp queued up to go out to the trash. When I asked my wife about it, she said (or at least my recollection is that she said) that it had gone dim, so she’d replaced it with another one. But the device didn’t include any sort of “dimmer” functionality, and I wasn’t (at the time, at least) aware that LED lighting’s inherent intensity could fade over time, only that it would inevitably flat-out fail at some point.
My curiosity sufficiently piqued, especially since I’d intercepted it on the way to the landfill anyway, I decided to take it apart first. It’s Hampton Bay’s 15.5 in. Black Indoor LED Desk Lamp, originally sold by Home Depot and currently “out of stock” both in-store and online; I assume it’s no longer available for purchase. Here are some stock shots of what it looks like, to start:
See: no dimmer. Just a simple on/off toggle:
I don’t remember when we bought it or what we paid for it; it had previously resided on my wife’s sewing table. The Internet Archive has four “snapshots” of the page, ranging from the end of June 2020, when it was apparently on sale for $14.71 versus the $29.97 MSRP (I hope we snagged it then!), through early December of last year. My wife took up sewing during the COVID-19 lockdown, so a 2020-era acquisition sounds about right.
Here’s what it looks like in “action” (if you can call it that) in my furnace room, striving (and effectively failing) to differentiate its “augmentation” of the baseline overhead lighting:
Turn off the room light, and the lamp’s standalone luminary capabilities still aren’t impressive:
And here’s a close-up of the light source in “action”, if you can call it that, in my office:
Scan through the reviews on the product page and, unless I overlooked something, you won’t find anyone complaining that it’s not bright enough. Several of the positive reviews go so far as to specifically note that it’s very bright. And ironically, one of the (few) negative ones indicates that it’s too bright. The specs claim that it has a 3W output (no explicit lumens rating, however, let alone a color temperature), which roughly translates to a 30W incandescent equivalent.
Time to dive in. Let’s begin with the underside, where a label is attached to a felt “foot”:
A Google search on “Arcadia AL40165” reveals nothing meaningful results-wise aside from the Home Depot product page. “Intertek 4000145” isn’t any more helpful. And, regardless of when we actually bought it, this particular unit was apparently manufactured in December 2016.
Peeling back the felt “foot”, I was initially confused by the three closed-end crimp connectors revealed underneath:
until I peeled it away the rest of the way and…oh yeah, the on/off switch:
Note the wiring colors. Typically, in my experience, the “+” DC feed corresponds to the white wire, with the “-“ return segment handled by the black wire, and the “+” portion of the circuit is what’s switched. This implementation seems opposite of that convention. Hold that thought.
Now for the light source. With the lamp switched off, you can see what appears to be a central LED surrounded by several others in circumference. Conceptually, this matches the arrangement I’ve seen before with LED light bulbs, so my initial working theory was that whatever circuitry was driving the LEDs in the perimeter had failed, leaving only the central one still operational. Why there would be such a two-stage arrangement at all wasn’t obvious, although I postulated that this same hardware might find use in another lamp with a three-position (bright/dim/off) toggle switch.
Removing the diffuser:
unfortunately dashed that theory; there was only a single LED in the center:
Here’s what it looks like illuminated, this time absent the diffuser:
A brief aside at this point: what’s with the second “right?” in the title? Well, when I mentioned to my wife the other day that I’d completed the teardown but hadn’t definitively figured out why the lamp had dimmed over time, she now said that to the best of her recollection, it had always been dim. Hmmm. If indeed I’d previously misunderstood her (and of course, my default stance is to always assume my wife is right…right?), then what we have is a faulty LED from the get-go. But just for grins, let’s pretend my dimmer-over-time recollection is correct and proceed.
One other root cause possibility is that the power supply feeding the LED is in the process of failing, thereby outputting under-spec voltage and/or current. Revisiting the earlier white-vs-black wire discussion, when I initially probed the solder connections with my multimeter using standard polarity conventions, I got a negative voltage reading:
The LED theoretically could have been operating in reverse-bias breakdown (albeit presumably not for long). But more likely, in conjunction with the earlier-mentioned switch location in the circuit, the wire colors were just reversed. Yep, that’s more like it:
Note that their connections to the LED might still be reversed, however. Or perhaps the lamp’s power supply was current output-compromised. To test both of these suppositions, I probe-connected and fueled the LED with my inexpensive-and-passable power supply instead:

With the connections using standard white vs. black conventions, I got…nothing. Reversed, the LED light output weakly matched that delivered when driven by the lamp’s own power supply. And my standalone power supply also informed me that the lamp pulls 180 mA at 10 V.
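That operating point makes for a quick sanity check against the 3-W spec (trivial arithmetic, but worth writing down):

```python
# Measured at the bench supply: the lamp pulls 180 mA at 10 V.
v_led = 10.0   # volts
i_led = 0.180  # amps

p_measured = v_led * i_led  # watts actually delivered to the LED
print(f"{p_measured:.2f} W measured vs. the 3 W claimed in the specs")
```

That’s 1.8 W, well short of the rated 3-W output; it's consistent with the underwhelming light output, though it doesn’t by itself distinguish a weak supply from a degraded LED.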
About that “lamp’s own power supply”, by the way (as-usual accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes):
The label refers to it as an “LED Driver,” but I’m guessing that it’s just a normal “wall wart”, absent a plug on the output end. And a Google search of “JG-LED1-5UPPL” (that’s the number 5, not an S, by the way) further bolsters that hypothesis (“Intertek 4002637” conversely wasn’t helpful at all, aside from suggesting that this power supply unit (PSU) was originally intended for a different lamp model). But I’m still baffled by the “DC5-10V MAX” notation in the labeled output specs…???
And removing two more screws, here’s what the plate the LED is mounted to looks like when separated from the “heatsink” behind it (note the trivial dab of thermal paste between them):
All leaving me with the same question I had at the start: what caused the LED-based desk lamp’s light output to dim, either over time or from the very beginning (depending on which spouse’s story you’re going with)? The most likely remaining culprit, I’m postulating, is the phosphor layer above the LED. I’ve already noted the scant-at-best heat-transfer interface between the LED and the metal plate behind it. More generally, as this device definitely exemplifies, my research suggests that inexpensive designs skimp on the number of LEDs to keep the BOM cost down, compensating by overdriving the one(s) that remain. The resulting thermal stress prematurely breaks down the phosphor, resulting in color temperature shifts and reduced light output, along with eventual complete component failure.
That’s my take; what’s yours? Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- LDR PC Desk Lamp
- Constant-current wall warts streamline LED driver design for lamps, cabinet lights
- Magic carpets come alive with LEDs
- Can GE’s new LED bulbs help you get to sleep?
- Six LED challenges that still remain
- LED lamp cycles on and off, why?
The post Disassembling a LED-based light that’s not acting quite right…right? appeared first on EDN.
Triac and relay combination

Check out this link for On Semiconductor’s datasheet for a line of “Zero-Cross Optocoupled Triac Drivers.”
The ability to zero-cross when turning on AC line voltage to some loads may be advantageous. The following sketch in Figure 1 is a slightly simplified version of one circuit from the above datasheet.
Figure 1 A simplified triac drive arrangement that will turn on AC line voltage to a load.
The zero-crossover behavior of the triac and its driver operates nicely as the control input signal at pin 2 decides if AC is applied to the load or not. However, I had a somewhat different triac control requirement calling for two manually operated pushbuttons, one for turning AC power on and the other for turning AC power off while preserving the zero-crossover feature. Another issue was that at the required load power, the thermal burden to be borne by the controlled triac was much too severe.
The thermal burden on the triac was relieved as follows in Figure 2.
Figure 2 The revised triac drive arrangement with a relay added such that when the pushbutton is pressed, the triac turns on AC to the load using its zero-crossover feature.
A relay was added whose coil was tied in parallel with the load and whose normally open contacts were in parallel with the anode and cathode of the triac.
When the pushbutton was pressed “on,” the triac would turn on AC to the load using its zero-crossover feature, and the relay contacts would then close across the triac. When the relay contacts closed, the load current burden shifted away from the triac to the relay. The triac only needed to conduct for the duration of the relay’s closure time, which, in the case I was working on, was approximately 50 ms, or just a little longer than three cycles of the input AC line voltage.
We had the zero-crossover benefits, and the triac never even got warm.
One normally-open pushbutton for turning on the load’s power was set up to drive the LED at the input of the optocoupler. You can come up with a million ways to accomplish that, so we’ll just leave that discussion aside.
Another normally-closed pushbutton was set up to remove the drive from the relay’s coil. With the first pushbutton open and idle at that moment, and the triac already off, the relay’s contacts would open and the load power would be turned off.
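The on/off latching sequence described above can be sketched as a small state model (a hypothetical simulation for illustration only, not vendor code; the timing constant comes from the 50-ms closure time mentioned above):

```python
# Minimal state model of the triac + relay latch described in the article.
# Holding the "on" pushbutton drives the optocoupler/triac; the relay coil is
# wired across the load, so once the load is energized the relay contacts
# close and latch it on. The normally-closed "off" pushbutton breaks the
# relay-coil drive, dropping the load.

RELAY_CLOSE_TIME_MS = 50  # approximate relay pull-in time from the article

def step(state, press_on=False, press_off=False, dt_ms=10):
    """Advance the simplified circuit model by dt_ms milliseconds."""
    triac_on = press_on and not press_off   # triac fires only while ON is held
    if press_off:
        state["coil_timer_ms"] = 0          # coil circuit broken: contacts open
        state["contacts_closed"] = False
    elif triac_on or state["contacts_closed"]:
        # Load (and relay coil) is energized; contacts close after pull-in time.
        state["coil_timer_ms"] += dt_ms
        if state["coil_timer_ms"] >= RELAY_CLOSE_TIME_MS:
            state["contacts_closed"] = True
    else:
        state["coil_timer_ms"] = 0
    state["load_powered"] = triac_on or state["contacts_closed"]
    return state

s = {"coil_timer_ms": 0, "contacts_closed": False, "load_powered": False}
for _ in range(6):            # hold ON for 60 ms: triac carries ~3 AC cycles
    step(s, press_on=True)
step(s)                       # ON released: load stays powered via contacts
print(s["load_powered"])      # True - relay has latched the load on
step(s, press_off=True)       # OFF pushbutton opens the coil circuit
print(s["load_powered"])      # False - load is dropped
```

The model captures the key point: the triac only conducts during the relay’s pull-in interval, after which the contacts carry the full load current.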
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Simple SSR has zero-cross on/off switching
- TRIAC Dimmer Avoids Snap-ON
- Optimizing the Triacs
- TRIAC Dimmer with Active Reset
The post Triac and relay combination appeared first on EDN.
Quad-core MPU adds AI edge to HMI apps

The Renesas 64-bit RZ/G3E microprocessor powers HMI applications with a quad-core Arm Cortex-A55 CPU and Ethos-U55 neural processing unit (NPU) for AI tasks. Running at up to 1.8 GHz, the Cortex-A55 handles both HMI and edge computing functions, while an integrated Cortex-M33 core performs real-time tasks independently of the main CPU to enable low-power operation.
With Full HD graphics and high-speed connectivity, the RZ/G3E is well-suited for industrial and consumer HMI systems, including factory equipment, medical monitors, retail terminals, and building automation. It outputs 1920×1080 video at 60 fps on two independent displays via an LVDS (dual-link) interface. MIPI-DSI and parallel RGB outputs are also available, along with a MIPI-CSI interface for video input and sensing tasks.
The microprocessor’s 1-GHz NPU delivers 512 GOPS for AI workloads such as image classification, object and voice recognition, and anomaly detection—while offloading the CPU. Power management features in the RZ/G3E reduce standby consumption by maintaining sub-CPU operation and peripheral functions at approximately 50 mW, dropping to about 1 mW in deep standby mode.
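The quoted throughput is consistent with simple MAC arithmetic (a sanity check, assuming a 256-MAC Ethos-U55 configuration, which is not stated in the announcement; each MAC is conventionally counted as two operations):

```python
# Back-of-the-envelope check of the 512-GOPS figure.
macs_per_cycle = 256   # assumed Ethos-U55 configuration (not from the article)
ops_per_mac = 2        # one multiply + one accumulate per MAC
clock_hz = 1e9         # 1-GHz NPU clock quoted in the article
gops = macs_per_cycle * ops_per_mac * clock_hz / 1e9
print(f"{gops:.0f} GOPS")  # 512 GOPS
```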
The RZ/G3E microprocessor is available now. Visit the product page below to check distributor availability.
The post Quad-core MPU adds AI edge to HMI apps appeared first on EDN.
Fuel gauges ensure accurate battery tracking

TI’s single-chip battery fuel gauges, the BQ41Z90 and BQ41Z50, extend battery runtime by up to 30% using a predictive modeling algorithm. Their adaptive Dynamic Z-Track algorithm delivers state-of-charge and state-of-health accuracy within 1%, enabling precise monitoring in battery-powered devices such as laptops and e-bikes.
The fuel gauges provide accurate battery capacity readings under varying load conditions, allowing designers to right-size batteries without overprovisioning. The BQ41Z90 integrates a fuel gauge, monitor, and protector for 3- to 16-cell Li-ion battery packs, while the BQ41Z50 supports 2 to 4 cells. Integration reduces board complexity and can shrink footprint by up to 25% compared to discrete implementations.
Each battery pack manager monitors voltage, current, temperature, available capacity, and other key parameters using integrated analog peripherals and an ultra-low-power 32-bit RISC processor. Both devices report data to the host system over an SMBus v3.2-compatible interface, while the BQ41Z90 also supports I²C. It additionally enables simultaneous current and voltage conversion for real-time power calculations and supports sense resistors as low as 0.25 mΩ.
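As a back-of-the-envelope illustration of why such a low-value shunt matters (hypothetical operating numbers, not figures from the datasheet):

```python
# With a 0.25-mOhm sense resistor, even large pack currents produce only
# millivolts across the shunt, keeping its dissipation and self-heating tiny.
R_SENSE = 0.25e-3               # ohms, minimum supported sense resistance
I_LOAD = 20.0                   # amps, hypothetical e-bike discharge current
v_shunt = I_LOAD * R_SENSE      # voltage dropped across the shunt
p_shunt = I_LOAD**2 * R_SENSE   # power wasted in the shunt
print(f"{v_shunt*1e3:.1f} mV, {p_shunt*1e3:.0f} mW")  # 5.0 mV, 100 mW
```

The small shunt voltage is also why simultaneous, well-matched current and voltage conversion matters for accurate real-time power readings.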
Pre-production quantities of the BQ41Z90 and production quantities of the BQ41Z50 are available now on TI.com. Evaluation modules, reference designs, and simulation models are also available.
The post Fuel gauges ensure accurate battery tracking appeared first on EDN.
DDR4 memory streamlines rugged system design

Teledyne’s 16-Gbyte DDR4 memory module, designated TDD416Y12NEPBM01, is screened and qualified as an Enhanced Product (EP) for high-reliability aerospace and defense systems. The solder-down device is smaller than a postage stamp, making it well-suited for space-constrained systems where performance is critical.
Rated for -40°C to +105°C operation, the module delivers 3200 MT/s (DDR4-3200, a 1600-MHz I/O clock) and integrates memory, termination, and passives in a compact 22×22-mm, 216-ball BGA package. It replaces multiple discrete components, helping to simplify board layout. An optional companion ECC chip is available for applications requiring error correction.
The TDD416Y12NEPBM01 interfaces with x64 and x72 memory buses and supports a range of processors and FPGAs, including those from Xilinx, Microchip, NXP, and Intel, as well as Teledyne’s LS1046-Space. According to Teledyne, the DDR4 module achieves 42% lower power, 42% less jitter, and a 39% peak-to-peak (PK/PK) reduction compared to conventional SODIMMs.
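For context, the quoted transfer rate translates into peak theoretical bandwidth as follows (a simple calculation, assuming the full x64 data bus without ECC):

```python
# Peak theoretical DDR4 bandwidth: transfers per second x bus width in bytes.
transfers_per_s = 3200e6   # 3200 MT/s (DDR4-3200)
bus_bytes = 64 // 8        # x64 data bus -> 8 bytes per transfer
peak_gb_s = transfers_per_s * bus_bytes / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # 25.6 GB/s
```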
To request further information on the TDD416Y12NEPBM01, click here.
The post DDR4 memory streamlines rugged system design appeared first on EDN.