News from the world of micro- and nanoelectronics
Clapp versus Colpitts

Edwin Henry Colpitts (January 19, 1872 – March 6, 1949)
James Kilton Clapp (December 3, 1897 – 1965)
The two persons above are the geniuses who gave us two classic oscillator circuits as shown in Figure 1.
Figure 1 The two classic oscillator circuits: Colpitts (left) and Clapp (right).
We’ve looked at these two oscillators individually before in “The Colpitts oscillator” and “Clapp oscillator”.
However, a side-by-side examination of the two oscillators is additional time well spent.
The Clapp oscillator was devised as an improvement over the Colpitts oscillator by adding one capacitor, C3, shown in the image above.
The amplifier “A” is nominally at a gain value of unity, but as a matter of practicality, the gain value is slightly lower than that because the amplifier is really a “follower”. If made with a vacuum tube, then “A” is a cathode follower. If made with a bipolar transistor, then “A” is an emitter follower. If made with a field effect transistor, then “A” is a source follower. The concept itself remains the same.
Each oscillator works because the RLC network develops a voltage step-up at the frequency of oscillation. The “R” is not a physical component, though; R1 and R2 simply represent the output impedance of the follower. The 10 ohms that we see here is purely an arbitrary guess on my part. The other component values are also arbitrary choices, but they are convenient for illustrating just how these little beasties work.
We use SPICE simulations to examine the transfer functions of the two RLC networks as shown in Figure 2.
Figure 2 Colpitts versus Clapp SPICE simulations of the transfer functions of the two RLC networks.
Each RLC network has a peak in its frequency response which will result in oscillation at that peak frequency. However, the peak of the Clapp circuit is much sharper and narrower than that of the Colpitts circuit. This narrowing has the beneficial effect of suppressing spectral noise centered around the oscillation frequency.
Note in the examples above that the oscillation peaks differ by 0.16% and that the reactance of the L1 inductor and the reactance of the L2 C3 pair differ by 1.12%. That’s just a matter of my having chosen some convenient numbers with the intent of having the two curves match in that regard at the same peak frequency. (I almost succeeded.)
The Clapp oscillator has several advantages over the Colpitts oscillator. The transfer-function peak of the Clapp circuit is narrower than that of the Colpitts, which tends to yield an oscillator output with less spurious off-frequency energy, meaning a “cleaner” signal.
Another advantage of the Clapp circuit is that capacitors C4 and C5 can be made very large as the L2 C3 combination is made to look like a very small inductance value at the oscillation frequency. The larger C4 and C5 values mean that any variations of those capacitance values brought about by variations of the input capacitance of the “A” stage have a minimal effect on the oscillation frequency.
That’s because frequency control of the Clapp circuit is primarily set by the series resonance of the L2 C3 pair rather than the parallel resonance of L1 versus the C1 C2 pair in the Colpitts circuit. If the “A” input capacitance tends to vary for this reason or that, the Clapp circuit is far less prone to an unwanted frequency shift as shown in Figure 3.
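The frequency-stability contrast above can be sketched numerically. The component values below are hypothetical (the article calls its own values arbitrary and doesn’t list them), chosen only so that both networks resonate near the same frequency; the point is the relative sensitivity to a small change in the follower’s input capacitance, which appears across C2 in the Colpitts network but across the much larger C5 in the Clapp network.

```python
import math

def f_res(L, C):
    """Resonant frequency of an LC pair, in Hz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def c_series(*caps):
    """Series combination of capacitors."""
    return 1.0 / sum(1.0 / c for c in caps)

# Hypothetical values for illustration only
L = 10e-6            # 10 uH, used for both L1 and L2
C1 = C2 = 1e-9       # Colpitts divider caps
C3 = 0.5e-9          # Clapp series cap
C4 = C5 = 10e-9      # Clapp divider caps, much larger than C3
dCin = 20e-12        # 20-pF variation in the follower's input capacitance

# Colpitts: L1 resonates with the series combination of C1 and C2;
# the follower's input capacitance lands in parallel with C2.
f_colpitts      = f_res(L, c_series(C1, C2))
f_colpitts_vary = f_res(L, c_series(C1, C2 + dCin))

# Clapp: frequency is dominated by the series resonance of L2 and C3;
# the same input-capacitance variation lands across the much larger C5.
f_clapp      = f_res(L, c_series(C3, C4, C5))
f_clapp_vary = f_res(L, c_series(C3, C4, C5 + dCin))

for name, f0, f1 in (("Colpitts", f_colpitts, f_colpitts_vary),
                     ("Clapp",    f_clapp,    f_clapp_vary)):
    print(f"{name}: {f0/1e6:.4f} MHz, shift {(f0 - f1)/f0*100:.4f}% for +20 pF")
```

With these values the Colpitts frequency shifts by roughly 0.5% while the Clapp shifts by only a few thousandths of a percent, echoing the comparison in Figure 3.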
Figure 3 A Clapp versus Colpitts frequency shift comparison showing how the Clapp circuit (right) is far less prone to this unwanted shift in frequency.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- The Colpitts oscillator
- Clapp oscillator
- Emitter followers as Colpitts oscillators
- Oscillator has voltage-controlled duty cycle
The post Clapp versus Colpitts appeared first on EDN.
Polar Light achieves 625nm-wavelength red pyramidal micro-LED
Passive Q filter using a mini 1:1 audio transformer with its primary and secondary coils wired in series as an inductor, in conjunction with a cap and resistors to target mid frequencies.
DigiKey Sponsors Eleckart Competition at Shaastra 2025 Annual Technical Festival
DigiKey, a leading global commerce distributor offering the largest selection of technical components and automation products in stock for immediate shipment, is proud to sponsor the Eleckart competition during the 2025 Shaastra Technical Festival in Chennai, India, from Jan. 3-7, 2025.
The Eleckart event will test students’ understanding of digital electronics and their problem-solving capabilities using a minimal set of resources. The event consists of two rounds. The first round will test participants’ knowledge of creating electronic circuit diagrams using DigiKey’s Scheme-it platform. The final round involves building circuits with actual components while managing the points deducted for each component taken. Winners will receive prizes up to ₹50,000.
The festival is hosted by the Indian Institute of Technology Madras (IITM) and showcases engineering, science and technology with competitions, lectures, exhibitions, demonstrations and workshops. Students can register for technology-related workshops focusing on the Internet of Things (IoT), rocket propulsion, Arduino, CAD for industrial designs, AI and machine learning, and quantum computing.
“DigiKey is excited to sponsor the Eleckart competition during IITM’s Shaastra Technical Festival and have a chance to connect with the 60,000 attendees that will visit the summit,” said Y.C. Wang, director of global academic programs at DigiKey. “India is one of DigiKey’s top markets and this opportunity allows us to interact with students, engineers and designers who will foster future innovations in India and around the world.”
On Jan. 4, DigiKey representatives will showcase SparkFun’s Experiential Robotics Platform (XRP) at the Eleckart event. Students can visit the DigiKey table to learn about the organization’s largest selection of technical components and about DigiKey’s tech resources such as online conversion calculators, PCB builders and design tools. Students can also receive free DigiKey PCB rulers.
The post DigiKey Sponsors Eleckart Competition at Shaastra 2025 Annual Technical Festival appeared first on ELE Times.
Industrial MCU packs EtherCAT controller

GigaDevice has introduced the GD32H75E 32-bit MCU, featuring an integrated GDSCN832 EtherCAT subdevice controller, which is also available as a standalone device. Both components target industrial automation applications, including servo control, variable frequency drives, industrial PLCs, and communication modules.
Powered by an Arm Cortex-M7 core running at up to 600 MHz, the GD32H75E microcontroller includes a DSP hardware accelerator, double-precision floating-point unit, hardware trigonometric accelerator, and filter algorithm accelerator. It also comes with 1024 KB of SRAM, up to 3840 KB of flash memory with security protection, and a 64-KB cache to enhance CPU efficiency and real-time performance.
The MCU’s integrated EtherCAT subdevice controller, licensed from Beckhoff Automation, manages EtherCAT communication, acting as an interface between the EtherCAT fieldbus and the sub-application. It includes two internal PHY ports and an external MII. With 64-bit distributed clock support, it enables synchronization with other EtherCAT devices, achieving DC synchronization accuracy to within 1 µs.
The GD32H75E MCU is available in two variants: one with two internal Ethernet PHYs and another that supports bypass mode, both housed in BGA240 packages. Samples and development boards are available now, with mass production planned for Q2 2025.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Industrial MCU packs EtherCAT controller appeared first on EDN.
Wireless audio SoC integrates AI processing

Airoha Technology’s AB1595 Bluetooth audio chip features a 6-core architecture and a built-in AI hardware accelerator. It consolidates functions typically spread across multiple chips into a single SoC and achieves Microsoft Teams Open Office certification.
The AB1595 uses AI algorithms and input from up to 10 microphones to improve speech clarity by reducing background noise. This combination allows it to accurately distinguish between the user’s voice and environmental sounds, achieving professional-grade speech quality. In noisy environments like offices and cafes, it enhances voice noise suppression from 10 dB up to 40 dB, optimizing speech quality and elevating consumer headsets to professional teleconference standards.
Real-time adaptive active noise cancellation (ANC) in the AB1595 boosts environmental noise attenuation across a wide frequency range. It detects the user’s wearing condition (e.g., fit or leakage) and adjusts compensation accordingly. Internal filters automatically adapt to both the fit and surrounding noise, balancing effective noise cancellation with comfort for a superior wearing and listening experience.
Airoha reports that the AB1595 has been adopted by customers, with products expected to be available in Q1 2025. A datasheet was not available at the time of this announcement. Contact Airoha Technology here.
The post Wireless audio SoC integrates AI processing appeared first on EDN.
85-V LED driver handles multiple topologies

Designed for automotive LED lighting systems, Diodes’ AL8866Q driver supports buck, boost, buck-boost, and single-ended primary-inductance converter (SEPIC) topologies. This DC-switching LED driver-controller operates over an input voltage range of 4.7 V to 85 V, accommodating 12-V, 24-V, and 48-V battery power rails. It is suitable for applications such as daytime running lights, high/low beams, fog lights, turn signals, and brake lights.
The AL8866Q employs a 400-kHz fixed-frequency peak current-mode control architecture. Spread spectrum frequency modulation enhances EMI performance and aids compliance with the CISPR 25 Class 5 standard.
The device enables analog or PWM dimming via its DIM pin. A 1% reference tolerance ensures better brightness control and matching between lamps. With an analog dimming range of 1% to 100%, the AL8866Q maintains ±12% output current accuracy at 20% dimming. Alternatively, PWM dimming, ranging from 0.1 kHz to 1 kHz, provides a 100:1 dynamic range.
An integrated soft-start function gradually increases the inductor and switch current, minimizing potential overvoltage and overcurrent at the output. The driver also includes an open-drain fault output to signal various fault conditions.
Prices for the AEC-Q100 Grade 1 qualified AL8866Q driver start at $0.48 each in lots of 1000 units.
The post 85-V LED driver handles multiple topologies appeared first on EDN.
PCIe Gen4 SSD delivers 6200 MB/s

The P400 V4 from Patriot Memory is a PCIe Gen 4 x4 M.2 SSD, offering read speeds up to 6200 MB/s and write speeds up to 5200 MB/s. Optimized for PC and PS5 compatibility, it provides gamers and content creators with high-speed performance and enhanced thermal management. Its compact M.2 2280 form factor makes it well-suited for space-constrained systems, including thin laptops and small form-factor PCs.
With a read speed of 6200 MB/s, the P400 V4 achieves a total bytes written (TBW) rating of 1280 TB. Available in storage capacities ranging from 500 GB to 4 TB, the drive features SmartECC technology for improved reliability. To maintain consistent peak performance during intensive operations, the P400 V4 incorporates a graphene heatshield that helps prevent thermal throttling and efficiently manages thermal output.
The P400 V4’s PCIe Gen 4 x4 controller is NVMe 2.0 compliant, offering improved performance and support for the latest features. The SSD comes with a 5-year warranty and supports Windows 7, 8.0, 8.1, 10, and 11 (drivers may be required for older versions).
The post PCIe Gen4 SSD delivers 6200 MB/s appeared first on EDN.
Porotech advances partnership with Foxconn from R&D to mass production of AR and micro-LED technology
2nd Year Electrical Engineering Student - Final Project for Solid State Electronics Class - 3 Bit Binary Sequence to Decimal Value Converter
Neo completes sale of Rare Metals facility in Quapaw, Oklahoma
The advent of co-packaged optics (CPO) in 2025

Co-packaged optics (CPO)—the silicon photonics technology promising to transform modern data centers and high-performance networks by addressing critical challenges like bandwidth density, energy efficiency, and scalability—is finally entering the commercial arena in 2025.
According to a report published in Economic Daily News, TSMC has successfully integrated CPO with advanced semiconductor packaging technologies, and sample deliveries are expected in early 2025. Next, TSMC is projected to enter mass production in the second half of 2025 with 1.6T optical transmission offerings.
Figure 1 CPO facilitates a shift from electrical to optical transmission to address the interconnect limitations such as signal interference and overheating. Source: TrendForce
The report reveals that TSMC has successfully trialled a key CPO technology—micro ring modulator (MRM)—at its 3-nm process node in close collaboration with Broadcom. That’s a significant leap from electrical to optical signal transmission for computing tasks.
The report also indicates that Nvidia plans to adopt CPO technology, starting with its GB300 chips, which are set for release in the second half of 2025. Moreover, Nvidia plans to incorporate CPO in its subsequent Rubin architecture to address the limitations of NVLink, the company’s in-house high-speed interconnect technology.
What’s CPO?
CPO is a crucial technology for artificial intelligence (AI) and high-performance computing (HPC) applications. It enhances a chip’s interconnect bandwidth and energy efficiency by integrating optics and electronics within a single package, which significantly shortens electrical link lengths.
Here, optical links offer multiple advantages over traditional electrical transmission; they lower signal degradation over distance, reduce susceptibility to crosstalk, and offer significantly higher bandwidth. That makes CPO an ideal fit for data-intensive AI and HPC applications.
Furthermore, CPO offers significant power savings compared to traditional pluggable optics, which struggle with power efficiency at higher data rates. The early implementations show 30% to 50% reductions in power consumption, claims an IDTechEx study titled “Co-Packaged Optics (CPO): Evaluating Different Packaging Technologies.”
This integration of optics with silicon—enabled by advancements in chiplet-based technology and 3D-IC packaging—also reduces signal degradation and power loss and pushes data rates to 1.6T and beyond.
Figure 2 Optical interconnect technology has been gaining traction due to the growing need for higher data throughput and improved power efficiency. Source: IDTechEx
Heterogeneous integration, a key ingredient in CPO, enables the fusion of the optical engine (OE) with switch ASICs or XPUs on a single package substrate. Here, the optical engine includes both photonic ICs and electronic ICs. CPO packaging generally employs two approaches: the first involves packaging the optical engine itself, and the second focuses on system-level integration of the optical engine with ICs like ASICs or XPUs.
A new optical computing era
TSMC’s approach involves integrating CPO modules with advanced packaging technologies such as chip-on-wafer-on-substrate (CoWoS) or system-on-integrated-chips (SoIC). It eliminates the speed limitations of traditional copper interconnects and puts TSMC at the forefront of a new optical computing era.
However, challenges such as low yield rates in CPO module production might lead TSMC to outsource some optical-engine packaging orders to other advanced packaging companies. This shows that the complex packaging process behind CPO will inevitably require a lot of fine-tuning before commercial realization.
Still, it’s a breakthrough that highlights a tipping point for AI and HPC performance, wrote Jeffrey Cooper in his LinkedIn post. Cooper, a former sourcing lead for ASML, also sees a growing need for cross-discipline expertise in photonics and semiconductor packaging.
Related Content
- Optical interconnects draw skepticism, scorn
- TSMC crunch heralds good days for advanced packaging
- Intel and FMD’s Roadmap for 3D Heterogeneous Integration
- Heterogeneous Integration and the Evolution of IC Packaging
- CEA-Leti Develops Active Optical Interposers to Connect Chiplets
- Road to Commercialization for Optical Chip-to-Chip Interconnects
The post The advent of co-packaged optics (CPO) in 2025 appeared first on EDN.
Tech Data and Dell Technologies Sign MoU to Drive AI Adoption through Dell AI Factory
Collaboration delivers comprehensive AI solutions portfolio, backed by partner support through Tech Data’s Destination AI program
Tech Data, a TD SYNNEX Company, and Dell Technologies have signed a Memorandum of Understanding (MoU) to enable the Dell AI Factory in India, a one-stop platform that offers products, solutions and services to accelerate AI adoption across industries.
With this agreement, Tech Data and Dell will establish a Center of Excellence for Dell to showcase use cases and product demonstrations. They will also collaborate with leading Independent Software Vendors (ISVs) to deliver pre-validated, end-to-end AI solutions. These offerings seamlessly combine Dell’s advanced hardware with specialized software, simplifying AI deployments for partners and empowering them to engage customers confidently and address evolving market needs.
Tech Data’s Destination AI program will further support partners with training, technical guidance, and pre- and post-sales services, accelerating their AI readiness and driving sustainable business growth.
“We are excited to strengthen our partnership with Dell Technologies and introduce the Dell AI Factory to Channel Partners,” said Sundaresan K., Vice President and Country General Manager, Tech Data Advanced (India) Private Limited. “India’s AI market is expanding rapidly, and Partners are eager to capitalize on the immense opportunities it presents. The Dell AI Factory, combined with our Destination AI program, is designed to equip them with the advanced tools and capabilities they need to meet this growing demand and deliver cutting-edge AI solutions to their customers.”
To further strengthen the AI ecosystem, Tech Data will onboard additional ISVs, enhancing the Dell AI Factory with specialized software solutions that complement Dell’s technology. This will ensure greater adaptability to the unique needs of various industries.
“At Dell Technologies, we are committed to driving innovation that simplifies and accelerates technology adoption,” said Vivek Malhotra, Senior Director & General Manager, India Channels, Dell Technologies. “Our collaboration with Tech Data to launch the Center of Excellence in India underscores this commitment, offering channel partners a robust platform to deliver tailored AI solutions seamlessly. By combining our expertise through Dell AI Factory and advanced hardware solutions, we are equipping our partners with the tools and expertise necessary to address diverse industry challenges and unlock new growth opportunities in the AI era.”
The post Tech Data and Dell Technologies Sign MoU to Drive AI Adoption through Dell AI Factory appeared first on ELE Times.
Amazon Sidewalk: The first STM32-qualified devices are already making a difference. Check out this customer testimonial!
For the first time, Nucleo boards housing an STM32WBA5 and an STM32WLx5 received the Amazon Sidewalk certification, thus guaranteeing these STM32 MCUs will offer robust integration, high efficiency, and trusted security when deployed on an Amazon Sidewalk network. We are even showing how Subeca, an end-to-end water management platform in the United States, leveraged these STM32 devices to obtain its Amazon Sidewalk qualification, thus ensuring its customers can benefit from this vast and secure network to create a cost-effective and scalable solution for water metering and pressure management IoT systems.
What is Amazon Sidewalk?
The idea behind Amazon Sidewalk is elegantly simple: use Internet-connected devices like Amazon Echos or some Ring Floodlight and Spotlight Cams, which serve as Amazon Sidewalk Bridges, to create a low-bandwidth, low-power wireless network by piggybacking on a tiny amount of the Bridges’ bandwidth (80 Kbps). An Amazon Sidewalk device can thus connect to a Sidewalk Bridge using Bluetooth, securely joining the network and benefiting from the Internet. Moreover, once an Amazon Sidewalk end device is provisioned to the network via Bluetooth LE, it can rely on the long-range connectivity of the STM32WL5 to extend the network coverage over vast distances.
Amazon Sidewalk is free to use and simplifies operations. If a Sidewalk Bridge loses its Wi-Fi connection, Amazon’s technology can initiate a reconnection to the router without the user’s intervention. Bandwidth is also very low, and data usage is minimal and capped at 500 MB a month, meaning that even customers with a constrained Internet connection won’t feel its impact. Moreover, Amazon has numerous encryption and secure mechanisms to keep data private and safe. Hence, it’s possible to use Amazon Sidewalk for logistic, personal, or pet tracking, beyond-the-fence asset monitoring, smart irrigation systems, healthcare monitoring, or, as Subeca demonstrates, for more demanding applications like utilities monitoring on a national scale, as the Sidewalk coverage map suggests.

As of today, boards featuring the STM32WBA5, STM32WL5, and STM32WLE5 have received the Amazon Sidewalk qualification. The STM32WBA5 offers a Cortex-M33, a Bluetooth LE 5.4 transceiver, and can target a SESIP Level 3 certification, while the STM32WLx5 devices use a Cortex-M4 and a sub-GHz radio. Engineers might choose an STM32WBA55 and an STM32WLE5 to optimize memory usage or an STM32WBA55 and an STM32WL55 for the greater flexibility this configuration affords.
Concretely, the STM32WBA5 talks directly to the Amazon Sidewalk Bridge using a Bluetooth LE connection. And in some instances, that’s all the system needs. However, when networking multiple end nodes over large distances, as in the case of Subeca, it’s necessary to use the STM32WL5 to talk to devices using CSS (chirp spread spectrum, as used by LoRa) or FSK modulation, depending on the distance and frequency range engineers wish to target.


To help developers jumpstart their projects, ST is offering software packages that help implement a network stack that easily interacts with Amazon Sidewalk. This dramatically simplifies the connection to the network, the integration of security features into the application, and the onboarding process. Put simply, while the Amazon Sidewalk qualification guarantees that ST devices will provide the required reliability and safety, it is also a testament to our partnership with Amazon and our desire to help engineers take advantage of this technology.
Real-world applications
The qualification and partnership between Amazon and ST means that partners like Subeca can focus on showcasing their expertise and distinguishing their products from the competition instead of spending resources solving networking challenges. As Patrick Keaney, CEO of Subeca, explained:
“Our focus is on innovating and simplifying solutions that solve real-world challenges in the water market. We believe technology like advanced metering, leak detection, and pressure monitoring should be available to all water utilities everywhere, regardless of size. That means wireless connectivity is a must. ST’s STM32WBA5 and STM32WL5/STM32WLE5 wireless microcontrollers enabled us to bring our first Amazon sidewalk-qualified products to the market with great architectural flexibility, performance, low-power consumption in a cost-effective manner with meaningful device longevity and robust and resilient supply chain. Leveraging ST’s expansive device portfolio and ecosystem coupled with great technical support, ST offered us quality technical ingredients, ease-of-use, and portability required to transform our vision into reality.”

Avnet also showcased an Amazon Sidewalk demo at AWS re:Invent 2024 featuring an STM32WBA5, an STM32WL55, and Avnet’s IoTConnect platform to handle the onboarding, device management, and data integration with AWS. Avnet’s solution is often a darling at ST Technology Tours because it vastly simplifies the creation of IoT systems by handling some of the most complex development operations. Put simply, the demo is one of the best examples of how ST, Amazon Sidewalk, and a member of the ST Partner Program can come together to make a difference in the operations of a company trying to take part in the IoT revolution.
Why it matters
Interconnecting a myriad of small devices to each other and the Internet has always been the IoT dream. The challenge is that building a new infrastructure from scratch is expensive, and without massive adoption, it would never reach critical mass. Amazon Sidewalk solves this issue by utilizing existing Echo devices and other Bridges connected to a router. By leveraging existing installations, the network is already in place. And by enabling product makers and customers to use it for free, it significantly lowers the barrier to entry.
Additionally, Amazon Sidewalk handles many of the complexities associated with such a network, from security to over-the-air updates. This also explains the company’s qualification program: to protect all participants in the ecosystem, Amazon authorizes which devices may connect to its network. By qualifying STM32 microcontrollers, Amazon ensures that its partners use trusted devices that will run the network stack reliably and implement security features according to strict standards.
The post Amazon Sidewalk: The first STM32-qualified devices are already making a difference. Check out this customer testimonial! appeared first on ELE Times.
let my intrusive thoughts get a little carried away with a dead computer
I made resistor color code calculator
PWM power DAC incorporates an LM317

Instead of the conventional approach of backing up a DAC with an amplifier to boost output, this design idea charts a less-traveled path to power. It integrates an LM317 positive regulator with a simple 8-bit PWM DAC topology to obtain a robust 11-V, 1.5-A capability. It thus preserves simplicity while exploiting the built-in fault protection features (thermal and overload) of that time-proven Robert Dobkin masterpiece. Its output accuracy rests on the guaranteed 2% precision of the LM317’s internal voltage reference, making it securely independent of the vagaries of both the 5-V logic supply rail and the incoming raw DC supply.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 diagrams how it works.
Figure 1 LM317 regulator melds with HC4053 CMOS switch to make a 16-W PWM power DAC.
CMOS SPDT switches U1b and U1c accept a 10-kHz PWM signal to generate a 0-V to 9.75-V “ADJ” control signal for the U2 regulator via the feedback network R1, R2, and R3. The incoming PWM signal is AC-coupled so that U1 can “float” on U2’s output. U1c provides an inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.” Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy.
This feedback arrangement does, however, make the output voltage a nonlinear function of PWM duty factor (DF) as given by:
Vout = 1.25 / (1 – DF(1 – R1/(R1 + R2)))
= 1.25 / (1 – 0.885*DF)
This is graphed in Figure 2.
Figure 2 The Vout (1.25 V to 11 V) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.885*DF).
Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any given Vout.
Figure 3 The inverse of Figure 2 where PWM DF = (1 – 1.25/Vout)/0.885.
The corresponding 8-bit PWM setting works out to: Dbyte = 255 × (1 – 1.25/Vout) / 0.885
Vfullscale = 1.25 / (R1/(R1 + R2)), so design choices other than 11 V are available. 11 V is the maximum consistent with HC4053’s ratings, but up to 20 V is feasible if the metal gate CD4053B is substituted for U1. Don’t forget, however, the requirement that R3 = R1||R2.
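As a quick sanity check on the math above, here is a minimal Python sketch of the duty-factor relationships, using the article’s Vref = 1.25 V and the 0.885 factor, i.e., 1 – R1/(R1 + R2):

```python
VREF = 1.25  # LM317 internal reference, volts
K = 0.885    # 1 - R1/(R1 + R2), per the article's component choices

def vout(df):
    """Output voltage for a PWM duty factor df in [0, 1]."""
    return VREF / (1.0 - K * df)

def duty(v):
    """PWM duty factor required for a target output voltage."""
    return (1.0 - VREF / v) / K

def dbyte(v):
    """8-bit PWM setting (0-255) for a target output voltage."""
    return round(255 * duty(v))

print(vout(0.0))   # 1.25 V at 0% duty (Figure 2's left endpoint)
print(vout(1.0))   # ~10.87 V at 100% duty, the ~11-V full scale
print(dbyte(5.0))  # 8-bit code for a 5-V output
```

The nonlinearity is visible in the endpoints: the first half of the duty range only raises Vout from 1.25 V to about 2.2 V, with most of the swing packed into the upper duty factors, which is exactly the curvature shown in Figure 2.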
The supply rail V+ can be anything from a minimum of Vfullscale+3V to accommodate U2’s minimum headroom dropout requirement, up to the LM317’s absmax 40-V limit. DAC accuracy will be unaffected due to this chip’s excellent PSRR, although of course efficiency may suffer.
U2 should be heatsunk as dictated by its dissipation: the required output current multiplied by the V+-to-Vout differential. Up to double-digit watts is possible at high currents and low Vout.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- Cancel PWM DAC ripple with analog subtraction but no inverter
The post PWM power DAC incorporates an LM317 appeared first on EDN.
2024: A year’s worth of interconnected themes galore

As any of you who’ve already seen my precursor “2025 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2024: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022 and 2023 (we skipped 2020).
As I’ve done in past years, I thought I’d start by scoring the topics I wrote about a year ago in forecasting the year to come:
- Increasingly unpredictable geopolitical tensions
- The 2024 United States election
- Windows (and Linux) on Arm
- Declining smartphone demand, and
- Internal and external interface evolutions
Maybe I’m just biased but I think I nailed ‘em all, albeit with varying degrees of impactfulness. To clarify, by the way, it’s not that whether the second one would happen was difficult to predict; it’s the outcome, which I discussed a month back, that was unclear at the time. In the sections that follow, I’m going to elaborate on one of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).
Battery transformations
I’ve admittedly written quite a lot about lithium-based batteries and the devices they fuel over the past year, as I suspect I’ll also be doing in the year(s) to come. Why? My introductory sentence to a recent teardown of a “vape” device answers that question, I think:
The ever-increasing prevalence of lithium-based batteries in various shapes, sizes and capacities is creating a so-called “virtuous circle”, leading to lower unit costs and higher unit volumes which encourage increasing usage (both in brand new applications and existing ones, the latter as a replacement for precursor battery technologies), translating into even lower unit costs and higher unit volumes that…round and round it goes.
Call me simple-minded (as some of you already may have done a time or few over the years!) but I consistently consult the same list of characteristics and tradeoffs among them when evaluating various battery technologies…a list that was admittedly around half its eventual length when I first scribbled it on a piece of scrap paper a few days ago, until I kept thinking of more things to add in the process of keyboard-transcribing it (thereby eventually encouraging me to delete the “concise” adjective I’d originally used to describe it)!
- Volume manufacturing availability, translating to cost (as I allude to in the earlier quote)
- Form factor implementation flexibility (or not)
- The required dimensions and weight for a given amount of charge-storage capacity
- Both peak and sustained power output
- The environmental impacts of raw materials procurement, battery manufacturing, and eventual disposal (or, ideally, recycling)
- Speaking of “environmental”, the usable operating temperature range, along with tolerance to other environment variables such as humidity, shock and vibration
- And recharge speed (both to “100% full” and to application-meaningful percentages of that total), along with the number of recharge cycles the battery can endure until it no longer can hold enough anode electrons to be application-usable in a practical sense.
Although plenty of lithium battery-based laptops, smartphones and the like are sold today, a notable “driver” of incremental usage growth in the first half of this decade (and beyond) has been various mobility systems—battery-powered drones (and, likely in the future, eVTOLs), automobiles and other vehicles, untethered robots, and watercraft (several examples of which I’ll further elaborate on later in this writeup, for a different reason). Here, the design challenges are quite interconnected and otherwise complex, as I discussed back in October 2021:
Li-ion battery technology is pretty mature at this point, as is electric motor technology, so in the absence of a fundamental high-volume technology breakthrough in the future, to get longer flight time, you need to include bigger batteries…which leads to what I find most fundamentally fascinating about drones and their flying kin: the fundamental balancing act of trading off various contending design factors that is unique to the craft of engineering (versus, for example, pure R&D or science). Look at what I’ve just said. Everyone wants to be able to fly their drone as long as possible, before needing to land and swap out battery packs. But in order to do so, that means that the drone manufacturer needs to include larger battery cells, and more of them.
Added bulk admittedly has the side benefit of making the drone more tolerant of wind gusts, for example, but fundamentally, the heavier the drone the beefier the motors need to be in order to lift it off the ground and fly it for meaningful altitudes, distances, and durations. Beefier motors burn more juice, which begs for more batteries, which make the drone even heavier…see the quagmire? And unlike with earth-tethered electricity-powered devices, you can’t just “pull over to the side of the road” if the batteries die on you.
Now toss in the added “twist” that everyone also wants their drone to be as intelligent as possible so it doesn’t end up lost or tangled in branches, and so it can automatically follow whatever’s being videoed. All those image and other sensors, along with the intelligence (and memory, and..) to process the data coming off them, burns juice, too. And don’t forget about the wireless connectivity between the drone and the user—minimally used for remote control and analytics feedback to the user…How do you balance all of those contending factors to come up with an optimum implementation for your target market?
Although the previous excerpt was specifically about drones, many of the points I raised are also relevant at least to a degree in the other mobility applications I mentioned. That said, an electric car’s powerplant size and weight constraints aren’t quite as acute as an airborne system’s might be, for example. This application-defined characteristics variability, both in an absolute sense and relative to others on my earlier list, helps explain why, as Wikipedia points out, “there are at least 12 different chemistries of Li-ion batteries” (with more to come). To wit, developers are testing out a diversity of both anode and cathode materials (and combinations of them), increasingly aided by AI (which I’ll also talk more about later in this piece) in the process, along with striving to migrate away from “wet” electrolytes, which among other things are flammable and prone to leakage, toward safer solid-state approaches.
Another emerging volume-growth application, as I highlighted throughout the year, is battery generators, most frequently showcased by me in their compact portable variants. Here, while form factor and weight remain important, since the devices need to be hauled around by their owners, they’re stationary while in use. Extrapolate further and you end up with even larger home battery-backup banks that never get moved once installed. And extrapolate even further, to a significant degree in fact, and you’re now talking about backup power units for hospitals, for example, or even electrical grid storage for entire communities or regions. One compelling use case is to smooth out the inherent availability variability of renewable energy sources such as solar and wind, among other reasons to “feed” the seemingly insatiable appetites of AI workload-processing data centers in a “green”-as-possible manner. And in all these stationary-backup scenarios, installation space is comparatively abundant and weight is also of lesser concern; the primary selection criteria are factors such as cost, invulnerability, and longevity.
As such, non-lithium-based technologies will likely become increasingly prominent in the years to come. Sodium-ion batteries (courtesy of, in part, sodium’s familial proximity to lithium in the Periodic Table of Elements) are particularly near-term promising; you can already buy them on Amazon! The first US-based sodium-ion “gigafactory” was recently announced, as was the US Department of Energy’s planned $3 billion in funding for new sodium-ion (and other) battery R&D projects. Iron-based batteries such as the mysteriously named (but not so mysterious once you learn how they work) iron-air technology tout raw materials abundance (how often do you come across rust, after all?) translating into low cost. Vanadium-based “flow” batteries also hold notable promise. And there’s one other grid-scale energy storage candidate with an interesting twist: old EV batteries. They may no longer be sufficiently robust to reliably power a moving vehicle, but stationary backup systems still provide a resurrecting life-extension opportunity.
For ongoing information on this topic, in addition to my and colleagues’ periodic coverage, market research firm IDTechEx regularly publishes blog posts on various battery technology developments which I also commend to your inspection. I have no connection with the firm aside from being a contented consumer of their ongoing information output!
Drones as armaments

As a kid, I was intrigued by the history of warfare. Not (at all) the maiming, killing and other destruction aspects, mind you, instead the equipment and its underlying technologies, their use in conflicts, and their evolutions over time. Three related trends that I repeatedly noticed were:
- Technologies being introduced in one conflict and subsequently optimized (or in other cases disbanded) based on those initial experiences, with the “success stories” then achieving widespread use in subsequent conflicts
- The oft-profound advantages that adopters of new successful warfare technologies (and equipment and techniques based on them) gained over less-advanced adversaries who were still employing prior-generation approaches, and
- That new technology and equipment breakthroughs often rapidly obsoleted prior-generation warfare methods
Re point #1, off the top of my head, there’s (with upfront apologies for any United States centricity in the examples that follow):
- Chemical warfare, considered (and briefly experimented with) during the US Civil War, with widespread adoption beginning in World War I (WWI)
- Airplanes and tanks, introduced in WWI and extensively leveraged in WWII (and beyond)
- Radar (airplanes), sonar (submarines) and other electronic surveillance, initially used in WWII with broader implementation in subsequent wars and other conflicts
- And RF and other electronics-based communications methods, including cryptography (and cracking), once again initiated in WWII
And to closely related points #2 and #3, two WWII examples come to mind:
- I still vividly recall reading as a kid about how the Polish army strove, armed with nothing but horse cavalry, to defend against invading German armored brigades, although the veracity of at least some aspects of this propaganda-tainted story are now in dispute.
- And then there was France’s Maginot Line, a costly “line of concrete fortifications, obstacles and weapon installations built by France in the 1930s” ostensibly to deter post-WWI aggression by Germany. It was “impervious to most forms of attack” across the two countries’ shared border, but the Germans instead “invaded through the Low Countries in 1940, passing it to the north”. As Wikipedia further explains, “The line, which was supposed to be fully extended further towards the west to avoid such an occurrence, was finally scaled back in response to demands from Belgium. Indeed, Belgium feared it would be sacrificed in the event of another German invasion. The line has since become a metaphor for expensive efforts that offer a false sense of security.”
I repeatedly think of case studies like these as I read about how the Ukrainian armed forces are, both in the air and sea, now using innovative, often consumer electronics-sourced approaches to defend against invading Russia and its (initially, at least) legacy warfare techniques. Airborne drones (more generally: UAVs, or unmanned aerial vehicles) have been used for surveillance purposes since at least the Vietnam War as alternatives to satellites, balloons, manned aircraft and the like. And beginning with aircraft such as the mid-1990s Predator, UAVs were also able to carry and fire missiles and other munitions. But such platforms were not only large and costly, but also remotely controlled, not autonomous to any notable degree. And they weren’t in and of themselves weapons.
That’s all changed in Ukraine (and elsewhere, for that matter) in the modern era. In part hamstrung by its allies’ constraints on what missiles and other weapons it was given access to and how and where they could be used, Ukraine has broadened drones’ usage beyond surveillance into innate weaponry, loading them up with explosives and often flying them hundreds of miles for subsequent detonation, including all the way to Moscow. Initially, Ukraine retrofit consumer drones sourced from elsewhere, but it now manufactures its own UAVs in high volumes. Compared to their Predator precursors, they’re compact, lightweight, low cost and rugged. They’re increasingly autonomous, in part to counteract Russian jamming of wireless control signals coming from their remote operators. They can even act as flamethrowers. And as the image shown at the beginning of this section suggests, they not only fly but also float, a key factor in Ukraine’s to-date success both in preventing a Russian blockade of the Black Sea and in attacking Russia’s fleet based in Crimea.
AI (again, and again, and…)

AI has rapidly grown beyond its technology-coverage origins and into the daily clickbait headlines and chyrons of even mainstream media outlets. So it’s probably no surprise that this particular TLA (with “T” standing for “two” this time, versus the usual) is a regular presence in both my end-of-year and next-year-forecast writeups, along with plenty of ongoing additional AI coverage in-between each year’s content endpoints. A month ago, for example, I strove to convince you that multimodal AI would be ascendant in the year(s) to come. Twelve months ago, I noted the increasing importance of multimodal models’ large language model (LLM) precursors over the prior year, and the month(-ish) before that, I’d forecasted that generative AI would be a big deal in 2023 and beyond. Lather, rinse and repeat.
What about the past twelve months; what are the highlights? I could easily “write a book” on just this topic (as I admittedly almost already did earlier re “Battery Transformations”). But with the 3,000-word count threshold looming, and always mindful of Aalyia’s wrath (I kid…maybe…), I’ll strive to practice restraint in what follows. I’m not, for example, going to dwell on OpenAI’s start-of-year management chaos and ongoing key-employee-shedding, nor on copyright-infringement lawsuits brought against it and its competitors by various content-rights owners…or for that matter, on lawsuits brought against it and its competitors (and partners) by other competitors. Instead, here’s some of what else caught my eye over the past year:
- Deep learning models are becoming more bloated with the passage of time, despite floating point-to-integer conversion, quantization, sparsity and other techniques for trimming their size. Among other issues, this makes it increasingly infeasible to run them natively (and solely) on edge devices such as smartphones, security cameras and (yikes!) autonomous vehicles. Imagine (a theoretical case study, mind you) being unable to avoid a collision because your car’s deep learning model is too dinky to cover all possible edge and corner cases and a cloud-housed supplement couldn’t respond in time due to server processing and network latency-and-bandwidth induced delays…
- As the models themselves grow, the amount of processing horsepower (not to mention consumed power) and time needed to train them increases as well…exponentially so.
- Resource demands for deep learning inference are also skyrocketing, especially as the trained models referenced become more multimodal and otherwise complex, not to mention the new data the inference process is tasked with analyzing.
- And semiconductor supplier NVIDIA today remains the primary source of processing silicon for training, along with (to a lesser but still notable market segment share degree) inference. To the company’s credit, decades after kicking off its advocacy of general-purpose graphics processing (GPGPU) applications, its longstanding time, money and headcount investments have borne big-time fruit for the company. That said, competitors (encouraged by customers aspiring for favorable multi-source availability and pricing outcomes) continue their pursuit of the “Green Team”.
- To my earlier “consumed power” comments, along with my even earlier “seemingly insatiable appetites of AI workload-processing data centers” comments, and as my colleague (and former boss) Bill Schweber also recently noted, “AI-driven datacenter energy demand could expand 160 percent over the next two years, leaving 40 percent of existing facilities operationally constrained by power availability,” to quote recent coverage in The Register. In response to this looming and troubling situation, in the last few days alone I’ve come across news regarding Amazon (“Amazon AI Data Centers To Double as Carbon Capture Machines”) and Meta (“Meta wants to use nuclear power for its data centers”). Plenty of other recent examples exist. But will they arrive in time? And will they only accelerate today’s already worrying global warming pace in the process?
- But, in spite of all of this spiraling “heavy lifting”, researchers continue to conclude that AI still doesn’t have a coherent understanding of the world, not to mention that the ROI on ongoing investments in what AI can do may be starting to level off (at least to some observers, albeit not a universally held opinion).
- One final opinion: deep learning models are seemingly already becoming commodities, a trend aided in part by increasingly capable “open” options (although just what “open” means has no shortage of associated controversy). If I’m someone like Amazon, Apple, Google, Meta or Microsoft, whose deep learning investments reap returns in associated AI-based services and whose models are “buried” within these services, this trend isn’t so bad. Conversely, however, for someone whose core business is in developing and licensing models to others, the long-term prognosis may be less optimistic, no matter how rosy (albeit unprofitably so) things may currently seem to be. Heck, even AMD and NVIDIA are releasing open model suites of their own nowadays…
I’m writing this in early December 2024. You’ll presumably be reading it sometime in January 2025. I’ll split the difference and wrap up by first wishing you all a Happy New Year!
As usual, I originally planned to cover a number of additional topics in this piece, such as (in no particular order save for how they came out of my noggin):
- Matter and Thread’s misfires and lingering aspirations
- Much discussed (with success reality to follow?) chiplets
- Plummeting-cost solar panels
- Iterative technology-related constraints on China (and its predictable responses), and
- Intel’s ongoing, deepening travails
But (also) as usual I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having now passed through 3,000 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts to come in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- 2023: Is it just me, or was this year especially crazy?
- A tech look back at 2022: We can’t go back (and why would we want to?)
- A 2021 technology retrospective: Strange days indeed
- 10 consumer technology breakthroughs from 2019
- 2025: A technology forecast for the year ahead
The post 2024: A year’s worth of interconnected themes galore appeared first on EDN.
Hack - converted a passive 40w sub woofer into a powered Bluetooth & Aux, thrift store finds
Found at a thrift store: a passive subwoofer for $2, and a small powered Bluetooth & Aux speaker system that was “broken”, all for $10. Crab Rave sounds real good too!!!
Ternary gain-switching 101 (or 10202, in base 3)

This design idea is centered on the humble on/off/on toggle switch, which is great for selecting something/nothing/something else, but can be frustrating when three active options are needed. One possibility is to use the contacts to connect extra, parallel resistors across a permanent one (for example), but the effect is something like low/high/medium, which just looks wrong.
That word “active” is the clue to making the otherwise idle center position do some proper work, like helping to control an op-amp stage’s gain, as shown in Figure 1.
Figure 1 An on/off/on switch gives three gain settings in a non-inverting amplifier stage and does so in a rational order.
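As a numeric sketch of the principle (the topology and resistor values below are my assumptions for illustration, not taken from the schematic): a non-inverting stage has gain 1 + Rf/Rg, and the switch can parallel a trim resistor across either Rf (lowering gain) or Rg (raising gain), leaving the idle center position to deliver the middle setting:

```python
# Hypothetical illustration of the Figure 1 idea: an on/off/on switch
# parallels a trim resistor across Rf (one position), does nothing
# (center), or across Rg (other position), so the three gains come out
# low/medium/high in a rational order. All values are assumed.

def par(a, b):
    """Parallel combination of two resistors."""
    return a * b / (a + b)

RF, RG, RTRIM = 20e3, 10e3, 20e3  # assumed values, ohms

def gain(position):
    rf, rg = RF, RG
    if position == "down":    # trim across the feedback resistor: less gain
        rf = par(RF, RTRIM)
    elif position == "up":    # trim across the ground-leg resistor: more gain
        rg = par(RG, RTRIM)
    return 1 + rf / rg        # non-inverting stage gain

for pos in ("down", "center", "up"):
    print(pos, round(gain(pos), 2))
```

With these particular values the three settings land on gains of 2, 3, and 4, with the switch's center-off position sitting, as desired, in the middle.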
Wow the engineering world with your unique design: Design Ideas Submission Guide
I’ve used this principle many times, but can’t recall having seen it in published circuits, and think it’s novel, though it may be so commonplace as to be invisible. It’s certainly obvious when you think about it.
A practical application

That’s the basic idea, but it’s always more satisfying to convert such ideas into something useful. Figure 2 illustrates just that: an audio gain-box whose amplification is switched in a ternary sequence to give precise 1-dB steps from 0 to +26 dB. As built, it makes a useful bit of lab kit.
Figure 2 Ternary switching over three stages gives 0–26 dB gain in precise 1-dB steps.
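The arithmetic behind the ternary sequence can be sketched as follows (the per-stage weights of 1, 3, and 9 dB are my inference from the stated 0-to-+26-dB range in 1-dB steps, i.e., 3^3 = 27 states, not values read off the schematic):

```python
# Three cascaded stages, each with an on/off/on switch selecting 0x, 1x,
# or 2x of its stage's dB weight. Assumed weights 1/3/9 dB make each
# switch combination a ternary digit string, covering 0..26 dB.
from itertools import product

STAGE_DB = (1, 3, 9)  # assumed per-stage weights, dB

def total_gain_db(positions):
    """positions: one entry per switch, each 0, 1, or 2."""
    return sum(p * w for p, w in zip(positions, STAGE_DB))

# Every ternary combination yields a distinct gain: 0..26 dB, no gaps.
all_gains = sorted(total_gain_db(p) for p in product((0, 1, 2), repeat=3))
print(all_gains == list(range(27)))
```

Three switches thus cover 27 settings, which is the appeal over a binary arrangement needing five switches for a comparable span.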
Three gain stages are concatenated, each having its own switch. C1 and C2 isolate any DC, and R1 and R12 are “anti-click” resistors, ensuring that there’s no stray voltage on the input or output when something gets plugged in. A1d is the usual rail-splitter, allowing use on a single, isolated supply.
The op-amps are shown as common-or-garden TL074/084s. For lower noise and distortion, (a pair of) LM4562s would be better, though they take a lot more current. With a 5-V supply, the MCP6024 is a good choice. For stereo use, just duplicate almost everything and use double-pole switches.
All resistor values are E12/24 for convenience. The resistor combinations shown match the ideal, calculated values to well within the assumed 1% tolerance of the actual parts, and give a better match than single E96 values would in the same positions.
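A quick way to find such combinations (a sketch only; the target value below is arbitrary, not one of the values used in Figure 2) is to search series pairs of E24 values for the closest match to the calculated ideal:

```python
# Search series pairs of E24 resistors for the best match to an "ideal"
# computed value. The target is arbitrary; the point is that a well-
# chosen pair can sit far inside the parts' own 1% tolerance band.
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]

def e24_values(decades=(100, 1000, 10000)):
    """All E24 values across a few decades, in ohms."""
    return [round(m * d) for d in decades for m in E24]

def best_series_pair(target):
    """Return the E24 pair whose series sum is closest to target."""
    vals = e24_values()
    return min(((a, b) for a in vals for b in vals if b >= a),
               key=lambda p: abs(p[0] + p[1] - target))

a, b = best_series_pair(12_589)          # arbitrary "ideal" value, ohms
print(a, b, abs(a + b - 12_589) / 12_589)  # relative error, well under 1%
```

The same search, swapped to parallel combinations, works when a series pair is inconvenient to lay out.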
Other variations on the theme

The circuit of Figure 2 could also be built for DC use but would then need low-offset op-amps, especially in the last stage. (Omit C1, C2, and other I/O oddments, obviously.)
Figure 1 showed the non-inverting version, and Figure 3 now employs the idea in an inverting configuration. Beware of noise pick-up at the virtual-earth point, the op-amp’s inverting input.
Figure 3 An inverting amplifier stage using the same switching principle.
The same scheme can also be used to make an attenuator, and a basic stage is sketched in Figure 4. Its input resistance changes depending on the switch setting, so an input buffer is probably necessary; buffering between stages and of the output certainly is.
Figure 4 A single attenuation stage with three switchable levels.
Conclusion: back to binary basics

You’ve probably been wondering, “What’s wrong with binary switching?” Not a lot, except that it uses more op-amps and more switches while being rather obvious and hence less fun.
Anyway, here (Figure 5) is a good basic circuit to do just that.
Figure 5 Binary switching of gain from 0 to +31 dB, using power-of-2 steps. Again, the resistor combinations match the theoretical values to well within the parts’ assumed 1% tolerance.
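For comparison with the ternary version, the binary arithmetic can be sketched the same way (the stage weights of 1, 2, 4, 8, and 16 dB are my inference from the stated 0-to-+31-dB range, not values read off the schematic):

```python
# Five binary-weighted stages (assumed 1/2/4/8/16 dB from the stated
# 0..+31 dB range): 2^5 = 32 states needs five switches and stages,
# where the ternary version's 3^3 = 27 states needs only three.
from itertools import product

BINARY_DB = (1, 2, 4, 8, 16)  # assumed per-stage weights, dB

gains = sorted(sum(p * w for p, w in zip(pos, BINARY_DB))
               for pos in product((0, 1), repeat=5))
print(gains == list(range(32)))  # 0..31 dB in 1-dB steps
```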
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- To press on or hold off? This does both.
- Latching power switch uses momentary pushbutton
- A new and improved latching power switch
- Latching power switch uses momentary-action pushbutton
The post Ternary gain-switching 101 (or 10202, in base 3) appeared first on EDN.