AI is stress-testing processor architectures and RISC-V fits the moment

Every major computing era has been defined not by technology, but by a dominant workload—and by how well processor architectures adapted to it.
The personal computer era rewarded general-purpose flexibility, allowing x86 to thrive by doing many things well enough. The mobile era prioritized energy efficiency above all else, enabling Arm to dominate platforms where energy, not raw throughput, was the limiting factor.
AI is forcing a different kind of transition. It’s not a single workload. It’s a fast-moving target. Model scale continues to expand through sparse and mixture-of-experts techniques that stress memory bandwidth and data movement as much as arithmetic throughput. Model architectures have shifted from convolutional networks to recurrent models to transformers and continue evolving toward hybrid and emerging sequence-based approaches.
Deployment environments span battery-constrained edge devices, embedded infrastructure, safety-critical automotive platforms, and hyperscale data centers. Processing is spread across a combination of GPUs, CPUs, and NPUs where compute heterogeneity is a given.
The timing problem
Modern AI workloads demand new operators, execution patterns, precision formats, and data-movement behaviors. Supporting them requires coordinated changes across instruction sets, microarchitectures, compilers, runtimes, and developer tooling. Those layers rarely move in lockstep.
Precision formats illustrate the challenge. The industry has moved from FP32 to FP16, BF16, INT8, and now FP8 variants. Incumbent architectures continue to evolve—Arm through SVE and SVE2, x86 through AVX-512 and AMX—adding vector and matrix capabilities.
But architectural definition is only the first step. Each new capability must propagate through toolchains, be validated across ecosystems, and ship in production silicon. Even when specifications advance quickly, ecosystem-wide availability unfolds over multiple product generations.
The same propagation dynamic applies to support for sparsity, custom memory-access primitives, and heterogeneous orchestration. When workloads shift annually—or faster—the friction lies both in defining new processor capabilities and in aligning the full stack around them.

Figure 1 AI imposes multi-axis stress on processor architectures.
Traditional ISA evolution cycles—often measured in years from specification to broad silicon availability—were acceptable when workloads evolved at similar timescales. But they are structurally misaligned with AI’s rate of change. The problem is that architectural models optimized for long-term stability are now being asked to track the fast-paced and relentless reinvention of workloads.
The core issue is not performance. It’s timing.
Differentiate first, standardize later
Historically, major processor architectures have standardized first and deployed later, assuming hardware abstractions can be fully understood before being locked in. AI reverses that sequence. Many of the most important lessons about precision trade-offs, data movement, and execution behavior emerge in the development phase, while the models are still evolving.
Meta’s MTIA accelerator (MTIA ISCA23/MTIA ISCA25) makes use of custom instructions within its RISC-V–based processors to support recommendation workloads. That disclosure reflects a broader reality in AI systems: workload-specific behaviors are often discovered during product development rather than anticipated years in advance.

Figure 2 MTIA 2i architecture comprises an 8×8 array of processing elements (PEs) connected via a custom network-on-chip.

Figure 3 Each PE comprises two RISC-V processor cores and their associated peripherals (on the left) and a set of fixed-function units specialized for specific computations or data movements (on the right).
The MTIA papers further describe a model–hardware co-design process in which architectural features, model characteristics, and system constraints evolved together through successive iterations. In such environments, the ability to introduce targeted architectural capabilities early—and refine them during development—becomes an engineering requirement rather than a roadmap preference.
In centrally governed compute architectures, extension priorities are necessarily coordinated across the commercial interests of the stewarding entity and its licensees. That coordination has ensured coherence, backward compatibility, and ecosystem stability across decades.
It also means the pace and priority of architectural change reflect considerations that extend beyond any single vendor’s system needs and accumulate costs associated with broader needs, legacy, and compatibility.
The question is whether a tightly coupled generational cadence—and a centrally coordinated roadmap—remains viable when architectural optimization across a vast array of use cases must occur within the product development cycle rather than between them.
RISC-V decouples differentiation from standardization. A small, stable base ISA provides software continuity. Modular extensions and customizations allow domain-specific capabilities within product cycles. This enables companies and teams to innovate and differentiate before requiring broad consensus.
In other words, RISC-V changes the economics of managing architectural risk. Differentiation at the architecture level can occur without destabilizing the broader software base, while long-term portability is preserved through eventual convergence.
Matrix-oriented capabilities illustrate this dynamic. Multiple vendors independently explored matrix acceleration techniques tailored to their specific requirements. Rather than fragmenting permanently, those approaches are informing convergence through RISC-V International’s Integrated Matrix Extensions (IME), Vector Matrix Extensions (VME), and Attached Matrix Extensions (AME) working groups.
The result is a path toward standardized matrix capabilities shaped by multiple deployment experiences rather than centralized generational events that need consensus ahead of time.
Standardization profiles such as RVA23 extend this approach, defining compatible collections of processor extensions while preserving flexibility beneath the surface.
In practical product terms, this structural difference shows up in development cadence. In many established architectural models, product teams anchor around a stable processor core generation and address new workload demands by attaching increasingly specialized accelerators.
Meaningful architectural evolution often aligns with major roadmap events, requiring coordinated changes across hardware resources, scheduling models, and software layers. By contrast, RISC-V’s base-and-extension model allows domain-specific capabilities to be introduced incrementally on top of a stable ISA foundation.
Extensions can be validated and supported in software without requiring a synchronized generational reset. The distinction is not about capability; it’s about where, when, and how innovation occurs in the product cycle.
From inference silicon to automotive
This difference becomes apparent in modern inference silicon.
Architectural requirements—tightly coupled memory hierarchies, custom data-movement patterns, mixed-precision execution, and accelerator-heavy fabrics—are often refined during silicon development.
Take the case of d-Matrix, which selected a RISC-V CPU for vector compute, orchestration, memory management, and workload distribution in its 3DIMC in-memory compute inference architecture. In architectures where data movement and orchestration dominate energy and latency budgets, the control plane must adapt alongside the accelerator. Architectural flexibility in the control layer reduces iteration friction during early product cycles.
The tension between architectural stability and workload evolution is especially visible in automotive.
ISO 26262 functional safety qualification can take years, and vehicle lifecycles span a decade or more. Yet advanced driver assistance systems (ADAS) depend on perception models that are continuously evolving with improved object detection, sensor fusion, and self-driving capabilities. As a result, the automotive industry faces a structural tension: freeze the architecture and risk falling behind or update continuously and requalify repeatedly.
A stable, safety-certified RISC-V foundation paired with controlled extensions offers one way to balance those forces—architectural continuity where validation demands it, and differentiation where workloads require it.
This approach has industry backing. Bosch, NXP, Qualcomm, Infineon, and STMicroelectronics have formed Quintauris specifically to standardize RISC-V profiles for automotive, targeting exactly this combination of long-term architectural stability with application-layer adaptability.
The fact that this represents hardware suppliers, microcontroller vendors, and system integrators simultaneously reflects how broadly the industry has recognized the problem and the approach.
A moment defined by engineering reality
RISC-V’s expanding role in AI is not a rejection of incumbent architectures, which continue to deliver performance and compatibility across a wide range of systems. It reflects a shift in engineering constraints highlighted by AI’s pace.
When workloads evolve faster than architectural generations, adaptability becomes an economic variable. The architecture that prevails is not necessarily the one that runs today’s models fastest. It’s the one that can adjust when those models change.
Legacy processor architectures provide broad stability across generations. RISC-V adds a structural advantage in adaptation velocity—the ability to accommodate differentiation within the product cycle, absorb lessons from deployment, and converge toward standardization—without forcing system architects to wait for generational events. It can adapt to tomorrow’s workloads and course-correct without breaking yesterday’s software.
Marc Evans is director of business development and marketing at Andes Technology USA, a founding premier member of RISC-V International. He is also the organizer of RISC-V Now! (www.riscv-now.com) to be held in Silicon Valley on April 20-21, 2026, a conference focused on the practical lessons of deploying RISC-V at commercial scale across AI, automotive, and data centers.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why power delivery is becoming the limiting factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
- Designing energy-efficient AI chips: Why power must be an early design consideration
- Edge AI in a DRAM shortage: Doing more with less
- AI in 2026: Enabling smarter, more responsive systems at the edge
Silly simple supply sequencing

Frequent contributor R. Jayapal recently shared an interesting Design Idea (DI) for power supply control and sequencing in MCU-based applications that combine analog and digital circuitry: “Short push, long push for sequential operation of multiple power supplies.”
The application becomes challenging when there’s a requirement to have the digital side powered up and stable for a programmable interval (typically approximately a second or two) before the analog comes online.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Since Jayapal had already published a fine digital solution to the problem, I’ve taken the liberty of attempting an (almost painfully) simple analog version using an SPDT switch for ON/OFF control, RC time constants, and Schmitt triggers for sequencing. Figure 1 shows how it works.
Figure 1 Simple analog supply sequencing accomplished using an SPDT switch for ON/OFF control, RC time constants, and Schmitt triggers for sequencing.
Switching action begins with S1 in the OFF position and both C1 and C2 timing caps discharged. This holds U1 pin 1 at 15 V and pin 3 at 0 V. The latter holds enhancement-mode PFET Q1’s gate at 15 V, so both the transistor and the 15-Vout rail are OFF. Meanwhile, the former holds NFET Q2’s gate at zero and therefore Q2 and the 5-Vout rail are likewise OFF. No power flows to the connected loads.
Figure 2 shows what happens when S1 is flipped to ON.

Figure 2 Power sequence timing when S1 is flipped to ON, connecting C2 near ground through R3.
Moving S1 from OFF to ON connects C2 near ground through R3, charging it past the Schmitt trigger’s low-going threshold in about R3C2 = 1 ms. This drives U1 pin 2 to 15 V, placing a net forward bias of 10 V on NFET Q2 and turning on Q2, the 5-Vout rail, and the connected loads. They remain ON as long as S1 stays ON.
Meanwhile, back at the ranch, the reset of C1 has been released, allowing it to begin charging through R1. Nothing much else happens until it reaches U1’s ~10-V threshold, which requires roughly T1 = ln(3)R1C1 = 2.2 seconds for the component values shown. Of course, almost any desired interval can be chosen with different values. When R1C1 times out, U1 pin 4 snaps low, PFET Q1 turns ON, and 15-Vout goes live. Turn-ON sequencing is therefore complete.
The right side of Figure 2 shows what happens when S1 is flipped to OFF.
Firstly, C1 is promptly discharged through R3, turning off Q1 and 15-Vout, putting it and whatever it powers to sleep. Then C2 begins ramping from near zero to 15 V, taking T2 = ln(3)R2C2 = 2.2 seconds to get to U1’s threshold. When it completes the trip, pin 2 goes low, turning Q2 and 5-Vout OFF. Turn OFF sequencing is therefore complete.
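For readers who want to sanity-check the ln(3) factor, here’s a minimal numeric sketch. The 15-V rail and ~10-V threshold come from the text above; the R1 and C1 values are hypothetical picks giving R1C1 = 2 s, since the actual Figure 1 values aren’t listed here.

```python
import math

# RC charging from 0 V toward V_supply: v(t) = V_supply * (1 - exp(-t / (R * C))).
# Solving v(T) = V_th for T gives T = R * C * ln(V_supply / (V_supply - V_th)).
V_SUPPLY = 15.0   # V, input rail (from the article)
V_TH = 10.0       # V, U1's ~10-V Schmitt threshold (from the article)
factor = math.log(V_SUPPLY / (V_SUPPLY - V_TH))  # = ln(3) ~ 1.0986 for 10 V of 15 V

R1, C1 = 2.0e6, 1.0e-6  # hypothetical values chosen so R1*C1 = 2 s
print(f"T1 = {factor * R1 * C1:.2f} s")  # -> 2.20 s, matching ln(3)*R1*C1 above
```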
Remaining details of the design include the two 1N4148 diodes, whose purpose is to make the sequencer’s response to losing and regaining the input rail voltage orderly, regardless of whether S1 is ON or OFF when that happens. The MOSFETs should be chosen for adequate current-handling capacity, but since Q1 gets 15 V of gate-source drive and Q2 gets 10 V, neither needs to be a sensitive logic-level device.
Figure 3 shows some alternative implementation possibilities for U1’s triggers, in case using a hex device with four sections unused seems inconvenient or wasteful.

Figure 3 Alternative Schmitt trigger possibilities.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Short push, long push for sequential operation of multiple power supplies
- A step-by-step guide to power supply sequencing and supervision, Part 2
- Power-supply sequencing for low-voltage processors
- Trends in power supply sequencing
AI in 2026: Enabling smarter, more responsive systems at the edge

As artificial intelligence (AI) continues its momentum across the electronics ecosystem, 2026 is shaping up to be a defining year for edge AI. After years of rapid advancement in cloud‑centric AI training and inference, the industry is reaching a tipping point. High‑performance intelligence is increasingly migrating to the edge of networks and into systems that must operate under stringent constraints on latency, power, connectivity, and cost.
This shift is not incremental. It reflects a broader architectural evolution in how engineers design distributed intelligence into next‑generation products, systems, and infrastructure.
Consider an application such as detecting dangerous arc faults in high‑energy electrical switches, particularly in indoor circuit breakers used in residential, commercial, or industrial environments. The challenge in detecting potential arc faults quickly enough to trip a breaker and prevent a fire hazard is that traditional threshold‑based criteria often generate an impractically high number of false positives, especially in electrically noisy environments.
An AI‑based trigger‑detection approach can significantly reduce false positives while maintaining a low rate of false negatives, delivering a more practical and effective safety system that ultimately saves lives.
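To make that concrete, here’s a purely illustrative sketch—not any vendor’s actual detector—that synthesizes normal and arcing current windows, extracts two simple features, and trains a tiny logistic-regression trigger, then reports its (in-sample) false-positive and false-negative counts. All waveform parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
FS, N = 10_000, 512  # sample rate (Hz) and window length; illustrative values

def current_window(arcing: bool) -> np.ndarray:
    """One window of simulated 50-Hz load current; arcing adds sporadic wideband bursts."""
    t = np.arange(N) / FS
    i = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(N)  # noisy mains current
    if arcing:
        i += 0.3 * rng.standard_normal(N) * (rng.random(N) < 0.1)   # intermittent HF bursts
    return i

def features(i: np.ndarray) -> np.ndarray:
    spec = np.abs(np.fft.rfft(i))
    hf_share = spec[len(spec) // 4:].sum() / spec.sum()  # high-frequency energy share
    return np.array([i.std(), hf_share])

X = np.array([features(current_window(a)) for a in [False] * 200 + [True] * 200])
y = np.array([0] * 200 + [1] * 200)

# Tiny logistic-regression "trigger," trained by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float((p - y).mean())

pred = (X @ w + b) > 0.0
print("false positives:", int(((pred == 1) & (y == 0)).sum()), "of 200 normal windows")
print("false negatives:", int(((pred == 0) & (y == 1)).sum()), "of 200 arcing windows")
```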
What edge AI means for design
Edge AI refers to artificial intelligence processing performed on or near the physical hardware that collects and acts on data, rather than relying solely on remote cloud data centers. By embedding inference closer to where data originates, designers unlock real‑time responsiveness, tighter privacy controls, and reduced dependence on continuous network connectivity.
These capabilities allow systems to make decisions in milliseconds rather than seconds, a requirement across many industrial and embedded domains.

Figure 1 Smart factory environments demand immediate pattern recognition and decision‑making.
From factory automation to safety‑critical monitoring, the need for immediate pattern recognition and decision‑making has become a core design constraint. Systems must be engineered with local intelligence that is context‑aware and resilient, maintaining performance even when cloud connectivity is intermittent or unavailable.
Engineering drivers behind the edge shift
Design engineers are responding to several overlapping trends.
- Latency and determinism
Latency remains a fundamental limiter in real‑time systems. When AI models execute at the edge instead of in the cloud, network round‑trip delays are eliminated. For applications such as command recognition, real‑time anomaly detection, and precision control loops, deterministic timing is no longer optional—it is a design requirement.

Figure 2 Latency issues are driving many industrial applications toward edge AI adoption.
In the arc‑fault detection example described earlier, both latency and determinism are clearly essential in a safety‑oriented system. However, similar constraints apply to other domains. Consider an audio‑based human‑machine interface for an assistive robot or a gesture‑based interface at an airport kiosk. If system response is delayed or inconsistent, the user experience quickly degrades. In such cases, local, on‑device inference is critical to product success.
- Power and energy constraints
Embedded platforms frequently operate under strict power and energy constraints. Delivering AI inference within a fixed energy envelope requires careful balancing of compute throughput, algorithm efficiency, and hardware selection. Engineering decisions must support sustained, efficient operation while staying within the electrical and packaging limits common in distributed systems.
- Data privacy and security
Processing AI locally reduces the volume of sensitive information transmitted across networks, addressing significant privacy and security concerns. For systems collecting personal, operational, or safety‑critical data, on‑device inference enables designers to minimize external exposure while still delivering actionable insights.
For example, an occupancy sensor capable of detecting and counting the number of people in hotel rooms, conference spaces, or restaurants could enable valuable operational analytics. However, even the possibility of compromised personal privacy could make such a solution unacceptable. A contained, on‑device system becomes essential to making the application viable.
- Resource efficiency and scalability
In deployments involving thousands or millions of endpoints, the cumulative cost of transmitting raw data to the cloud and performing centralized inference can be substantial. Edge AI mitigates this burden by filtering, transforming, and acting on data locally, transmitting only essential summaries or alerts to centralized systems.
Edge AI applications driving design innovation
Across industries, edge AI is moving beyond pilot programs into full production deployments that are reshaping traditional design workflows.
Industrial systems
Predictive maintenance and anomaly detection now occur directly at the machine, reducing unplanned downtime and enabling real‑time operational adjustments without dependence on remote analytics.

Figure 3 Edge AI facilitates predictive maintenance directly at the machine.
Automotive and transportation
In‑vehicle occupancy sensing is emerging as a critical edge AI application. Systems capable of detecting the presence of passengers—including children left in rear seats—must operate reliably and in real time without dependence on cloud connectivity.
On‑device AI enables continuous monitoring using vision, radar, or acoustic data while preserving privacy and ensuring deterministic system response. These designs prioritize safety, low power consumption, and secure local processing within the vehicle’s embedded architecture.
Consumer and IoT devices
Smart devices that interpret voice, gesture, and environmental context locally deliver seamless user experiences while preserving battery life and privacy.
Infrastructure and energy
Distributed assets in energy grids, utilities, and smart cities leverage local AI to balance loads, detect dangerous arc faults, and optimize performance without saturating communication networks. A common theme emerges across these sectors: the more immediate the required intelligence, the closer the AI must reside to the data source.
Design considerations for 2026 and beyond
Embedding intelligence at the edge introduces new complexities. Beyond system design and deployment, AI development requires structured data collection and model training as both an initial and ongoing effort. Gathering sufficiently diverse and representative data for effective model training demands careful planning and iterative refinement—processes that differ from traditional embedded development workflows.
However, once structured data collection becomes part of the engineering lifecycle, many organizations find that it leads to more practical, cost‑effective, and impactful solutions.
Beyond data strategy, engineers must address tight memory footprints, heterogeneous compute architectures, and evolving toolchains that bridge model training with efficient, deployable inference implementations. A holistic approach requires profiling real‑world operating conditions, validating model behavior under constraint, and integrating AI workflows with existing embedded software and hardware stacks.
In this context, the selection of compute architecture and development ecosystem becomes critical. Platforms offering a broad performance range, robust security mechanisms, and long product lifecycles enable designers to balance immediate requirements with long‑term roadmap considerations. Integrated development flows that support optimization, profiling, and debugging across the edge continuum further accelerate time to market.
Edge AI in 2026 is not simply a buzz phrase—it’s a strategic design imperative for systems that must act quickly, operate reliably under constraint, and deliver differentiated performance without overburdening networks or centralized infrastructure.
By bringing intelligence closer to where data is generated, engineers are redefining what distributed systems can achieve and establishing a new baseline for responsive, efficient, and secure operation across industries.
Nilam Ruparelia is associate director of Microchip’s Edge AI business unit.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why power delivery is becoming the limiting factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
- Designing energy-efficient AI chips: Why power must be an early design consideration
- Edge AI in a DRAM shortage: Doing more with less
Passive RC circuit produces gain

Could a simple passive RC network without any transformers, inductors, switches, or non-linear components produce a voltage gain?
Wow the engineering world with your unique design: Design Ideas Submission Guide
Well, it’s not “free energy,” but yes, it can. It can even use the same value resistor and capacitor in the ladder network shown in Figure 1, although differing values can also be utilized.
Figure 1 Passive RC circuit using resistors and capacitors of the same value in a ladder network.
Of course, this isn’t of much use other than a curiosity, but it’s a fun circuit to build and play around with!
I built one with seven sections (Figure 2) using an R of 10 kΩ and a C of 0.1 µF, then plotted the results.

Figure 2 A seven-section RC ladder network with an R of 10 kΩ and C of 0.1 µF.
The Bode plot can be seen in Figure 3. As you can see, the response initially behaves as a low-pass filter with gain around 0 dBv, then slowly rises to a peak of 1.07 dBv at 1 kHz before falling off.

Figure 3 Bode plot of the passive RC circuit showing low-pass filter behavior until a slow rise to a peak of 1.07 dBv at 1 kHz.
This agrees well with the simulation shown in Figure 4.

Figure 4 Circuit simulation of a passive RC circuit that closely matches the Bode plot shown in Figure 3.
If you swap the resistors and capacitors, the circuit behaves like a high-pass filter and produces a higher gain of 1.13 dBv at 26 Hz, as shown in Figure 5, and a simulation in Figure 6.

Figure 5 Bode plot of the passive RC circuit showing high-pass filter behavior.

Figure 6 Circuit simulation of a passive RC circuit that closely matches the Bode plot shown in Figure 5.
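Since Figure 1’s exact topology isn’t reproduced in this text, here is an assumption-laden sketch of one way a purely passive RC network can exhibit gain above 0 dB: solve a seven-section series-R/shunt-C ladder by nodal analysis and take the output between two nodes whose phases differ enough, searching all node pairs for the largest differential gain. It illustrates the principle, not necessarily Wyatt’s circuit.

```python
import numpy as np

R, C, N = 10e3, 0.1e-6, 7           # values from the seven-section build above
freqs = np.logspace(1, 5, 400)      # 10 Hz .. 100 kHz
g = 1.0 / R

best_gain, best_f, best_pair = 0.0, None, None
for f in freqs:
    sC = 2j * np.pi * f * C
    # Nodal admittance matrix for N series-R / shunt-C sections, driven by 1 V
    A = np.zeros((N, N), dtype=complex)
    b = np.zeros(N, dtype=complex)
    for k in range(N):
        A[k, k] = sC + g + (g if k < N - 1 else 0.0)
        if k > 0:
            A[k, k - 1] = -g
        if k < N - 1:
            A[k, k + 1] = -g
    b[0] = g                        # the source drives node 1 through the first R
    H = np.concatenate(([1.0], np.linalg.solve(A, b)))  # node 0 is the source itself
    for i in range(N + 1):
        for j in range(i + 1, N + 1):
            gain = abs(H[i] - H[j])  # "output" taken between a pair of nodes
            if gain > best_gain:
                best_gain, best_f, best_pair = gain, f, (i, j)

print(f"max gain {20 * np.log10(best_gain):.2f} dB at {best_f:.0f} Hz "
      f"between nodes {best_pair}")
```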
As someone has noted, this technique can be employed with an emitter follower, whose voltage gain is less than unity, to create an oscillator. However, that’s for another upcoming Design Idea (DI), which will also include a note on how a single unbiased JFET can produce a +dBv voltage gain!
Anyway, hopefully some folks find this interesting and have some fun!
Michael A Wyatt is a life member of the IEEE and has continued to enjoy electronics ever since his childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat before retiring (semi) with Wyatt Labs. During his career, he accumulated 32 US patents and published several EDN articles, including the Best Idea of the Year in 1989.
Related Content
- RC networks
- SPICE Course, Part 2: Time Constant Simulation
- Designing RC active filters with standard-component values
- A simple software lowpass filter suits embedded-system applications
Build a practical 400 mA linear Li-ion charger with visible CC-CV behavior

Single-cell lithium-ion (Li-ion) chargers are widely used, yet many practical designs rely on highly integrated ICs that conceal their internal operation. The type of Li-ion charger outlined in this design is, somewhat surprisingly, not readily found in a general survey of internet and YouTube resources.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The present circuit is practical, deployable, and firmly grounded in established circuit theory, and may offer a complementary perspective to prevailing practice, particularly for designers who value analytical transparency and first-principles reasoning. It operates from a 5 V supply of the sort found in common 5-V/1-A smartphone chargers, delivers 400 mA of constant current (CC), then transitions to 4.217 V constant-voltage (CV) regulation. It has been built and tested using half of an LM324 quad operational amplifier.
The circuit, shown in Figure 1, performs reliably and is well-suited for bench chargers, embedded products, and instructional laboratories. This design emphasizes simplicity, component availability, and safe charging behavior while remaining easy to analyze and adapt.

Figure 1 Schematic of the dual-loop linear Li-ion charger. The schematic shows the key nodes that are referenced and plotted in the LTspice simulation. In practice, a 1N4007 diode for D2 also worked well.
The charger uses two independent control loops acting on a PNP pass transistor. An inner loop regulates charge current, while an outer loop regulates battery voltage. The voltage loop output also provides a convenient indicator of charging status.
CC loop operation
A 1.25 V reference is divided using 115 kΩ and 10 kΩ resistors to produce a 0.10 V current reference. An LM324 section compares this 0.10 V reference to the drop across a 0.25 Ω sense resistor in series with the battery return. The op amp drives an NPN transistor, which sinks base current from the PNP pass device until the sense voltage equals the reference.
The resulting charge current is 0.4 A. This current regulation is independent of battery voltage, ensuring safe charging even from deeply discharged cells.
CV loop operation
A second LM324 section monitors battery voltage through a 47.5 kΩ and 20 kΩ divider. When the divided voltage reaches 1.25 V, corresponding to 4.217 V at the battery terminals, the op amp reduces drive to the pass transistor, transitioning the charger into CV mode.
The voltage loop is intentionally compensated to be slower than the current loop, ensuring a smooth handover without oscillation or overshoot. As in my case, if a commercial 1.25-V reference, e.g., TLV431, is not available, a 2k/2k potential divider connected to the common LM431-2.5-V voltage reference works reasonably well. However, since it is an integral part of both control loops, extra care should be taken to stabilize the loops to prevent oscillations.
Loop stability
The CC-to-CV crossover can cause some ringing, as shown in the measurement of U2OUT in the DSO capture in Figure 2. This appears as both LEDs being dimly lit, indicating rapid oscillation. There are two possible remedies. The first is to damp the voltage loop by including a small capacitance of 33 pF to 500 pF in parallel with Rtop. The second is to damp the current loop by adding a small RC time constant from the emitter to the collector of Q2, the pass-transistor driver. In LTspice, you can probe phase margin by injecting a small AC source at the summing point or by sweeping parameters and observing step responses.

Figure 2 Circuit construction and measurements. Inset (a) close-up of the breadboarded circuit showing the status indication LEDs. The photograph was taken when the cell voltage was 4.09 V, which is the threshold of the CC-CV crossover (see text). (b) shows the oscillation at the node U2OUT, which drives the LEDs and forms the pass transistor pre-driver signal. The image was captured on a Tektronix TDS2024C DSO.
Charge status indication
The output of the voltage-regulating amplifier doubles as a logic-level indicator of charging state. When the battery voltage is below the regulation threshold, the output drives a red LED indicating active charging. As the battery approaches full charge and current tapers, the output level changes and illuminates a green LED. This approach eliminates the need for an additional comparator while providing clear, real-time visual feedback.
Thermal and practical considerations
With a deeply discharged cell at approximately 2.2 V, the PNP pass transistor must dissipate roughly 1.1 W at 400 mA. Off-the-shelf, low-saturation-voltage transistors such as the 2SB772 will work comfortably without a heat sink. In the constructed prototype, a modest copper area was sufficient for thermal management. Although the built version uses a 5-W sense resistor, it dissipates only 40 mW, giving even a 0.25-W-rated component more than adequate margin. All active components operate within their safe operating area when supplied from a regulated 5 V source.
Experimental verification
The charger was assembled on a prototype board and tested with a single 18650 Li-ion cell. Startup into CC mode was immediate, followed by a smooth transition to CV operation at approximately 4.22 V. Charge current tapered naturally as expected.
Supplementary files:
- A video of the circuit in operation is shown here: https://youtube.com/shorts/oSzR4XQViFs
- LTspice simulation (.asc) file: Li-ion-ocaya-LTSpice schematic.asc
The LTspice simulation models the Li-ion cell as an ideal capacitor C in series with a small ESR = 80 mΩ, charged from a constant-current I = 0.4 A source; the terminal voltage is
V_term(t) = V_C(t) + I × ESR
where the capacitive core obeys
dV_C/dt = I/C, i.e., V_C(t) = V_C(0) + (1/C) ∫ I dt
Over a finite interval ΔV and time Δt, the approximation C = I Δt/ΔV can be made, assuming that the current is reasonably constant. (The current falls progressively in reality, imparting a non-linear character to the cell-voltage transient.) With the cell rising from 2.2 V to 4.217 V, the ESR contributes a small, essentially instantaneous step of I × ESR = 0.4 × 0.08 = 0.032 V (32 mV), after which the slope is set by I/C. Thus, if the observed CC interval Δt for the ΔV ≈ (4.217 − 2.2) = 2.017 V rise is about 5250 s (≈ 1.46 h), then C ≈ (0.4 × 5250) / 2.017 ≈ 1040 F.
This is a first-order capacitor-plus-ESR approximation, with the caveat that real Li-ion cells have voltage–state-of-charge (SoC) and temperature dependencies that make C a state-dependent quantity rather than a fixed constant.
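As a numeric sanity check, here’s a minimal sketch of that same capacitor-plus-ESR model. The CC/CV values come from the text above; clamping the terminal at 4.217 V during CV is an assumption about the source’s behavior, and the resulting taper (τ = ESR × C ≈ 83 s) is far faster than a real cell’s, per the caveat above.

```python
# First-order stand-in for the Li-ion cell: ideal capacitor C in series with ESR.
I_CC, ESR = 0.4, 0.080   # A, ohm (values from the article)
V0, V_CV = 2.2, 4.217    # V: deeply discharged start, CV setpoint
DT_CC = 5250.0           # s, observed CC interval

C = I_CC * DT_CC / (V_CV - V0)      # back-calculated capacitance, ~1040 F
print(f"C = {C:.0f} F")

dt, t, v_core, i = 1.0, 0.0, V0, I_CC
while v_core + i * ESR < V_CV:      # CC phase: terminal voltage below CV threshold
    v_core += i * dt / C
    t += dt
t_cc = t
while i > 0.02 * I_CC:              # CV phase: terminal clamped at V_CV (assumed)
    i = (V_CV - v_core) / ESR       # current set by the remaining drop across ESR
    v_core += i * dt / C
    t += dt
print(f"CC phase ends at {t_cc / 3600:.2f} h; taper to 2% of I_CC by {t / 3600:.2f} h")
```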
Figure 3 plots the LTspice simulation values of the nodes and branches named in Figure 1. In Figure 3, the battery was assumed to be deeply discharged, with an initial cell voltage of 2.2 V.

Figure 3 LTspice simulation of the key nodes and currents in the circuit. The measurements on the actual circuit closely match these plots.
The simulation shows the charger transitioning from CC to CV charging approximately 1 hr 21 min after the onset of charging. The charging current tapers off thereafter and drops to zero. The circuit current drops to 19 mA both without the battery connected and once charging is complete. The measured voltage across the Li-ion cell was 4.21 V, with only the green LED fully on and no flickering on either LED, as shown in Figure 4.

Figure 4 Photographs showing the current drawn by the circuit from a 5-V bench supply, and a multimeter showing the Li-ion battery at 4.21 V when charging is completed.
Compliant Li-ion charging
This design demonstrates that a fully compliant Li-ion charging profile can be achieved using readily available analog components. It is suitable for real-world use while remaining accessible to analysis and modification. The circuit offers a practical alternative for engineers who require simplicity, transparency, and predictability in low-power Li-ion charging applications.
P.S.: Like many enthusiasts around the world, the designer lives in a region where access to electronics stores and new components is limited. The motivation for this circuit was robustness and realizability using parts salvaged from discarded equipment.
Professor Ocaya specializes in electronics and solid-state physics, which he teaches at the Qwaqwa Campus of the UFS. He is active in computing, mathematical methods, new techniques for device characterization, material science, and microcontroller-based instrument design. He holds a C3 rating from the National Research Foundation (NRF) of South Africa.
Related Content
- Circuit’s RMS output is linearly proportional to temperature over wide range
- A battery charger that does even more
- Building a Battery Charger with the CC/CV Method
- Lead-acid battery charger
Six critical trends reshaping 3D IC design in 2026

AI compute is scaling at ~1.35× per year, nearly twice the pace of transistor scaling. Thus, the semiconductor industry has reached a hard inflection point: if we can’t scale down, we must scale up. Increasingly, engineering teams are turning to 3D ICs to keep pace with the ascent of next-gen AI scaling.
However, designing in three dimensions also exacerbates system complexity, leaving IC and package designers with a pressing question: how do you explore millions of design considerations and still optimize and validate system performance within schedule constraints?
This article examines six trends that will help design teams overcome this challenge and help them reshape the future of 3D IC design in 2026.
Trend 1: STCO becomes crucial for multi-chiplet integration at AI scales
Advanced packages already exceed tens of millions of pins, with trajectories pointing toward hundreds of millions. At this scale, no design team can fully comprehend the system through traditional spreadsheets or point tools. Design complexity has fundamentally shifted to system-level orchestration.
This is where system-technology co-optimization (STCO) becomes critical by incorporating packaging architectures, die-to-die interconnects, power delivery networks, thermal paths, and mechanical reliability into a unified optimization loop.

Figure 1 STCO unifies packaging architectures, die-to-die interconnects, power delivery networks, thermal paths, and mechanical reliability into a single optimization loop. Source: Siemens EDA
A core benefit is the industry’s long-awaited “shift-left” for 3D ICs: Predictive multiphysics modeling allows teams to assess performance, power, thermal headroom, and mechanical stress concurrently and address architectural risks.
To enable true STCO, EDA toolchains must evolve from siloed analysis into integrated system platforms that create a unified 3D digital twin with shared data models, giving all stakeholders a persistent, system-level view and ensuring cross-domain optimization from a single, consistent dataset.
As chiplet-based architectures scale, STCO will become a foundational requirement for achieving performance, yield, and reliability targets in next-generation AI and high-performance computing systems.
Trend 2: Co-packaged optics reshape AI system architectures
As AI clusters push beyond 100 Tb/s per node, the gap between what silicon can generate and what traditional copper interconnects can deliver is widening fast. Even with SerDes continuing to scale, copper links are approaching fundamental limits in bandwidth density and energy efficiency, turning interconnect power into a major system bottleneck.
With global AI data center power demand projected to rise 50% by 2027, efficiency gains have become non-negotiable. This pressure is accelerating momentum behind co-packaged optics (CPO). By placing optical engines directly adjacent to switch ASICs, accelerators, and chiplets, CPO collapses electrical trace lengths from inches to millimeters, dramatically reducing signal loss while improving bandwidth density, latency, and power efficiency.

Figure 2 CPO reduces electrical trace lengths from inches to millimeters to significantly lower signal loss. Source: Siemens EDA
Nvidia reports that moving from pluggable transceivers to CPO in 1.6T networks can reduce link power from roughly 30 W to 9 W per port. Industry forecasts project over 10 million 3.2T CPO ports by 2029, signaling a shift from early pilots to volume deployment. However, this transition introduces new design challenges.
Photonic ICs are highly temperature-sensitive, while 3D CPO integration adds hybrid bonding interfaces, die thinning, and vertical heat flow that create complex thermo-mechanical interactions. Thermal gradients can induce wavelength drift, alignment errors, and long-term reliability risks—making thermal-optical co-design and multiphysics analysis essential for production-scale CPO deployment.
Trend 3: Advanced packaging innovations drive integration scale-out
New power delivery architectures and vertical integration schemes continue to emerge. As thermocompression bonding reaches its integration limits, hybrid bonding will drive 3D interconnect pitches to 1 µm and below. Additionally, AI and high-performance computing (HPC) suppliers are considering wafer- and panel-level architectures to place more compute closer together, and foundries are pursuing more modular wafer-scale strategies.
Material innovation is also reshaping system integration. Glass substrates are gaining traction for large-area packaging and high-frequency AI and 6G applications, supporting more reliable signaling at higher data rates while reducing package warpage by nearly 50% in large substrates.
To adapt to this pace of change, an open and scalable workflow is critical to aligning new application requirements with manufacturability, yield, and cost. So, EDA tools must support rapid design-space exploration, early multiphysics modeling, and AI-assisted optimization to navigate the exponentially expanding solution space.
Trend 4: Novel thermal solutions rise to meet AI power density challenges
Power densities in leading-edge 3D ICs have already been compared to those at the surface of the sun. With multiple chiplets stacked in extreme proximity, 3D IC power densities create intense localized hotspots and trap heat in tiers far from the heat sink. This vertical thermal confinement is pushing conventional top-down air and cold-plate cooling approaches beyond their practical limits.
To address this challenge, microfluidic cooling architectures are being heavily researched and gaining early pilot traction. By etching micron-scale channels directly into silicon dies or interposers, engineers can route coolant within tens of micrometers of active transistors, enabling localized heat extraction and significantly shortening thermal conduction paths.
At the package interface, thermal interface materials (TIM) remain one of the dominant thermal bottlenecks. TIM1—located between the die and heat spreader—is particularly critical due to its proximity to active silicon. An effective TIM must minimize thermal resistance while maintaining mechanical compliance under thermal cycling and package-induced stress.
Among near-term solutions, indium foils have emerged as leading candidates for high-performance TIM1 applications. Researchers are also exploring advanced alternatives, including phase-change materials, graphene and carbon nanotube composites, silver-filled thermal gels, and liquid metals. Some experimental approaches aim to reduce or bypass conventional TIM layers altogether by integrating cooling structures directly onto the die surface.
Ultimately, ensuring thermal, power, and mechanical reliability is an inherently interdisciplinary challenge—one that no single innovation in chip architecture, materials, or cooling design can solve in isolation. By unifying multiphysics analysis, thermal-driven floorplanning, and system-aware design within a single digital thread, Siemens Innovator3D IC and Calibre 3DThermal enable engineers to establish reliability early in the design process, evaluate trade-offs earlier, and converge faster on manufacturable, high-performance 3D IC designs.

Figure 3 Thermal solutions for 3D ICs allow engineers to evaluate trade-offs early in the design process. Source: Siemens EDA
Trend 5: AI accelerates 3D IC designs for AI
The semiconductor industry needs more than one million additional skilled workers by 2030. There simply aren’t enough domain experts to balance signal integrity, power integrity, thermal effects, and mechanical stress across complex 3D ICs.
AI offers a practical path to scale scarce engineering expertise and close the productivity gap. One high-impact application is AI-driven design-space exploration. Modern 3D IC architectures involve thousands to millions of tightly coupled variables, spanning die partitioning, material stacks, floorplanning, interconnect topology, and power delivery design.
Machine learning and reinforcement learning techniques accelerate exploration by rapidly predicting outcomes, learning from prior iterations, and uncovering non-obvious trade-offs that deliver measurable performance, power, and reliability gains.
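As a toy illustration of the surrogate-assisted flavor of this idea, the sketch below fits a cheap quadratic model to a handful of “expensive” evaluations (standing in for multiphysics runs) and then screens thousands of candidate designs. The cost model and parameter ranges are invented for illustration, not drawn from any real flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive_eval(x: np.ndarray) -> float:
    """Stand-in for a slow multiphysics run: toy cost over (TSV pitch um, tiers, Vdd)."""
    pitch, tiers, vdd = x
    thermal = tiers**2 / pitch                 # hotter with more tiers, tighter pitch
    perf = tiers * vdd / (1.0 + pitch / 50.0)  # toy performance term
    return thermal - perf                      # lower is better

def sample(n: int) -> np.ndarray:
    return np.column_stack([rng.uniform(5, 50, n),      # TSV pitch (um)
                            rng.integers(2, 9, n),      # stacked tiers
                            rng.uniform(0.6, 1.0, n)])  # Vdd (V)

def phi(X: np.ndarray) -> np.ndarray:
    """Quadratic feature map for the ridge-regression surrogate."""
    return np.column_stack([np.ones(len(X)), X, X**2])

X = sample(30)                                # 30 "expensive" runs seed the surrogate
y = np.array([expensive_eval(x) for x in X])
w = np.linalg.solve(phi(X).T @ phi(X) + 1e-3 * np.eye(7), phi(X).T @ y)

cand = sample(5000)                           # the cheap surrogate screens 5000 designs
best = cand[np.argmin(phi(cand) @ w)]
print("surrogate pick:", np.round(best, 2), "true cost:", round(expensive_eval(best), 2))
```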
Another critical application is automated power-thermal co-analysis. In 3D ICs, power dissipation directly raises temperature, while temperature feeds back into leakage and dynamic power behavior. Agentic AI and ML techniques improve both accuracy and turnaround time by automating complex modeling steps.
Predictive characterization can infer cell behavior at new temperature corners, while intelligent leakage modeling extracts temperature-dependent behavior directly from data, reducing manual calibration effort and improving model fidelity.
Over the past several years, Siemens EDA has embedded industrial-grade AI directly into 3D IC design flows, from verification and multiphysics analysis to design exploration, guided by five foundational principles:
- Accuracy: Conforming to strict physical laws
- Verifiability: Transparent decision-making
- Robustness: Consistent performance with new data
- Generalizability: Applying insights across new problems
- Usability: Seamless integration with existing CAD/CAE tools
Trend 6: Integrated multiphysics workflow sets new standards for 3D IC system performance
Thermal, mechanical, and electrical effects are no longer secondary concerns that can be checked after layout. A chiplet may meet specifications in isolation yet suffer degraded reliability when exposed to the actual thermal gradients, stress fields, power-delivery impedance, and IR-drop profiles inside a 3D stack.
This reality is driving a clear shift left in multiphysics analysis. These effects must be considered as part of early architecture decisions, chiplet partitioning, RTL modeling, and floorplanning—when the most impactful trade-offs are still on the table.
To make this practical, the industry needs standardized “multiphysics Liberty files” that capture temperature- and stress-dependent behavior of chiplet blocks. With this information available upfront, designers can verify whether a chiplet will remain within safe operating limits under realistic thermal and mechanical conditions.
Just as important, multiphysics evaluation cannot be a one-time checkpoint. 3D IC design is highly iterative, and every change—to layout, interfaces, materials, or stack configuration—can subtly reshape thermal paths, stress distributions, and electrical parasitics. Without continuous re-validation, risk accumulates quietly until it shows up as yield loss or reliability failures.
Integrated multiphysics platforms help teams stay ahead of this complexity by anchoring analysis to a shared, authoritative representation of the full 3D assembly. Working from a single source of truth allows teams to iterate confidently, uncover risks earlier, and validate decisions consistently across the entire stack.
The tools of the trade
Success in this new era requires more than a collection of isolated point tools. Design teams need a unified, end-to-end flow that brings together architecture exploration, multiphysics analysis, and cross-domain optimization in a single platform.
3D IC tools deliver exactly this integrated approach, tearing down the traditional walls between IC design, advanced packaging, and system-level validation. By giving design teams a shared source of truth and enabling them to tackle critical challenges earlier in the design cycle, these tools help engineers close on designs faster, explore more ambitious architectures, and ultimately build the silicon that will power the next generation of AI systems.
Kevin Rinebold is technology manager for 3D IC and heterogeneous packaging solutions at Siemens EDA. He has 34 years of experience in defining, developing, and supporting advanced packaging and system planning solutions for the semiconductor and systems markets. Prior to joining Siemens EDA, Kevin was product manager for IC packaging and co-design products at Cadence.
Related Content
- Putting 3D IC to work for you
- Making your architecture ready for 3D IC
- The multiphysics challenges of 3D IC designs
- Mastering multi-physics effects in 3D IC design
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
Perusing a LED-based gel nail polish UV lamp

This engineer doesn’t use nail polish, but his wife does. And he deals with plenty of PCBs. What do these things have in common? Read on.
Speaking of LEDs that lose their original intensity over time and use…
In the fall of 2020, after accepting that due to the COVID pandemic she wasn’t going to be getting back inside a nail salon any time soon, my wife invested in a UV lamp so she could do her own gel polish-based nails at home. While the terminology I just used in the prior sentence, not to mention the writeup title that preceded it, might be “old news” to at least some of you, others (like me, at first) might be confused. Here goes:
Gel details
First off, what is gel nail polish, both in an absolute sense and relative to its traditional counterpart? Here’s manufacturer OPI’s take:
A gel manicure is a coat of colored gel that looks deceptively similar to nail polish. It’s a thin brush-on formula, designed for high performance and a glossier finish than regular nail polish…An OPI GelColor manicure [also] lasts for up to 3 weeks…The primary difference between gel nails and a regular manicure is curing. Between each coat, you cure the color and set the gel nail polish by putting your nails under a special light.
That “special light” is a UV lamp. Initially, they were constructed using fluorescent tubes. But nowadays, mirroring the broader trend, they increasingly use LEDs instead. The one my wife first bought is Bolasen’s SunX Plus, a “Professional True 80W Salon Grade LED Nail Dryer for Gel Polish.” The link to it on Amazon’s main site now auto-forwards to a more recent battery-operated model (this one’s AC-powered), but I found a still-live product page copy on Amazon’s South Africa site (believe it or not). Here’s the associated stock artwork:

I’m not sure I want to know what the “no black hands” phrase references…




The black base shown in the stock images is missing in action; my wife found that forgoing the bottom plate expanded the lamp’s extremity-insertion gap spacing, thereby easing use. More generally, she’s now replaced this initial UV lamp with a newer successor; the original device’s intensity apparently faded over time, eventually taking excessively long to work its drying magic.
Polymer processing
Speaking of drying (or if you prefer, curing), what’s so special about UV light? Over to a blog post at the Manicurist website for an explanation:
Whether LED or UV, these lamps emit ultraviolet (UV) rays that trigger a chemical reaction called “polymerization”. Under UV exposure, the molecules in the polish bond together to form a solid and durable film known as a “polymer network”.
UV curing is more broadly used in a variety of applications and industries, as Wikipedia notes:
UV curing (ultraviolet curing) is the process by which ultraviolet light initiates a photochemical reaction that generates a crosslinked network of polymers through radical polymerization or cationic polymerization. UV curing is adaptable to printing, coating, decorating, stereolithography, and in the assembly of a variety of products and materials. UV curing is a low-temperature, high speed, and solventless process as curing occurs via polymerization. Originally introduced in the 1960s, this technology has streamlined and increased automation in many industries in the manufacturing sector.
More generally, electrical engineers out there will likely be particularly interested, for example, in UV light’s key role in the photolithography process used to make printed circuit boards!
So why, if this is a UV lamp that’s supposedly emitting light beyond the visible spectrum, is its output discernible by the human visual system (along with my smartphone’s camera)?

(cool photo, eh?)
Part of the answer may be that the LEDs in the design aren’t true UV at all, but instead leverage the lower-cost alternative referred to as near-UV. Part of it may be that the output spectral plot is sufficiently broad to still-noticeably “leak” into the violet portion of the visible range. And part of it may be that, to reassure users that the device is “on” (thereby preventing lengthy periods of peering at “pure” UV light, with likely retinal-damage consequences), the LED manufacturer added a phosphor layer to additionally generate a visible light output. Hold that thought.
Power spec uncertainty
Last (but definitely not least), before diving in, what’s with that “80W” output claim? The device actually supports two different power output modes, 80W and “low-heat” 55W, user-selectable via one of the four topside switches. When I initially plugged the lamp in without the LEDs illuminated, my Kill A Watt electricity usage meter measured 1W of power consumption:

Switch the LEDs on, in low-output mode, and I got 12W:

And in “high” mode? 23W:

12W ≠ 55W. And 23W ≠ 80W. So, what gives? At first, I wasn’t confident that my Kill A Watt was measuring power consumption correctly. But then I looked at the “wall part” that powers the lamp (in the first image of the sequence that follows, as with subsequent images in this post, accompanied by a 0.75″, i.e., 19.1 mm diameter U.S. penny for size comparison purposes):








Let’s zoom in on that last one:

Unless my math’s totally whacked, 24V times 1.5A equals 36W: not 80W, much less anything higher (to account for conversion inefficiency). So again, what gives? Was Bolasen’s marketing team being flat-out deceptive? Maybe: my cynical side certainly resonates with that conclusion.
But at the end of the day, I’ve decided to give the company the benefit of the doubt and conclude that, just as LED light bulb manufacturers do in spec’ing their devices vs incandescent precursors, Bolasen is using fluorescent UV tube intensity equivalents in rating its LED-based UV device. Online references I’ve found equate the lumens brightness rating of a 20-plus watt LED to that of an 80W fluorescent tube. Granted, that’s for visible light, but perhaps the comparison holds in the ultraviolet band as well…regardless, let me know your thoughts in the comments!
Diving inside
My background-info pontification now concluded (thank goodness, right?), let’s get to tearing down, shall we? Here’s our patient:



Raise the transport handle!


FCC-certified? Really? Call me cynical (again):


The specs say 42 total “beads” (LEDs). That matches my count in the photo that follows:

Look closely and you’ll also see, among other things, five screw heads, which I’m wagering are our pathway inside, along with an array of passive ventilation holes. Here’s the 12-LED cluster at the top (when the device is in its normal operating orientation, that is):

and the cluster-of-six at the back:

Along each side are four more clusters-of-three, such as this one:

The ones at either end, i.e., straddling the lamp’s opening in its normal orientation, are special. The one at the right side (again, in normal orientation) also includes an IR (infrared) transmitter:

while the other additionally incorporates an IR receiver:

This, dear readers, is how the lamp implements the following function (quoting the original broken English on the Amazon product page):
Use the auto-sensor, it would turn on or off automatically when you put hand/foot in or out.

Insert your appendage (hand or foot, to be precise), and its presence breaks the infrared beam that normally traverses the transmitter-to-receiver gap in an uninterrupted fashion. Voila!
OK, let’s get those five screws outta there:

and with only a bit of remaining prying to do:

we’re in!

Although this original lamp may now be too slow to operate for my wife to tolerate, it still works. I’d therefore prefer to put it back together still fully functional and then donate it for someone else to use…or maybe I’ll keep it and use it to cure resin or…hey…make my own PCBs! Regardless, I’m keeping the internal wiring intact. Don’t worry, we’ll still be able to see its guts a’plenty. Let’s start with the inside of the top half of the chassis:

The power connector pops right out of its usual location:


Now for that PCB in the center:

At left is the two-wire connection that powers the LED array. At right is the power input. And the four-wire harness coming out the bottom feeds (and is fed by) the IR transmitter and receiver. The 14-lead IC in the upper right corner, PCB-labeled U1, is unmarked, alas, but is presumably the system “brains”. And at lower left is a P60NF03 n-channel MOSFET likely employed both for LED power switching and for variable voltage generation (for the “80W” and “55W” modes).
Flip the PCB over:
and what now emerges into view is the multi-digit seven-segment display along with the four-switch control cluster.
Beating the heat
Now for the inside of the other (lower) half. Wow, look at all those thermal-dissipating metal plates (operating in conjunction with the earlier-mentioned passive ventilation array)!

First off, here are the connections to the IR transmitter:
and IR receiver:
Now for the LED power distribution network. The two-wire harness coming from the PCB first routes to the three-LED plate that’s in the lower left, just below the six-LED plate, in the earlier overview photo:
From there it splits in two directions. The “upper” (for lack of a better word) span first routes to the aforementioned six-LED plate:
Then to a series of three mid-span three-LED plates:
And finally, to a three-LED plate at the end in proximity to the IR receiver:
The “lower” span also then cycles through its own set of three three-LED plates, the last of them alongside the IR transmitter, and terminates at the 12-LED cluster:
Dual-frequency LEDs
There’s one more aspect to this design that I want to make sure I highlight, which keen-eyed readers may have already noticed. Check out this closeup of one of the LED “beads”:
The yellow tint is reflective of the thin phosphor layer applied to the inside of the “bead” dome to assist in generating augmented visible light for user-operation-stupidity-prevention purposes. But peer underneath it…are there two die in there? Indeed, there are. I’d originally thought I was instead just looking at the LED’s leadframe structure:

but, in the comments to a teardown video of a different UV lamp by “Big Clive” (whose always-excellent work I’ve showcased before):
was an enlightening insight from “restorer19”:
I have the 6-led UV panel you did a video on years ago, from the same brand, and it likely uses the same LEDs – I’ve sacrificed once of the LED chips and an additional one of the phosphor domes/blobs. It appears to have two LED dies on each chip, one bonded with two wires to each end of the module, and one bonded directly downward with only one bond wire leading to it. The 2-wire die (presumably 405nm) lights a visible purple at a lower voltage (just under 3V), and the 1-wire die takes greater voltage to light up. The 1-wire die looks identical to the large one in a 365-nm LED flashlight I recently bought – the surface of the die itself seems to phosphoresce in white, and any color from the semiconductor itself is invisible. Looking at an individual LED module under magnification while powered at about 3.2V makes the two different dies obvious without being too bright to look at.
“Big Clive” had done an earlier teardown of a more elementary UV lamp containing these same dual-die LEDs (this video is, I believe, the same one that “restorer19” was referring to):
wherein he’d conjectured (at least as I interpreted his comments) that the white, i.e., full-visible-spectrum-when-illuminated, die inside might purely be for “powered on” visual user-reassurance purposes. However, a Google search using the phrase “dual die UV LED” produced an interesting (at least to me) AI Overview response:
A dual die UV LED refers to a UV-LED light source, often in nail lamps or curing devices, that combines two different LED chips (dies), usually at wavelengths like 365nm and 395nm, to effectively cure a wider range of UV-sensitive gels, including both traditional UV gels and newer LED-only gels, offering faster, more complete curing than single-wavelength lamps. These lamps are popular in nail salons for their versatility, providing professional results by ensuring all gel types, from base coats to builder gels, are fully hardened.
Key Features & Benefits
- Dual Wavelength: Uses two distinct UV wavelengths (e.g., 365nm for deeper penetration, 395nm for surface cure) for comprehensive curing.
- Broad Compatibility: Cures all gel types (UV, LED, builder, hard gels).
- Faster Curing: Significantly reduces curing time compared to older UV-only lamps.
- User-Friendly: Often includes auto-sensors, timers (15, 30, 60, 90s), and removable bases for pedicures.
- Professional Quality: Common in salons for consistent, high-quality results.
How it Works
Instead of a single type of UV emitter, a dual die lamp integrates two different LED chips within the same unit, each emitting at a specific UV wavelength, ensuring that various photoinitiators in different gels react and harden the product effectively.
In Summary: A “dual die” UV LED lamp is a modern, efficient solution for curing gel nails, combining multiple LED technologies for faster, more reliable results across the spectrum of gel products.
And, in finalizing this write-up just now prior to submitting it to Aalyia, I revisited the previously mentioned Amazon product page and noticed the following (bolded emphasis is mine):
Specifications:
- Timer: 10s/30s/60s/99s low heat mode
- Wattage: 80w(Max)
- Display: Digital Time Display
- Lamp beads: 42pcs Dual Dual Light Source
- Spectrum: 365nm+405nm
- Lifespan: 50,000H
- Voltage: 100V-240V 50Hz/60Hz
- Output: DC12V
- Lamp Size:235*223*102mm
- Ideal for: All nail gels
So, I’m guessing we now have our answer! In retrospect, I also realized that one of the earlier “stock” graphics referenced a “dual light source” and included an LED close-up revealing the dual die internal structure. That said, I’ll wrap up for now and await your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- UV-C LED achieves double the efficiency
- Considerations in the selection of UV LEDs for germicidal applications
- Teardown: What caused these CFL bulbs to fail?
- LED lighting teardowns: Five lighting designs that illuminate the future of lighting
Eddy current in focus: A rapid revisit

Eddy currents are not just textbook curiosities; they are the hidden loops that appear whenever metal meets a changing magnetic field. From DIY levitation tricks to clever braking systems, these swirling paths of electrons keep finding new ways to surprise and inspire.
In this rapid revisit, we will zoom in on the essentials, highlight a few practical pointers, and remind ourselves why this classic effect still merits a place in every innovator’s playbook.
Eddy currents: From losses to brakes to rice cookers
Eddy currents are closed loops of electrical current induced in conductors by a changing magnetic field, as described by Faraday’s law of induction. These currents circulate in planes perpendicular to the applied magnetic field.
By Lenz’s law, eddy currents generate their own magnetic field that opposes the change which created them. This opposition manifests as magnetic drag, Joule heating, and energy conversion when conductive materials are exposed to time-varying fields.
The interaction between the applied field and induced currents resists motion. A classic demonstration is a magnet falling slowly through a copper tube—its descent dampened by the opposing magnetic force. As eddy currents circulate, they dissipate energy as heat due to the conductor’s resistance. This loss is problematic in devices such as transformers, motors, and induction coils, where unwanted heating reduces efficiency.
At the same time, eddy currents enable useful applications. In magnetic braking systems, for example, a moving object’s kinetic energy is deliberately converted into heat, providing smooth, contactless deceleration.

Figure 1 A generic eddy current brake is shown with rotor eddy currents resisting motion. Source: Author
Eddy currents embody both challenge and opportunity. In power systems, they waste energy as heat and demand careful design measures such as laminated transformer cores or specialized alloys to minimize losses. Yet the same principle enables precise, contactless control in magnetic braking, induction heating, and nondestructive testing.
Léon Foucault discovered eddy currents in the early 1850s; he also demonstrated Earth’s rotation with the Foucault pendulum. From Foucault’s copper disk to today’s rice cookers and industrial drives, eddy currents illustrate how a single electromagnetic effect can hinder efficiency while powering innovation. Their discovery remains a landmark in the history of electromagnetism.
Eddy currents at work: Quick insights
On paper, eddy currents arise from changing magnetic fields. They form when a conductor moves through a magnetic field or when the field around a stationary conductor varies. In short, any change in the intensity or direction of the magnetic field can drive circulating currents. Their strength scales with the rate of flux change, the loop area, and the field’s orientation, while higher conductor resistivity weakens them.
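In symbols (standard electromagnetics, stated here only to anchor the scaling just described):

```latex
\varepsilon = -\frac{d\Phi_B}{dt}
\qquad
P_{\text{dissipated}} \sim \frac{\varepsilon^{2}}{R}
```

A faster flux change or a larger loop raises the induced EMF ε, while a higher-resistivity conductor presents a larger effective R and therefore dissipates less, matching the dependencies noted above.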
To grasp how this works, inertia makes a useful analogy. In classical mechanics, a moving body tends to keep moving, while a stationary one stays put. Electromagnetism shows a similar stubbornness: when a conductor encounters a changing magnetic field, it responds by generating an opposing flux through induction. That flux manifests as eddy currents. Picture them as invisible coils forming inside the conductor—the material itself acting like a “built-in electromagnet” that resists change.
A familiar example is the eddy current brake used in heavy vehicles and trains. These auxiliary brakes, often engaged on downhill runs, position electromagnets near a drum on the rotating axle. Once energized, the drum develops eddy currents that push back against the changing flux, creating drag. The beauty of this system lies in non-contact braking—no friction, no wear on drums or pads. Of course, the kinetic energy does not vanish; conservation of energy dictates it reemerges as Joule heating, dissipated as heat in the drum.
The same principle appears in everyday life. Induction cooktops and induction heating (IH) rice cookers rely on high-frequency currents in their coils to generate rapidly changing magnetic fields. These fields drive eddy currents in the conductive pot walls, producing Joule heat that cooks food directly and efficiently.
As a side note, eddy current brakes and electric retarders share the same physics but differ in role. An eddy current brake is a general device found in rail systems, roller coasters, or test rigs, providing smooth, non-contact braking. An eddy current electric/electromagnetic retarder, by contrast, is an auxiliary system integrated into heavy vehicles—buses, trucks, and coaches—to control speed on long descents.
Retarders ease the load on friction brakes, preventing overheating and wear, though they still demand cooling since induced currents generate substantial heat. In short, brakes emphasize stopping power, while retarders emphasize sustained drag torque for safe downhill control.

Figure 2 An electromagnetic retarder mounts mid-shaft and delivers non-contact braking for heavy vehicles. Source: Telma
Harnessing eddy currents in dynamometers
Dynamometers often rely on eddy current action behind the scenes to absorb and measure power. In an eddy current dynamometer, a rotating metallic disc or drum is subjected to a magnetic field; as the engine drives the disc, circulating currents are induced in the metal. These eddy currents create a resistive force proportional to speed, effectively loading the engine while converting mechanical energy into heat.
The dynamometer’s role is to provide a controlled, repeatable load while precisely measuring torque and power, enabling accurate evaluation of engine or motor performance. Their application domain spans automotive testing, industrial machinery evaluation, and research laboratories where reliable power measurement is essential.

Figure 3 An eddy current dynamometer, delivering full power at high rotation speeds, is designed for fast-rotating motors. Source: Magtrol
Eddy current sensors: From magnetic fields to motion insight
An eddy current sensor, often referred to as a gap sensor, operates by generating a high-frequency magnetic field through a coil embedded in the sensor head. When a conductive measuring object approaches this field, eddy currents are induced on its surface, altering the impedance of the sensor coil.
By detecting these impedance changes, the sensor translates variations in the probe-to-target gap into a precise relationship between displacement and output voltage. Their application fields span precision displacement measurement, vibration monitoring, and shaft run-out detection, with widespread use across the automobile, aerospace, and semiconductor industries.

Figure 4 An industrial-grade contactless proximity sensor measures position by interpreting eddy currents. Source: Messotron
Put another way, the eddy current method employs high-frequency magnetic fields generated by driving an alternating current through the coil in the sensor head. When a metallic target enters this field, electromagnetic induction causes magnetic flux to penetrate the object’s surface, producing circulating eddy currents parallel to that surface. These currents modify the coil’s impedance, and eddy current displacement sensors detect the resulting oscillation changes to measure distance.

Figure 5 Drawing illustrates the core mechanism of an eddy current displacement sensor. Source: Author
At this point, it’s important to distinguish between an eddy current probe and an eddy current sensor. The probe is the coil assembly that induces and detects eddy currents, typically used in non-destructive testing (NDT), while the sensor integrates the probe with electronics to deliver calibrated displacement or vibration signals in industrial applications.
Also note that the sensing field of a non-contact sensor’s probe engages the target across a defined area, known as the spot size. For accurate measurement, the target must be larger than this spot size; otherwise, special calibration is required.
Spot size is directly proportional to the probe’s diameter. In eddy-current sensors, the magnetic field fully surrounds the end of the probe, creating a comparatively large sensing field. As a result, the spot size extends to many times the diameter of the probe’s sensing coil.
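One more quantity worth keeping in mind when working with high-frequency probes is the standard depth of penetration, or skin depth, which sets how deep the induced currents flow. Here is a minimal C sketch using the textbook formula; the copper material constants and the 1-MHz drive frequency are illustrative assumptions, not values from any sensor above:

```c
#include <stdio.h>
#include <math.h>

/* Skin depth of an AC field in a plane conductor:
 * delta = sqrt(rho / (pi * f * mu)) */
int main(void)
{
    const double PI   = 3.141592653589793;
    const double rho  = 1.68e-8;    /* copper resistivity, ohm-m          */
    const double mu   = 4e-7 * PI;  /* permeability (copper: ~free space) */
    const double freq = 1.0e6;      /* assumed coil drive frequency, Hz   */

    double delta = sqrt(rho / (PI * freq * mu));
    printf("Skin depth in copper at %.1f MHz: %.0f um\n",
           freq / 1e6, delta * 1e6);  /* prints roughly 65 um */
    return 0;
}
```

Higher excitation frequencies confine the eddy currents closer to the target’s surface, which is why surface-flaw inspection favors them.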
Wrap-up: Bridging theory and practice in eddy currents
Time for a quick break, yet so many details remain in the fascinating world of eddy currents. I am not covering every nuance here because eddy current methods are broad and specialized, with deeper dives best reserved for dedicated sections. To anchor the essentials: eddy current examination is a nondestructive testing method based on electromagnetic induction.
When applied to detect surface-breaking flaws in components and welds, it’s known as surface eddy current testing. Specially designed probes are used for this inspection, with coils mounted near one end of a plastic housing. During inspection, the technician guides the coil end of the probe across the surface of the component, scanning for variations that reveal discontinuities.
Well, now switch on your eddy current soldering iron—or set up one yourself—and start doing something practical, like building your own probes, sensors, or experimental rigs. Hands-on exploration is the best way to connect theory with practice, and this is the perfect moment to make the leap from reading to making.
For curious makers, eddy current soldering irons are not just another tool; they are a gateway into experimenting with induction heating itself. A coil generates a rapidly changing magnetic field, inducing circulating currents in the conductive tip or sleeve. These eddy currents encounter resistance and dissipate energy as heat, delivering rapid warm-up and stable temperature exactly where it is needed.
Whether you pick up a ready-made station or build a DIY rig, you will be blending theory with practice in the most tangible way. It’s a perfect project to showcase how electromagnetic principles—Faraday’s law and Lenz’s law in action—can power real-world innovation.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Maxon: Inductive encoder is based on eddy currents
- The difference between inductive proximity, displacement, and eddy-current sensors
Safe operating area

Any semiconductor has limits on how much voltage, how much current, and for how much time combinations of voltage and current can be supported in normal usage. Sometimes that information is provided as part of the device’s datasheet, and sometimes that information is NOT provided. In either case, though, there ARE limits which MUST be observed.
Any switching semiconductor device must address voltage and current issues. Drive considerations aside, from the standpoint of “safe operating area” or SOA, the voltage Vds and the current Ids of a power MOSFET and the Vce and Ic of a bipolar transistor are at issue.
Please consider the following unwisely designed circuit in Figure 1.
Figure 1 A badly designed switching circuit requiring the 2N2222 at Q1 to repeatedly dump the charge of capacitor C1 of 0.01 µF. Source: John Dunn
What we’ve done wrong here is require the 2N2222 at Q1 to repeatedly dump the charge of capacitor C1 of 0.01 µF. The Vce and the Ic versus time burdens on Q1 are as shown. The current peak of nearly 500 mA is pretty big, and to our dismay, it occurs while the value of Vce is still fairly high, which means that there is a substantial peak power dissipation demand placed on Q1.
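To put a rough number on that, suppose, purely for illustration, that the 500-mA peak coincides with a collector-emitter voltage of around 10 V:

```latex
P_{pk} = V_{CE} \, I_C \approx 10\ \text{V} \times 0.5\ \text{A} = 5\ \text{W}
```

Even as a brief transient, that is several times the 2N2222’s 1.2-W peak power rating (discussed below), which is why the shape of the voltage-current excursion, and not just its endpoints, matters.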
Having constructed a Lissajous pattern of Vce versus Ic as shown in Figure 2, we process that pattern.

Figure 2 The voltage versus current Lissajous pattern for Q1. Source: John Dunn
Just one comment about obtaining that Lissajous pattern. The oscilloscope simulations in the Multisim-SPICE I was using do not support “x” versus “y” capability, and therefore cannot provide the Lissajous pattern. I made the pattern you see here by reading out the voltage and current values at each time step of the oscilloscope display and then plotting them using GWBASIC. There were 240 datums for each, a total of 480 readings, which were pretty tedious to acquire. Ordinarily, I can’t concentrate on work and listen to music at the same time, but this time, listening to some Petula Clark recordings through all of this did help to ease the monotony.
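For anyone repeating the exercise, the transcription step automates readily. Here is a minimal C sketch that converts an exported CSV of time, Vce, and Ic samples into X-Y pairs for plotting; the file name and column order are assumptions about your scope’s export format:

```c
#include <stdio.h>

int main(void)
{
    FILE *in = fopen("q1_waveforms.csv", "r");  /* hypothetical export */
    if (!in) { perror("q1_waveforms.csv"); return 1; }

    double t, vce, ic;
    /* Emit "Vce Ic" pairs; feed them to gnuplot or a spreadsheet. */
    while (fscanf(in, "%lf,%lf,%lf", &t, &vce, &ic) == 3)
        printf("%g %g\n", vce, ic);

    fclose(in);
    return 0;
}
```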
In all my years of acquaintance with the 2N2222, I have never seen any specification or any datasheet that presented the SOA boundaries for that device. In fact, I’ve never seen the SOA boundaries for any TO-18 packaged device. In the TO-5 and TO-39 packages, the one and only time I have ever come across SOA boundary information was for the 2N3053 and 2N3053A, and even today, some datasheets omit that information.
As a result, we just have to make do with what we’ve got, which for now is this partial reconstruction of the 2N3053 and 2N3053A SOA chart taken from a very old datasheet from RCA that I stashed away long ago (Figure 3).

Figure 3 Safe operating area reconstruction of the 2N3053 and 2N3053A SOA chart taken from a very old datasheet from RCA. Source: John Dunn
We replot the Vce versus Ic data using logarithmic scaling, and then we overlay that result with the SOA boundaries of our NPN, but we encounter a difficulty (Figure 4).

Figure 4 SOA examination using logarithmic scaling. Source: John Dunn
The 2N2222 has a peak power rating of 1.2 watts, while the 2N2219, a first cousin to the 2N2222, has a peak power rating of 3 watts, versus a 7 watts rating for the 2N3053. I would therefore imagine that the 2N2222 SOA boundaries are quite a bit lower than those of the 2N3053. We note that the SOA curve of Q1 operating in this circuit moves outside of the DC operating boundary for the 2N3053 and thus, in all likelihood, it moves well outside of the 2N2222 equivalent limits.
Voltage and current excursions toward the upper right of this diagram are NOT a good thing.
The 2N2222 as used here can well be expected to fail, maybe sooner, maybe later, but it is set up for eventual calamity. Regardless of other factors that may apply to this design, remedial SOA measures should be considered.
The first is to reduce the capacitance of C1 (Figure 5).

Figure 5 The effects of reducing the capacitance of C1. Source: John Dunn
Using a smaller value of C1, or perhaps using no C1 at all, will lower the peak collector current and will make the switching events occur more quickly. This will take us away from the upper right corner of the SOA plot, and from that standpoint, this is a very good thing to do.
Adding R3, as shown in Figure 6, can also reduce the peak collector current.

Figure 6 The effects of including a collector resistance. Source: John Dunn
Although using R3 will slow down the C1 discharge rate for each discharge event, doing so will keep the peak collector current down, and that is a desirable SOA outcome.
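To first order, taking VC1 as the capacitor voltage at the instant Q1 turns on (the circuit’s exact values aren’t restated here, so treat these symbolically):

```latex
I_{pk} \approx \frac{V_{C1}}{R_3}, \qquad \tau = R_3 \, C_1
```

R3 caps the peak collector current directly, at the cost of stretching each discharge event over a few time constants τ.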
If, for some reason, C1 has to be there, omitting R3 is not a good idea.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Safe operating area of linear MOSFETs extended
- SOA, Watts Up, Transistor?: A Mystery of Self-Destructing MOSFETs
- The application guides the MOSFET selection process
- Practical Considerations of Trench MOSFET Stability
A tutorial on instrumentation amplifier boundary plots—Part 2

The first installment of this series introduced the boundary plot, an often-misunderstood plot found in instrumentation amplifier (IA) datasheets. It also discussed various IA topologies: traditional three operational amplifier (op amp), two op amp, two op amp with a gain stage, current mirror, current feedback with super-beta transistors, and indirect current feedback.
Part 1 also included derivations of the internal node equations and transfer function of a traditional three-op-amp IA.
The second installment will introduce the input common-mode and output swing limitations of op amps, which are the fundamental building blocks of IAs. Modifying the internal node equations from Part 1 yields equations that represent each op amp’s input common-mode and output swing limitation at the output of the IA as a function of the device’s input common-mode voltage.
The article will also examine a generic boundary plot in detail and compare it to plots from device datasheets to corroborate the theory.
Op-amp limitations
For an op amp to output a linear voltage, the input signal must be within the device’s input common-mode range specification (VCM) and the output (VOUT) must be within the device’s output swing range specification. These ranges depend on the supply voltages, V+ and V– (Figure 1).

Figure 1 Op-amp input common-mode (green) and output swing (red) ranges depend on supplies. Source: Texas Instruments
Figure 2 depicts the datasheet specifications and corresponding VCM and VOUT ranges for an op amp, such as TI’s OPA188, given a ±15V supply. For this device, the output swing is more restrictive than the input common-mode voltage range.

Figure 2 Op-amp VCM and VOUT ranges are shown for a ±15 V supply of the OPA188 op amp. Source: Texas Instruments
The boundary plot
The boundary plot for an IA is a representation of all internal op-amp input common-mode and output swing limitations. Figure 3 depicts a boundary plot. Operating outside the boundaries of the plot violates at least one input common-mode or output swing limitation of the internal amplifiers. Depending on the severity of the violation, the output waveform may depict anything from minor distortion to severe clipping.

Figure 3 Here is how an IA boundary plot looks for the INA188 instrumentation amplifier. Source: Texas Instruments
This plot is specified for a particular supply voltage (VS = ±15 V), reference voltage (VREF = 0 V), and gain of 1 V/V.
Figure 4 illustrates the linear output range given two different input common-mode voltages. For example, if the common-mode input of the IA is 8 V, the output will be valid only from approximately –11 V to +11 V. If the common-mode input is mid supply (0 V), however, an output swing of ±14.78 V is available.

Figure 4 Output voltage range is shown for different common-mode voltages. Source: Texas Instruments
Notice that the VCM (blue arrows) ranges from –15 V to approximately +13.5 V. Both the mid-supply output swing and VCM ranges are consistent with the op-amp ranges depicted in Figure 2.
Each line in the boundary plot corresponds to a limitation—either VCM or VOUT—of one of the three internal amplifiers. Therefore, it’s necessary to review the internal node equations first derived in Part 1. Figure 5 depicts the standard three-op-amp IA, while Equations 1 through 6 define the voltage at each internal node.

Figure 5 Here is how a three-op-amp IA looks. Source: Texas Instruments
(1) VIA1 = VCM – VD/2
(2) VIA2 = VCM + VD/2
(3) VOA1 = VCM – GIS × VD/2
(4) VOA2 = VCM + GIS × VD/2
(5) VIA3 = [R2/(R1+R2)] × VOA2 + [R1/(R1+R2)] × VREF
(6) VOUT = GIS × (R2/R1) × VD + VREF
In order to plot the node equation limits on a graph with VCM and VOUT axes, solve Equation 6 for VD, as shown in Equation 7:
(7) VD = (VOUT – VREF) / [GIS × (R2/R1)]
Substituting Equation 7 for VD in Equations 1 through 6 and solving for VOUT yields Equations 8 through 13. These equations represent each amplifier’s input common-mode (VIA) and output (VOA) limitation at the output of the IA, as a function of the device’s input common-mode voltage.
(8) VOUT = 2 × GIS × (R2/R1) × (VCM – VIA1) + VREF
(9) VOUT = 2 × GIS × (R2/R1) × (VIA2 – VCM) + VREF
(10) VOUT = [2(R1+R2)/R1] × VIA3 – 2 × (R2/R1) × VCM – VREF
(11) VOUT = 2 × (R2/R1) × (VCM – VOA1) + VREF
(12) VOUT = 2 × (R2/R1) × (VOA2 – VCM) + VREF
(13) VOUT = VOA3
One important observation from Equations 8 and 9 is that the IA limitations arising from the common-mode range of A1 and A2 depend on the gain of the input stage, GIS. The output swing limitations of A1 and A2 do not depend on GIS, however, as shown by Equations 11 and 12.
Plotting each of these equations for the minimum and maximum input common-mode and output swing limitations for each op amp (A1, A2 and A3) yields the boundary plot. Figure 6 depicts a generic boundary plot. The linear operation of the IA is the interior of all plotted equations.

Figure 6 Here is an example of a generic boundary plot. Source: Texas Instruments
The dotted lines in Figure 6 represent the input common-mode limitations for A1 (blue) and A2 (red). Notice that the slope of the dotted lines depends on GIS, which is consistent with Equations 8 and 9.
Solid lines represent the output swing limitations for A1 (blue), A2 (red) and A3 (green). The slope of these lines does not depend on GIS, as shown by Equations 11 through 13.
Figure 6 doesn’t show the line for VIA3 because the R2/R1 voltage divider attenuates the output of A2; A2 typically reaches the output swing limitation before violating A3’s input common-mode range.
The lines plotted in quadrants one and two (positive common-mode voltages) use the maximum input common-mode and output swing limits for A1 and A2, whereas the lines plotted in quadrants three and four (negative common-mode voltages) use the minimum input common-mode and output swing limits.
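To make the construction concrete, the following C sketch evaluates the Equation 8 through 13 limit lines at a single common-mode voltage and reports the resulting linear output range. The op-amp limits used are illustrative values similar to the Figure 2 ranges, not datasheet numbers for any particular device:

```c
#include <stdio.h>

static double min2(double a, double b) { return a < b ? a : b; }
static double max2(double a, double b) { return a > b ? a : b; }

int main(void)
{
    const double GIS = 1.0, R2_R1 = 1.0, VREF = 0.0; /* G = 1 V/V      */
    const double VIA_MIN = -15.0, VIA_MAX = 13.5;    /* input CM range  */
    const double VOA_MIN = -14.8, VOA_MAX = 14.8;    /* output swing    */
    const double vcm = 8.0;                          /* operating point */

    /* Upper bound on VOUT: the most restrictive boundary line */
    double hi = min2(2*GIS*R2_R1*(vcm - VIA_MIN) + VREF,  /* Eq. 8  */
               min2(2*GIS*R2_R1*(VIA_MAX - vcm) + VREF,   /* Eq. 9  */
               min2(2*R2_R1*(vcm - VOA_MIN) + VREF,       /* Eq. 11 */
               min2(2*R2_R1*(VOA_MAX - vcm) + VREF,       /* Eq. 12 */
                    VOA_MAX))));                          /* Eq. 13 */
    /* Lower bound: the same lines evaluated at the opposite limits */
    double lo = max2(2*GIS*R2_R1*(vcm - VIA_MAX) + VREF,
               max2(2*GIS*R2_R1*(VIA_MIN - vcm) + VREF,
               max2(2*R2_R1*(vcm - VOA_MAX) + VREF,
               max2(2*R2_R1*(VOA_MIN - vcm) + VREF,
                    VOA_MIN))));

    printf("VCM = %.1f V: linear VOUT spans %.1f V to %.1f V\n",
           vcm, lo, hi);   /* roughly -11 V to +11 V, as in Figure 4 */
    return 0;
}
```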
Considering only positive common-mode voltages from Figure 6, Figure 7 depicts the linear operating region of the IA when G = 1 V/V. In this example, the input common-mode limitation of A1 and A2 is more restrictive than the output swing.

Figure 7 The input common-mode range limit of A1 and A2 defines the linear operation region when G = 1 V/V. Source: Texas Instruments
Increasing the gain of the device changes the slope of VIA1 and VIA2 (Figure 8). Now both the input common-mode and output swing limitations define the linear operating region.

Figure 8 The input common-mode range and output swing limits of A1 and A2 define the linear operating range when G > 1 V/V. Source: Texas Instruments
Regardless of gain, the output swing always limits the linear operating region when it’s more restrictive than the input common-mode limit (Figure 9).

Figure 9 The output swing limits of A1 and A2 define the linear operating region independent of gain. Source: Texas Instruments
Datasheet examples
Figure 10 illustrates the boundary plot from the INA111 datasheet. Notice that the output swing limits of A1 and A2 define the linear operating region. Therefore, the output swing limitations of A1 and A2 must be equal to or more restrictive than the input common-mode limitations.

Figure 10 Boundary plot for the INA111 instrumentation amplifier shows output swing limitations. Source: Texas Instruments
Figure 11 depicts the boundary plot from the INA121 datasheet. Notice that the linear operating region changes with gain. At G = 1 V/V, it is the input common-mode range that limits the linear operating region. As gain increases, however, both the output swing and input common-mode limitations bound the linear operating region (Figure 8).

Figure 11 Boundary plot is shown for the INA121 instrumentation amplifier. Source: Texas Instruments
Third installment coming
The third installment of this series will explain how to use these equations and concepts to develop a tool that automates the drawing of boundary plots. This tool enables you to adjust variables such as supply voltage, reference voltage, and gain to ensure linear operation for your application.
Peter Semig is an applications manager in the Precision Signal Conditioning group at TI. He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.
Related Content
- Instrumentation amplifier input-circuit strategies
- Discrete vs. integrated instrumentation amplifiers
- New Instrumentation Amplifier Makes Sensing Easy
- Instrumentation amplifier VCM vs VOUT plots: part 1
- Instrumentation amplifier VCM vs. VOUT plots: part 2
AI-powered MCU elevates vehicle intelligence

The Stellar P3E automotive MCU from ST features built-in AI acceleration, enabling real-time AI applications at the edge. Designed for the next generation of software-defined vehicles, it simplifies multifunction integration, supporting X-in-1 electronic control units from hybrid/EV systems to body zonal architectures.

According to ST, the Stellar P3E is the first automotive MCU with an embedded neural network accelerator. Its Neural-ART accelerator, a dedicated neural processing unit (NPU) with an advanced data-flow architecture, offloads AI workloads from the main cores, speeding up inference execution and delivering real-time, AI-based virtual sensing.
The MCU incorporates 500-MHz Arm Cortex-R52+ cores, delivering a CoreMark score exceeding 8000 points. Its split-lock feature lets designers balance functional safety with peak performance, while smart low-power modes go beyond conventional standby. The device also includes extensible xMemory, with up to twice the density of standard embedded flash, plus rich I/O interfaces optimized for advanced motor control.
Stellar P3E production is scheduled to begin in the fourth quarter of 2026.
Gate drivers emulate optocoupler inputs

Single-channel isolated gate drivers in the 1ED301xMC121 series from Infineon are pin-compatible replacements for optocoupler-based designs. They replicate optocoupler input characteristics, enabling drop-in use without control circuit changes, while using non-optical isolation internally to deliver higher CMTI and improved switching performance for SiC applications.

Their opto-emulator input stage uses two pins and integrates reverse voltage blocking, forward voltage clamping, and an isolated signal transmitter. With CMTI exceeding 300 kV/µs, 40-ns propagation delay, and 10-ns part-to-part matching, the devices deliver robust, high-speed switching performance.
The series includes three variants—1ED3010, 1ED3011, and 1ED3012—supporting Si and SiC MOSFETs as well as IGBTs. Each delivers up to 6.5 A of output current to drive power modules and parallel switch configurations in motor drives, solar inverters, EV chargers, and energy storage systems. The drivers differ in UVLO thresholds: 8.5 V, 11 V, and 12.5 V for the 1ED3010, 1ED3011, and 1ED3012, respectively.
The 1ED3010MC121, 1ED3011MC121, and 1ED3012MC121 drivers are available in CTI 600, 6-pin DSO packages with more than 8 mm of creepage and clearance.
IC enables precise current sensing in fast control loops

Allegro Microsystems’ ACS37017 Hall-effect current sensor achieves 0.55% typical sensitivity error across temperature and lifetime. High accuracy, a 750‑kHz bandwidth, and a 1‑µs typical response time make the ACS37017 suitable for demanding control loops in automotive and industrial high-voltage power conversion.

Unlike conventional sensors whose accuracy suffers from drift, the ACS37017 delivers long-term stability through a proprietary compensation architecture. This technology maintains precise measurements, ensuring control loops remain stable and efficient throughout the operating life of the vehicle or power supply.
The ACS37017 features an integrated non-ratiometric voltage reference, simplifying system architecture by eliminating the need for external precision reference components. This integration reduces BOM costs, saves board space, and removes a major source of system-level noise and error.
The high-accuracy ACS37017 expands Allegro’s current sensor portfolio, complementing the ACS37100 (optimized for speed) and the ACS37200 (optimized for power density). Request the preliminary datasheet and engineering samples on the ACS37017 product page.
Microchip empowers real-time edge AI

Microchip provides a full-stack edge AI platform for developing and deploying production-ready applications on its MCUs and MPUs. These devices operate at the network edge, close to sensors and actuators, enabling deterministic, real-time decision-making. Processing data locally within embedded systems reduces latency and improves security by limiting cloud connectivity.

The full-stack application portfolio includes pretrained, production-ready models and application code that can be modified, extended, and deployed across target environments. Development and optimization are performed using Microchip’s embedded software and ML toolchains, as well as partner ecosystem tools. Edge AI applications include:
- AI-based detection and classification of electrical arc faults using signal analysis
- Condition monitoring and equipment health assessment for predictive maintenance
- On-device facial recognition with liveness detection for secure identity verification
- Keyword spotting for consumer, industrial, and automotive command-and-control interfaces
Microchip is working with customers deploying its edge AI solutions, providing model training guidance and workflow integration across the development cycle. The company is also collaborating with ecosystem partners to expand available software and deployment options. For more information, visit the Microchip Edge AI page.
AI agent automates front-end chip workflows

Cadence has launched the ChipStack AI Super Agent, an agentic AI solution for front-end silicon design and verification. The platform automates key design and test workflows—including coding, test plan creation, regression testing, debugging, and issue resolution—offering significant productivity gains for chip development teams. It leverages multiple AI agents that work alongside Cadence’s existing EDA tools and AI-based optimization solutions.

The ChipStack AI Super Agent supports both cloud-based and on-premises AI models, including NVIDIA NeMo models that can be customized for specific workflows, as well as OpenAI GPT. By combining agentic AI orchestration with established simulation, verification, and AI-assistant tools, the platform streamlines complex semiconductor workflows.
Early deployments at leading semiconductor companies have demonstrated measurable reductions in verification time and improvements in workflow efficiency. The platform is currently available in early access for customers looking to integrate AI-driven automation into front-end chip design and verification processes.
Additional information about the ChipStack AI Super Agent can be found on the Cadence AI for Design page.
Wearables for health analysis: A gratefulness-inducing personal experience

What should you do if your wearable device tells you something’s amiss health-wise, but you feel fine? With this engineer’s experience as a guide, believe the tech and get yourself checked.
Mid-November was…umm…interesting. After nearly two days with an elevated heart rate, which I later realized was “enhanced” by cardiac arrhythmia, I ended up overnighting at a local hospital for testing, medication, procedures, and observation. But if not for my wearable devices, I never would have known I was having problems, to my potentially severe detriment.
I felt fine the entire time; the repeated alerts coming from my smart watch and smart ring were my sole indication to seek medical attention. I’ve conceptually discussed the topic of wearables for health monitoring plenty of times in the past. Now, however, it’s become deeply personal.
Late-night, all-night alerts
Sunday evening, November 16, 2025, my Pixel Watch smartwatch began periodically alerting me to an abnormally high heart rate. As you can see from the archived reports from Fitbit (the few-hour data gaps each day reflect when the Pixel Watch is on the charger instead of my wrist):
and my Oura Ring 4:



for the prior two days, my normal sleeping heart rate was in the low-to-mid 40s bpm (beats per minute) range. However, during the November 16-to-17 overnight cycle, both wearable devices reported that I was spiking into the mid-140s, along with a more general bpm elevation versus the norm:

By Monday evening, I was sufficiently concerned that I shared with my wife what was going on. She recommended that in addition to continued monitoring of my pulse rate and trend, I should also use the ECG (i.e., EKG, for electrocardiogram) app that was built into her Apple Watch Ultra. I first checked to see whether there was a similar app on my Pixel Watch. And indeed, there was: Fitbit ECG. A good overview video is embedded within some additional product documentation:
Here’s an example displayed results screenshot directly from my watch, post-hospital visit, when my heart was once again thankfully beating normally:

I didn’t think to capture screenshots that Monday night—my thoughts were admittedly on other, more serious matters—but here’s a link to the Fitbit-generated November 17 evening report as a PDF, and here’s the captured graphic:

The average bpm was 110. And the report summary? “Atrial Fibrillation: Your heart rhythm shows signs of atrial fibrillation (AFib), an irregular heart rhythm.”
The next morning (PDF, again), when I re-did the test:


my average bpm was now 140. And the conclusion? “Inconclusive high heart rate: If your heart rate is over 120 beats per minute, the ECG app can’t assess your heart rhythm.”
The data was even more disconcerting this time, and the overall trend was in a discouraging direction. I promptly made an emergency appointment for that same afternoon with my doctor. She ran an ECG on the office equipment, whose results closely (and impressively so) mirrored those from my Pixel Watch. Then she told me to head directly to the closest hospital; had my wife not been there to drive me, I probably would have been transported in an ambulance.
Thankfully, as you may have already noticed from the above graphs, after bouts of both atrial flutter and fibrillation, my heart rate began to return to its natural rhythm by late that same evening. Although the Pixel Watch battery had died by ~6 am on Wednesday morning, my recovery was already well underway:
and the Oura Ring kept chugging along to document the normal heartbeat restoration process:

I was discharged on Wednesday afternoon with medication in-hand, along with instructions to make a follow-up appointment with the cardiologist I’d first met at the hospital emergency room. But the “excitement” wasn’t yet complete. The next morning, my Pixel Watch started yelling at me again, this time because my heart rate was too low:

My normal resting heart rate when awake is in the low-to-mid 50s. But now it was ~10 points below that. I had an inkling that the root cause might be a too-high medication dose, and a quick call to the doctor confirmed my suspicion. Splitting each tablet in two got things back to normal:


As I write this, I’m nearing the end of a 30-day period wearing a cardiac monitor, a quite cool device that I’ll detail in an upcoming blog post. My next (and ideally last) cardiologist appointment is a month away; I’m hopeful that this arrhythmia event was a one-time fluke.
Regardless, my unplanned hospital visit, specifically the circumstances that prompted it, was more than a bit of a wakeup call for this former ultramarathoner and broader fitness activity aficionado (admittedly a few years and a few pounds ago). And that said, I’m now a lifelong devotee and advocate of smart watches, smart rings, and other health monitoring wearables as effective adjuncts to traditional symptoms which, as my case study exemplifies, might not even manifest in response to an emerging condition, and which you’ll notice only if you’re paying sufficient ongoing attention to your body.
Thoughts on what I’ve shared today? As always, please post ‘em in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Wearable trends: a personal perspective
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- The Oura Ring 4: Does “one more” deliver much (if any) more?
How to design a digital-controlled PFC, Part 2

In Part 1 of this article series, I explained the system block diagram and each of the modules of digital control. In this second installment, I’ll talk about how to write firmware to implement average current-mode control.
Average current-mode control
Average current-mode control, as shown in Figure 1, is common in continuous-conduction-mode (CCM) power factor correction (PFC). It has two loops: a voltage loop that works as an outer loop and a current loop that works as an inner loop. The voltage loop regulates the PFC output voltage (VOUT) and provides current commands to the current loop. The current loop forces the inductor current to follow its reference, which is modulated by the AC input voltage.
Figure 1 Average current-mode control is common in CCM PFC, where a voltage loop regulates the PFC output voltage and provides current commands to the current loop. Source: Texas Instruments
Normalization
Normalizing all of the signals in Figure 1 makes it possible to handle different signal scales and prevents calculations from overflowing.
For VOUT, VAC, and IL, multiply the analog-to-digital converter (ADC) reading by a factor of 1/4096 (assuming a 12-bit ADC).
For VREF, multiply its setpoint by the factor below (assuming a 3.3-V ADC reference):

VREF(normalized) = VREF × [R2/(R1+R2)] × (1/3.3)
where R1 and R2 are the resistors used in Figure 4 from Part 1 of this article series.
After normalization, all of the signals are in the range of (–1, +1). The compensator GI output d is in the range of (0, +1), where 0 means 0% duty and 1 means 100% duty.
Digital voltage-loop implementation
As shown in Figure 1, an ADC senses VOUT for comparison to VREF. Compensator GV processes the error signal, which is usually a proportional integral (PI) compensator, as I mentioned in Part 1. The output of this PI compensator will become part of the current reference calculations.
VOUT carries a ripple at double the line frequency, which couples into the current reference and degrades total harmonic distortion (THD). To reduce this ripple effect, set the PFC voltage-loop bandwidth much lower than the AC line frequency; for example, around 10 Hz. This low voltage-loop bandwidth will cause VOUT to dip too much when a heavy load is applied, however.
Meeting the load transient response requirement will require a nonlinear voltage loop. When the voltage error is small, use a small Kp, Ki gain. When the error exceeds a threshold, using a larger Kp, Ki gain will rapidly bring VOUT back to normal. Figure 2 shows a C code example for this nonlinear voltage loop.

Figure 2 C code example for this nonlinear voltage-loop gain. Source: Texas Instruments
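Since gain scheduling like this is easier to read than to describe, here is a minimal C sketch of the idea, assuming normalized signals; the gains and the error threshold are placeholders rather than values from a TI reference design:

```c
/* Gain-scheduled PI voltage loop: small gains in steady state, larger
 * gains when the error exceeds a threshold. All values illustrative. */
typedef struct { float integ; } pi_state_t;

float pfc_voltage_loop(pi_state_t *s, float vref_n, float vout_n)
{
    float err = vref_n - vout_n;        /* normalized error            */
    float kp  = 0.5f, ki = 0.01f;       /* low-bandwidth default gains */

    if (err > 0.05f || err < -0.05f) {  /* large transient detected    */
        kp = 2.0f;                      /* larger gains pull VOUT back */
        ki = 0.08f;                     /* to normal quickly           */
    }

    s->integ += ki * err;               /* integrator with anti-windup */
    if (s->integ >  1.0f) s->integ =  1.0f;
    if (s->integ < -1.0f) s->integ = -1.0f;

    return kp * err + s->integ;         /* feeds the IREF calculation  */
}
```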
Digital current-loop implementation takes 3 steps:
Step 1: Calculating the current reference
As shown in Figure 1, Equation 3 calculates the current-loop reference, IREF:
IREF = (A × C) / B
where A is the voltage-loop output, C is the AC input voltage, and B is the square of the AC root-mean-square (RMS) voltage.
Subtracting the AC neutral-measured voltage from the AC line-measured voltage yields the AC input voltage (Equation 4 and Figure 3):

VAC = VLINE – VNEUTRAL

Figure 3 VAC calculated by subtracting AC neutral-measured voltage from AC line-measured voltage. Source: Texas Instruments
Equation 5 defines the RMS value as:

VRMS = √[(1/T) ∫₀ᵀ VAC²(t) dt]
With Equation 6 in discrete format:
VRMS² = (1/N) × Σ V(n)², n = 1 … N
where V(n) represents each ADC sample, and N is the total number of samples in one AC cycle.
After sampling VAC at a fixed rate, each sample is squared and accumulated over one AC cycle. Dividing the accumulated sum by the number of samples in that cycle yields the square of the RMS value.
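In firmware, that accumulation is a few lines per sample. The sketch below assumes a zero-crossing detector elsewhere asserts cycle_done once per AC period:

```c
/* Per-sample RMS-squared accumulation (Equation 6). */
float vac_rms_squared(float vac_n, int cycle_done)
{
    static float acc;      /* running sum of squared samples           */
    static int   n;        /* samples accumulated this AC cycle        */
    static float vrms_sq;  /* latched result: square of the RMS value  */

    acc += vac_n * vac_n;
    n++;

    if (cycle_done && n > 0) {
        vrms_sq = acc / (float)n;
        acc = 0.0f;
        n = 0;
    }
    return vrms_sq;        /* the "B" term in Equation 3               */
}
```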
In steady state, you can treat both voltage-loop output A and the square of VAC RMS value B as constant; thus, only C (VAC) modulates IREF. Since VAC is sinusoidal, IREF is also sinusoidal (Figure 4).

Figure 4 Sinusoidal current reference IREF due to sinusoidal VAC. Source: Texas Instruments
Step 2: Calculating the current feedback signal
If you compare the shape of the Hall-effect sensor output in Figure 5 from Part 1 and IREF in Figure 4 from this installment, they have the same shape. The only difference is that the Hall-effect sensor output has a DC offset; therefore, it cannot be used directly as the feedback signal. You must remove this DC offset before closing the loop.
Figure 5 Calculating the current feedback signal. Source: Texas Instruments
Also, the normalized Hall-effect sensor output is between (0, +1); after subtracting the DC offset, its magnitude becomes (–0.5, +0.5). To maintain the (–1, +1) normalization range, multiply it by 2, as shown in Equation 7 and Figure 5:
IFB = 2 × (IHALL – 0.5)
Step 3: Closing the current loop
Now that you have both the current reference and feedback signal, let’s close the loop. During the positive AC cycle, the control loop has standard negative feedback control. Use Equation 8 to calculate the error going to the control loop:
error = IREF – IFB
During the negative AC cycle, the higher the inductor current, the lower the value of the Hall-effect sensor output; thus, the control loop needs to change from negative feedback to positive feedback. Use Equation 9 to calculate the error going to the control loop:
error = IFB – IREF
Compensator GI processes the error signal, which is usually a PI compensator, as mentioned in Part 1. Sending the output of this PI compensator to the pulse-width modulation (PWM) module will generate the corresponding PWM signals. During a positive cycle, Q2 is the boost switch and controlled by D; Q1 is the synchronous switch and controlled by 1-D. Q4 remains on and Q3 remains off for the whole positive AC half cycle. During a negative cycle, the function of Q1 and Q2 swaps: Q1 becomes the boost switch controlled by D, while Q2 works as a synchronous switch controlled by 1-D. Q3 remains on, and Q4 remains off for the whole negative AC half cycle.
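A sketch of that half-cycle role swap, with placeholder driver calls standing in for whatever PWM/GPIO API the chosen MCU provides, looks like this:

```c
enum sw { Q1, Q2, Q3, Q4 };
void pwm_set_duty(enum sw q, float duty);  /* placeholder prototypes */
void gpio_write(enum sw q, int on);

void pfc_apply_duty(float d, int positive_half)
{
    if (positive_half) {
        pwm_set_duty(Q2, d);          /* Q2: boost switch             */
        pwm_set_duty(Q1, 1.0f - d);   /* Q1: synchronous switch       */
        gpio_write(Q4, 1);            /* line-frequency leg           */
        gpio_write(Q3, 0);
    } else {
        pwm_set_duty(Q1, d);          /* roles swap in the negative   */
        pwm_set_duty(Q2, 1.0f - d);   /* half cycle                   */
        gpio_write(Q3, 1);
        gpio_write(Q4, 0);
    }
}
```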
Loop tuning
Tuning a PFC control loop is similar to doing so in an analog PFC design, with the exception that here you need to tune Kp, Ki instead of playing pole-zero. In general, Kp determines how fast the system responds. A higher Kp makes the system more sensitive, but a Kp value that’s too high can cause oscillations.
Ki removes steady-state errors. A higher Ki removes steady-state errors more quickly, but can lead to instability.
It is possible to tune PI manually through trial and error – here is one such tuning procedure:
- Set Kp, Ki to zero.
- Gradually increase Kp until the system’s output starts to oscillate around the setpoint.
- Set Kp to approximately half the value that caused the oscillations.
- Slowly increase Ki to eliminate any remaining steady-state errors, but be careful not to reintroduce oscillations.
- Make small, incremental adjustments to each parameter to achieve the intended system performance.
Knowing the PFC Bode plot makes loop tuning much easier; see reference [1] for a PFC tuning example. One advantage of a digital controller is that it can measure the Bode plot by itself. For example, the Texas Instruments Software Frequency Response Analyzer (SFRA) enables you to quickly measure the frequency response of your digital power converter [2]. The SFRA library contains software functions that inject a frequency into the control loop and measure the response of the system. This process provides the plant frequency response characteristics and the open-loop gain frequency response of the closed-loop system. You can then view the plant and open-loop gain frequency response on a PC-based graphic user interface, as shown in Figure 6. All of the frequency response data is exportable to a CSV file or Microsoft Excel spreadsheet, which you can then use to design the compensation loop.

Figure 6 The Texas Instruments SRFA tool allows for the quick frequency response measurement of your power converter. Source: Texas Instruments
System protection
You can implement system protection through firmware. For example, to implement overvoltage protection (OVP), compare the ADC-measured VOUT with the OVP threshold and shut down PFC if VOUT exceeds this threshold. Since most microcontrollers also have integrated analog comparators with a programmable threshold, using the analog comparator for protection can achieve a faster response than firmware-based protection. Using an analog comparator for protection requires programming its digital-to-analog converter (DAC) value. For an analog comparator with a 12-bit DAC and 3.3V reference, Equation 10 calculates the DAC value as:
DAC value = VTHRESHOLD × [R2/(R1+R2)] × (4096/3.3)
where VTHRESHOLD is the protection threshold, and R1 and R2 are the resistors used in Figure 4 from Part 1.
State machine
From power-on to turn-off, the PFC operates in different states under different conditions; the set of states and the transitions between them forms a state machine. The PFC state machine transitions from one state to another in response to external inputs or events. Figure 7 shows a simplified PFC state machine.

Figure 7 Simplified PFC state machine that transitions from one state to another in response to external inputs or events. Source: Texas Instruments
Upon power up, PFC enters an idle state, where it measures VAC and checks if there are any faults. If no faults exist and the VAC RMS value is greater than 90V, the relay closes and the PFC starts up, entering a ramp-up state where the PFC gradually ramps up its VOUT by setting the initial voltage-loop setpoint equal to the measured actual VOUT voltage, then gradually increasing the setpoint. Once VOUT reaches its setpoint, the PFC enters a regulate state and will stay there until an abnormal condition occurs, such as overvoltage, overcurrent or overtemperature. If any of these faults occur, the PFC shuts down and enters a fault state. If the VAC RMS value drops below 85V, triggering VAC brownout protection, the PFC also shuts down and enters an idle state to wait until VAC returns to normal.
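In C, this maps naturally onto an enum and a switch statement. The sketch below follows Figure 7 and the thresholds in the text; the fault handling and helper details are simplified placeholders:

```c
typedef enum { PFC_IDLE, PFC_RAMP_UP, PFC_REGULATE, PFC_FAULT } pfc_state_t;

pfc_state_t pfc_state_step(pfc_state_t st, float vac_rms, float vout,
                           float setpoint, int fault_flags)
{
    if (fault_flags)              /* OV, OC, or OT at any time      */
        return PFC_FAULT;

    switch (st) {
    case PFC_IDLE:                /* relay open, watching VAC       */
        if (vac_rms > 90.0f)      /* close relay, begin soft start  */
            st = PFC_RAMP_UP;
        break;
    case PFC_RAMP_UP:             /* setpoint walked up gradually   */
        if (vout >= setpoint)
            st = PFC_REGULATE;
        break;
    case PFC_REGULATE:
        if (vac_rms < 85.0f)      /* brownout: back to idle         */
            st = PFC_IDLE;
        break;
    case PFC_FAULT:               /* wait for faults to clear       */
        break;
    }
    return st;
}
```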
Interrupts
A PFC has many tasks to perform during normal operation. Some tasks are urgent and need immediate processing, some are less urgent and can be processed later, and some need processing at regular intervals. These different task priorities are handled by interrupts: events detected by the digital controller that preempt the normal program flow, pausing the current program and transferring control to a specified user-written firmware routine called the interrupt service routine (ISR). The ISR processes the interrupt event, then normal program flow resumes.
Firmware structure
Figure 8 shows a typical PFC firmware structure. There are three major parts: the background loop, ISR1, and ISR2.

Figure 8 PFC firmware structure with three major parts: the background loop, ISR1, and ISR2. Source: Texas Instruments
The firmware starts from the function main(). In this function, the controller initializes its peripherals: it configures the ADC, PWM, general-purpose input/output, and universal asynchronous receiver transmitter (UART); sets up protection thresholds; configures interrupts; and initializes global variables. The controller then enters a background loop that runs indefinitely. This background loop contains non-time-critical tasks and tasks that do not need processing regularly.
ISR2 is an interrupt service routine that runs at 10 kHz. The triggering of ISR2 suspends the background loop; the CPU jumps to ISR2 and starts executing its code. Once ISR2 finishes, the CPU returns to where it was upon suspension and resumes normal program flow.
The tasks in ISR2 that are time-critical or processed regularly include:
- Voltage-loop calculations.
- PFC state machine.
- VAC RMS calculations.
- E-metering.
- UART communication.
- Data logging.
ISR1 is an interrupt service routine that runs every PWM cycle: for example, if the PWM frequency is 65 kHz, then ISR1 runs at 65 kHz. ISR1 has a higher priority than ISR2, which means that if ISR1 triggers while the CPU is in ISR2, ISR2 suspends, and the CPU jumps to ISR1 and executes its code. Once ISR1 finishes, the CPU goes back to where it was upon suspension and resumes normal program flow.
The tasks in ISR1 are more critical than those in ISR2 and need to be processed more quickly. These include:
- ADC measurement readings.
- Current reference calculations.
- Current-loop calculations.
- Adaptive dead-time adjustments.
- AC voltage-drop detection.
- Firmware-based system protection.
The current loop is an inner loop of average current-mode control. Because its bandwidth must be higher than that of the voltage loop, put the current loop in faster ISR1, and put the voltage loop in slower ISR2.
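A skeleton of that structure, with empty stubs standing in for the real task implementations (ISR registration is device-specific and omitted), might look like this:

```c
/* Empty stubs stand in for the real task implementations. */
void read_adc(void) {}        void calc_iref(void) {}
void current_loop(void) {}    void fast_protection(void) {}
void voltage_loop(void) {}    void state_machine(void) {}
void vac_rms_calc(void) {}    void service_uart(void) {}
void log_data(void) {}        void init_peripherals(void) {}

void isr1(void)  /* every PWM cycle (e.g., 65 kHz); highest priority */
{
    read_adc();
    calc_iref();
    current_loop();       /* inner loop lives in the fast ISR   */
    fast_protection();
}

void isr2(void)  /* 10 kHz; preempted by isr1 when they collide */
{
    voltage_loop();       /* outer loop lives in the slow ISR   */
    state_machine();
    vac_rms_calc();
}

int main(void)
{
    init_peripherals();   /* ADC, PWM, GPIO, UART, interrupts   */
    for (;;) {            /* background loop: non-time-critical */
        service_uart();
        log_data();
    }
}
```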
AC voltage-drop detection
In a server application, when an AC voltage drop occurs, the PFC controller must detect it rapidly and report the voltage drop to the host. Rapid AC voltage-drop detection becomes more important when using a totem-pole bridgeless PFC.
As shown in Figure 9, assuming a positive AC cycle where Q4 is on, the turn-on of synchronous switch Q1 discharges the bulk capacitor, which means that it is no longer possible to guarantee the holdup time.

Figure 9 The bulk capacitor discharging after the AC voltage drops. Source: Texas Instruments
To rapidly detect an AC voltage drop, you can use a firmware phase-locked loop (PLL) [3] to generate an internal sine-wave signal that is in phase with AC input voltage, as shown in Figure 10. Comparing the measured VAC with this PLL sine wave will determine the AC voltage drop, at which point all switches should turn off.

Figure 10 Rapid AC voltage-drop detection by using a firmware PLL to generate an internal sine-wave signal that is in phase with AC input voltage. Source: Texas Instruments
Design your own digital control
Now that you have learned how to use firmware to implement an average current-mode controller, how to tune the control loop, and how to construct the firmware structure, you should be able to design your own digitally controlled PFC. Digital control can do much more. In the third installment of this article series, I will introduce advanced digital control algorithms to reduce THD and improve the power factor.

Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Related Content
- How to design a digital-controlled PFC, Part 1
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
References
- Sun, Bosheng, and Zhong Ye. “UCD3138 PFC Tuning.” Texas Instruments application report, literature No. SLUA709, March 2014.
- Texas Instruments. "SFRA powerSUITE digital power supply software frequency response analyzer tool for C2000 MCUs." Accessed Dec. 9, 2025.
- Bhardwaj, Manish. "Software Phase Locked Loop Design Using C2000 Microcontrollers for Single Phase Grid Connected Inverter." Texas Instruments application report, literature No. SPRABT3A, July 2017.
Edge AI in a DRAM shortage: Doing more with less

Memory is having a difficult year. As manufacturers prioritize DDR5 and high-bandwidth memory (HBM) for data centers and large-scale AI workloads, availability has tightened and costs have risen sharply: up to 3-4x compared to Q3 2025 levels, and market signals suggest the peak has not yet been reached.
Even hyperscalers—typically first in line for supply—are reportedly receiving only about 70% of their allocated volumes, and analysts expect tight conditions to persist well into 2026 and possibly even 2027.
The strain isn’t evenly distributed, with the steepest price hikes and longest lead times concentrated in higher-capacity modules. Those components sit directly in the path of cloud infrastructure demand, and their pricing reflects it. On the other hand, lower-capacity modules (1-2 GB) have remained accessible and far more stable.
This trend is now influencing how teams think about system design. AI workloads built around large memory footprints now run into procurement challenges; systems engineered to operate within modest memory baselines avoid both the price spikes and the uncertainty. The outcome matters: in a shortage, architectures built for efficiency give teams more strategic freedom than architectures built for abundance.
The most effective solution: DRAM-less AI accelerator
In a constrained memory market, the most robust solution is also the simplest: remove the dependency on external DRAM entirely. Take the case of Hailo-8 and Hailo-8L AI accelerators. By keeping the full inference pipeline on-chip, Hailo-8/8L eliminate the most expensive and supply-constrained component in the system.
In practical terms, avoiding DRAM can reduce the bill of materials by up to $100 per device, while also improving power efficiency, latency, and system reliability. Not every AI application can avoid DRAM, however.
Generative AI workloads inherently require more memory, and systems that run them will continue to rely on external DRAM. But even in this case, memory constraints strongly favor moving inference closer to the edge.
Running generative AI on the edge allows teams to work with smaller, domain-specific models rather than large, general-purpose ones designed for the cloud. Smaller models translate directly into smaller DRAM requirements, reducing cost, easing procurement, and improving power efficiency. This is where edge-focused accelerators come into play, enabling efficient generative AI inference while keeping memory footprints as lean as possible.
Privacy and latency have long shaped the case for running intelligence on the device. In 2025, another factor cemented it: the expectation that generative AI simply be there. Users now rely on transcription, summarization, audio cleanup, translation, and basic reasoning, often with no tolerance for startup delays or network dependency.
Recent cloud outages at AWS, Azure, and Cloudflare underscored how fragile cloud-only assumptions can be. When those networks faced disruptions, everyday features across consumer apps and enterprise workflows failed. Even brief interruptions highlighted how a single infrastructure dependency can take down tools that users now rely on dozens of times a day.
As AI moves deeper into everyday workflows and users expect agentic AI capabilities to be available instantly, a hybrid approach proves far more resilient. Keep frequently used intelligence local, either on the device or in a nearby gateway, while using the cloud for heavier or less frequent tasks. And crucially, when models are small enough to operate within 1-2 GB of memory, that hybrid approach becomes far easier to implement using memory configurations that are still readily sourced.
Small models change the equation
Until recently, generative AI required the memory and compute scale of the cloud. A new class of small language models (SLMs) and compact vision language models (VLMs) now deliver strong instruction following, reliable tool use, and competitive benchmark performance at a fraction of the parameters.
Releases like IBM’s Granite 4.0 Nano line demonstrate how far efficient architectures have come. These models show that some generative AI tasks and applications no longer need massive, expensive system memory—they need well-defined domains, optimized inference paths, and efficient pre- and post-processing.
For hardware teams, this evolution has many practical benefits. Smaller models reduce the “memory tax” that has been baked into AI design for years. When an entire intelligence pipeline can operate in 1-2 GB of DRAM, several constraints loosen simultaneously:
- Costs fall as systems avoid the inflated pricing of high-capacity DRAM.
- Supply-chain risk drops as lower-capacity memory chips remain easier to procure.
- Power consumption improves because smaller models with hardware-assisted offload (NPU or AI accelerator) run cooler and more efficiently.
- System reliability increases as local inference keeps essential features online even during network outages.
An AI architecture designed for efficiency rather than abundance fits squarely within the ethos of edge computing. Many high-value agentic AI tasks—summarizing a conversation, describing an image, or translating speech—do not require massive models. In narrow domains, compact models can deliver faster, more private, and more consistent results because they operate with fewer unknowns.
The path forward
If the DRAM shortage proves anything, it’s that the most resilient AI systems are the ones designed around constraints, not excess. Teams are re-evaluating assumptions about model size, memory baselines, and what “good enough” looks like for common tasks. They’re recognizing that domain-specific intelligence often performs better than brute-force scale—especially in environments that demand consistency, privacy, and low power draw.
Edge AI fits naturally within this moment. Its memory profile lines up with the DRAM capacities that remain accessible, and its deployment model brings stability to the tasks users rely on most. As supply tightness continues, organizations that invest in leaner model design and hybrid deployment strategies will be better positioned to deliver stable, responsive AI without absorbing high memory costs.
Avi Baum is chief technology officer (CTO) and co-founder of Hailo.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why power delivery is becoming the limiting factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
- Designing energy-efficient AI chips: Why power must be an early design consideration
Self-oscillating sawtooth generator spans 5 decades of frequencies

There are many ways of generating analog sawtooth waveforms with oscillating circuits. Here's a method that employs a single supply voltage rail to produce a buffered signal whose frequency can be varied over a range from 10 Hz to 1 MHz (Figure 1).

Figure 1 The sawtooth output waveform is the signal "saw" available at the output of op amp U1a. Its frequency is set by the value of resistor R6, which can vary from 120 Ω to 12 MΩ.
Wow the engineering world with your unique design: Design Ideas Submission Guide
U3, powered through R5, uses Q2 and R6 to create a constant current source. U3 enforces a constant voltage, Vref = 1.2 V, between its V+ and FB pins. Q2 is a high-beta NPN transistor that passes virtually all of R6's current, Vref/R6, through its collector to charge C3 with a constant current, producing the linear ramp portion of this ground-referenced sawtooth.
Op amp U1a buffers this signal and applies it to an input of comparator U2a. The voltage on the comparator's other input causes its output to transition low when the sawtooth rises to 1 V. U2a, R1, Q1, R8, C1, and U2b produce a 100-ns one-shot signal at the output of U2b, which drives the gate of M1 high to rapidly discharge C3 to ground.
The frequency of the waveform is 1.2 / (R6 × C3) Hz. With U3 available in Vref tolerances as low as 0.2% and a 0.1% tolerance for R6, the circuit's overall accuracy is generally limited by an at-best 1% C3 combined with the parasitic capacitances of M1.
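That expression follows from the ramp mechanics, and the figure captions below are consistent with a C3 of 10 nF (an inferred value; the schematic's actual C3 is not stated here). The ramp charges at (Vref/R6)/C3 volts per second and resets at the 1-V comparator threshold, so:

f = (Vref/R6) / (C3 × 1 V) = 1.2 / (R6 × C3) Hz

With C3 = 10 nF, R6 = 12 MΩ gives f = 1.2 / (12 MΩ × 10 nF) = 10 Hz, and R6 = 120 Ω gives f = 1.2 / (120 Ω × 10 nF) = 1 MHz, matching the five-decade span of Figures 2 through 7.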
Waveforms at several different frequencies are seen in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, and Figure 7.
Figure 2 10 Hz sawtooth for an R6 of 12 MΩ.

Figure 3 100 Hz sawtooth for an R6 of 1.2 MΩ.

Figure 4 1 kHz sawtooth for an R6 of 120 kΩ.

Figure 5 10 kHz sawtooth for an R6 of 12 kΩ.

Figure 6 100 kHz sawtooth for an R6 of 1.2 kΩ.

Figure 7 1 MHz sawtooth for an R6 of 120 Ω.
Figures 3 and 4 show near-ideal sawtooth waveforms. But Figure 2, with its 12-MΩ R6, shows that even when "off," M1 has a finite drain-source resistance that contributes to the non-linearity of the ramp. It's also worth noting that although U3's FB pin typically pulls less than 100 nA, that is comparable to the 100 nA (1.2 V / 12 MΩ) that R6 sources, so waveform frequency accuracy for this value of resistor is problematic.
Figures 5, 6, and 7 show progressive increases in the effects of the 100-ns discharge time for C3 and of the finite recovery time of the op amp when its output saturates near the ground rail.
These circuits do not require any matched-value components. Accuracy is improved by using precision versions of R4, R6, R7, and U3, but the circuit's operation does not require them.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Simple sawtooth generator operates at high frequency
- Adjustable triangle/sawtooth wave generator using 555 timer
- DAC (PWM) Controlled Triangle/Sawtooth Generator
- Another PWM controls a switching voltage regulator
The post Self-oscillating sawtooth generator spans 5 decades of frequencies appeared first on EDN.
Full circle current loops: 4mA-20mA to 0mA-20mA

A topic that has recently drawn a lot of interest (!) and no fewer than four separate design articles (!!) here in Design Ideas is the conversion of 0 to 20 mA current sources into industrial standard 4 mA to 20 mA current loop signals. Here's the list—so far—in reverse chronological order. Apologies if (as is quite possible) I've missed one—or N.
- Another silly simple precision 0/20mA to 4/20mA converter
- Silly simple precision 0/20mA to 4/20mA converter
- Combine two TL431 regulators to make versatile current mirror
- A 0-20mA source current to 4-20mA loop current converter
With so much energy already devoted to that one side of this well-tossed coin, it seemed only fair to pay a little attention to the flip side of the conversion function. Figure 1 shows the result. Its (fairly) simple circuit performs a precision conversion from 4-20 mA to 0-20 mA. Here's how it works.
Figure 1 The flip side of the current conversion coin: Iout = (Iin × R1 – 1.24 V)/R2 = 1.25 × (Iin – 4 mA).
Wow the engineering world with your unique design: Design Ideas Submission Guide
The core of the circuit is the voltage Vin = Iin × R1 = 1.24 V to 6.20 V developed by the 4-20 mA input working into R1 and sensed by the Vref input of Z1. The principle in play is discussed in Figure 1 of "Precision programmable current sink."
The resulting Z1 cathode current is (Iin × R1 – Vref)/R2 = 0 to 20 mA as Iin increases from 4 mA to 20 mA. Or it would be, if not for the modulation of Vref by Z1's cathode voltage. The D1, Q2 cascode pair greatly attenuates this effect by holding Z1's cathode voltage near zero and constant. It also extends Z1's cathode voltage limit from an inadequate 7 V to the 30-V capability of Q2. Of course, a different choice for Q2 could extend it further. But if 30 V will do, the >1000 typical beta of the 5089 is good for accuracy.
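For reference, here is the arithmetic implied by Figure 1's transfer function, assuming the TLV431's nominal 1.24-V reference (a back-of-envelope check; the schematic's actual values may differ slightly):

R1 = 1.24 V / 4 mA = 310 Ω (so Iout = 0 at Iin = 4 mA)
R2 = R1 / 1.25 = 248 Ω (setting the 1.25 gain)

Checking at full scale: Iout = (20 mA × 310 Ω – 1.24 V) / 248 Ω = (6.20 V – 1.24 V) / 248 Ω = 20 mA.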
Current booster Q1 extends Z1’s 15 mA max current limit while also reducing thermal effects. The net result holds Z1’s maximum power dissipation to single-digit milliwatts.
With 0.1% precision R1 and R2 and the ±0.5% tolerance TLV431B, better than 1% accuracy can be achieved with the untrimmed Figure 1 circuit. If this level of precision is still inadequate, manual post-assembly trim can be added with just two extra parts, as shown in Figure 2. Calibration is achieved with one pass.
- Set input current to 4.00 mA
- Adjust R4 for an output current of ~50 µA. Note this is only 0.25% of full scale, so don't worry about hitting it exactly. You probably won't.
- Set input current to 20 mA
- Adjust R5 for an output current of 20 mA

Figure 2 R4 and R5 trims allow post-assembly precision optimization.
Maximum input overhead voltage is 8 V; output overhead is 9 V. The worst-case (resistor-limited) fault current with a 24-V supply is 80 mA.
Readers may notice a capacitor labeled “Ca” in Figures 1 and 2. This is the “Ashu capacitance” that Design Idea (DI) contributor and current source circuitry expert Ashutosh Sapre discovered to be essential for frequency stability of the cascode topology. Thanks, Ashu!
And a closing note. Since the output scale factor is set by, and inversely proportional to, R2, any full scale other than 20 mA is easily achieved by an appropriate choice of R2; doubling R2, for instance, halves the full scale to 10 mA.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Another silly simple precision 0/20mA to 4/20mA converter
- Silly simple precision 0/20mA to 4/20mA converter
- Combine two TL431 regulators to make versatile current mirror
- A 0-20mA source current to 4-20mA loop current converter