LM555 begets basic bang bang thermostat

“If your favorite tool is a hammer, every problem will look like a nail.” (Abraham Maslow)
Given how often I tinker with the LM555 and LMC555 analog timers, Maslow might have written that famous aphorism specifically for me, and for it. Well, here I go again. Bang bang.
Figure 1’s circuit morphs the versatile 555 into something quite different from its usual role as an analog oscillator or timer. Here it’s combined with an NTC (negative-tempco) thermistor and one (or optionally two) resistors to make a resistor-programmed ON/OFF thermostat that’s easily configured for heating or cooling.
Here’s how it works.
Figure 1 Basic bang bang heating configuration. Setpoint thermistor resistance = Rb/2. Optional Rh sets desired temperature hysteresis. Output rated at up to 15 volts and 300 mA = 4.5 W
Wow the engineering world with your unique design: Design Ideas Submission Guide
One of the secret (or at least scantily documented) features of the 555 is what happens if you tie Threshold (pin 6) to Vdd as shown in Figure 1: Trigger (pin 2) then becomes an inverting analog comparator input that drives Output (pin 3) and Discharge (pin 7) high if Trigger < Vdd/3, and low if Trigger > Vdd/3.
When you combine that action with an NTC thermistor and bias resistor Rb as shown, presto! You get a simple but practical thermostat. It turns power (and a substantial amount of it: up to 15 V and 300 mA) ON to the load (e.g., a resistive heater) if the thermistor’s temperature is cooler than the setpoint (thermistor resistance > Rb/2). Power goes OFF when the temperature is warmer (thermistor < Rb/2).
But wait, there’s more. Because accurate thermostatic action depends only on resistor ratios rather than absolute voltages, V+ needn’t be regulated. In fact, if the load isn’t bothered by ripple (e.g., a resistor heater certainly won’t care), it doesn’t even need to be filtered!
Furthermore, if you swap the positions of the thermistor and resistor as shown in Figure 2, and connect a cooling fan (or perhaps a thermoelectric cooler), the temperature regulation inverts. It will now maintain a constant maximum instead of a minimum temperature. If the output load is inductive (e.g., a fan motor), don’t worry about possible inductive transients. The LM555 output pin includes its own kickback protection.

Figure 2 Cooling configuration: Setpoint thermistor resistance = 2Rb.
If hysteresis (dT) is required, then for typical NTC tempcos (~4%/°C) an easy (if approximate) rule of thumb is Rh = 680 kΩ/dT, with dT in °C.
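To make the component selection concrete, here is a small Python sketch (mine, not part of the original Design Idea) that picks Rb and Rh from a desired setpoint and hysteresis. It uses the setpoint relations given in the figure captions (thermistor resistance = Rb/2 for heating, 2Rb for cooling) and the 680-kΩ rule of thumb above; the beta-equation model and the example thermistor (R25 = 10 kΩ, B = 3950 K) are assumptions for illustration.

```python
import math

def ntc_resistance(t_celsius, r25=10e3, beta=3950.0):
    """Beta-equation model of an NTC thermistor (assumed example part:
    R25 = 10 kohm, B = 3950 K)."""
    t_k = t_celsius + 273.15
    return r25 * math.exp(beta * (1.0 / t_k - 1.0 / 298.15))

def thermostat_resistors(setpoint_c, hysteresis_c=None, mode="heating"):
    """Pick Rb and Rh for the Figure 1 / Figure 2 thermostat.

    Heating (Figure 1): setpoint thermistor resistance = Rb/2  ->  Rb = 2 * Rt
    Cooling (Figure 2): setpoint thermistor resistance = 2*Rb  ->  Rb = Rt / 2
    Hysteresis rule of thumb from the text: Rh ~= 680 kohm / dT (dT in degC).
    """
    rt = ntc_resistance(setpoint_c)
    rb = 2.0 * rt if mode == "heating" else rt / 2.0
    rh = 680e3 / hysteresis_c if hysteresis_c else None
    return rb, rh

# Example: 50 degC heating setpoint with ~1 degC hysteresis, as in Figure 3
rb, rh = thermostat_resistors(50.0, hysteresis_c=1.0, mode="heating")
print(f"Rb ~= {rb / 1e3:.1f} kohm, Rh ~= {rh / 1e3:.0f} kohm")
```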

Figure 3 Typical configurations for a 50°C setpoint. Heating (left), cooling (right) with ~1°C hysteresis.
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- Add one resistor to give bipolar LM555 oscillator a 50:50 duty cycle
- Improve 555 frequency linearity
- 555 Temperature Controller Circuit
- Car Thermostat Circuit
The post LM555 begets basic bang bang thermostat appeared first on EDN.
The system architect’s sketchbook: The football has main character energy


Deepak Shankar, founder of Mirabilis Design and developer of VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.
The post The system architect’s sketchbook: The football has main character energy appeared first on EDN.
What is the EDA problem worth solving with AI?

AI has become EDA’s favorite buzzword, but behind the keynotes and product names the reality is far messier. Cadence, Synopsys, and Siemens EDA are racing to brand incremental heuristics as “platform AI,” while agentic startups promise copilots that mostly smooth over the pain of using legacy tools.
At the same time, giants in the chip design industry—the users of EDA—like Samsung and Nvidia are quietly assembling their own internal AI stacks, universities are sidelined from real industrial data, and foundation model labs like OpenAI and DeepMind are treated as providers of sophisticated pattern-matching systems rather than as creators of true intelligence.
This article argues that all four camps are, in different ways, missing the real opportunity: using AI to change what kinds of hardware–software systems we can verify at all, rather than just speeding up what we already do.
It traces how business incentives, closed ecosystems, and data hoarding are holding the field back—and outlines what a genuinely transformative, open, and collaborative AI-for-chips ecosystem would need to look like.
The current AI content in EDA
For the first time in decades, chip design feels like it’s on the verge of a genuine reset. AI isn’t just a new knob on a timing engine or another heuristic in the regression farm; it’s a chance to rethink how we understand, verify, and evolve insanely complex hardware–software systems.
The question is no longer whether AI will touch chip development, but how deep it will go—and whether we’ll use it merely to polish old workflows or to expand what’s possible to design and prove correct at all.
But are we currently progressing in a direction worthy of the problem worth solving? It’s not difficult to imagine what such a future would look like: specialized LLMs, APIs to connect EDA tools, serious research, and exchange of representative user data to optimize flows.
But is the industry currently set up this way?
The big 3 vendor perspective(s)
The EDA industry is loudly declaring that AI has arrived. Cadence, Siemens EDA, and Synopsys (the big three) all showcase “AI-driven” platforms, “agentic” workflows, and “generative” capabilities in their keynotes. Agentic startups promise AI copilots for chip design.
Samsung, Nvidia, and other mega-customers are quietly building their own internal AI stacks. And in the background, universities and foundation model labs like OpenAI and DeepMind are doing their own thing, mostly disconnected from this industrial theater.
Look past the branding and you see something much less coherent: four camps, each optimizing for its own incentives, and none addressing the hardest verification and design problems in a serious, integrated way.
The first camp is the big three. One has a narrative that is aggressively polished: AI as a unifying fabric across architecture, implementation, verification, and signoff. On paper, it’s exactly the right idea. In practice, most of what’s publicly visible is a scattering of ML and LLM features bolted onto existing products, wrapped in a platform story that is much stronger in marketing than in reproducible methodology.
There are claims about AI-guided coverage closure and scenario generation, but far fewer detailed case studies that a skeptical verification lead could take apart and rely on. Technically, the narrative shows the company is doing useful work; strategically, it’s primarily about defending revenue and establishing itself as the “AI platform” customers must buy into.
A second narrative takes a different tone: more pragmatic, less breathless. Their AI pitch is 10–30% improvements in regression time, PPA closure, and debug efficiency. They emphasize that ML is built into the solvers and optimizers rather than exposed as a gimmicky chatbot layer.
For organizations taping out serious silicon, this is credible and attractive: keep existing flows and get incremental wins. But that’s also the problem. It’s AI as advanced heuristics, not AI as a rethinking of verification for trillion-cycle, software-heavy, multi-die systems. The message is “do the same thing, just a bit faster,” which is business-rational and intellectually timid.
And the third narrative, for its part, grounds its AI story in hardware-assisted verification and DFT. They are at least honest about where the real pain is: emulation farms straining under 40‑billion‑gate chiplet designs; massive software stacks; and DFT and power analysis workflows that choke traditional environments. Their use of AI is mostly about better resource utilization, faster compiles, accelerated DFT workloads on emulators, and automated generation of reports and transactors.
This is important, and some of it is genuinely innovative on the infrastructure side. However, it mostly skirts the core question of correctness. There is very little about AI for deep semantic understanding of designs, for test synthesis, for inferring invariants, or for blending learning with formal reasoning at scale. This narrative focuses on shoveling the verification mountain more efficiently, not on changing the shape of the mountain.
Across all three incumbents, the pattern is consistent. They are not leading on foundational AI for verification. They are inserting ML/LLM features into their products in ways that strengthen their moats and justify platform lock-in. Their AI is largely proprietary, closed, and bound to a single vendor ecosystem. It’s technically competent and strategically defensive.
AI startups—new “Tabula Rasa” approaches
The second camp—agentic AI vendors like ChipAgents, Moore’s Lab, and Bronco AI—looks more disruptive at a glance. They don’t try to build the solvers; instead, they target the workflow of the engineer. These systems ingest RTL, testbenches, logs, coverage reports, specifications, bug trackers, and wikis.
They use large language models plus tool APIs to answer questions like “Why did this regression fail?” or “What should I do next?” They can orchestrate multi-step flows: launch regressions, analyze results, file tickets, update documentation, and propose follow-up tests.
This is a genuine improvement over the current state of affairs where engineers burn countless hours on log archaeology and context switching between silos of information. But being critical, agentic AI today is far better at smoothing human pain points than at addressing the core technical difficulty of verification. These systems sit on top of the incumbents’ tools and rely on whatever APIs those tools expose.
If those APIs are thin, unstable, or intentionally limiting, the “agent” degrades into a clever log parser. And because current LLMs are still brittle on precise semantics, concurrency, and strict correctness, most agentic systems are pattern matchers and orchestrators, not genuine reasoning engines about hardware behaviour. They can triage, guide, and accelerate, but they rarely change what you can prove about a design.
The giant users
The third camp consists of the giant end users like Samsung and Nvidia, who look at all of this and decide to build their own AI ecosystems. They have reasons the vendors can only envy: vast proprietary design portfolios, massive software workloads, custom verification flows, and decades of institutional memory about failures and workarounds. They do something closer to what should have existed from the beginning.
They build internal copilots and agents that understand their architectures, coding styles, constraints, safety regimes, and business priorities. They integrate across the big three vendors’ tools, and a forest of in-house tools. They treat the vendors’ products as engines behind the scenes and construct a domain-specific AI layer on top.
From their point of view, this is the only rational approach. For the ecosystem, it has a downside. Each large customer ends up recreating similar internal stacks in private: similar integrations, similar prompt engineering, similar hacks to get around tool limitations. None of this is published or generalized. The most advanced “AI for chips” work is happening inside the firewalls of a few giants, and the lessons do not propagate.
It is effective and myopic at the same time.
The academic perspective
Meanwhile, the fourth camp, university research, occupies an awkward and increasingly marginal position. Historically, academia has been where the big conceptual leaps in verification and synthesis occurred: SAT/SMT-based reasoning, CEGAR, IC3/PDR, and many other ideas that quietly underpin modern tools.
Today, universities explore promising combinations of learning and formal reasoning, program synthesis, and new abstractions for system behavior. But they generally lack access to full-scale industrial designs, closed commercial tools, and realistic data. Tool vendors are hesitant to open their ecosystems; customers are understandably cautious about sharing real designs. Funding pressures drive many projects toward small, benchmark-driven demonstrations rather than risky, large-scale collaborations.
The result is that some of the most interesting ideas—how to fuse symbolic reasoning with learned models, how to automatically infer specifications, and how to reason about software and hardware jointly—are explored on toy problems with no clear path into mainstream flows.
The industry, for its part, is busy shipping incremental ML wrappers, and hardly anyone is building serious bridges between the two worlds. It’s not that universities lack relevance; it is that the industry has structured itself such that the most radical research is almost guaranteed to remain peripheral.
The model foundations
Overlaying all of this are the foundation model labs: OpenAI, Anthropic, Google DeepMind, Meta, and others. These organizations are building the most capable general reasoning systems currently available, and they are rapidly evolving techniques for program synthesis, tool use, and formal-ish reasoning in natural language environments. Yet, in the EDA world, they are mostly treated as commodity model providers: grab GPT or Claude, fine-tune a narrow layer, wire up a chat interface to data logs, and call it an AI feature.
What is largely missing is serious, domain-driven co-design: injecting the structure of hardware, formal semantics, type systems, property languages, and symbolic engines into the models themselves, and conversely exposing the models’ strengths back into the verification stack.
Foundation models will never be optimal for RTL and concurrency out of the box, but the EDA incumbents have done very little to create the conditions under which such specialization could happen in a principled way. If and when one of the big model labs decides that “programs that compile to silicon” is a strategic domain, the current generation of vendor platforms will likely look quaint.
Outlook: Is the industry solving the right problem(s) and what’s the problem worth solving?
Taken together, these four camps are all underperforming relative to what is technically possible. The big three are shipping incremental heuristics and calling them platforms. Agentic vendors are improving workflows but are constrained to shallow semantics.
Samsung, Nvidia, and their peers are building powerful but private stacks that do not lift the state of the art for anyone else. Universities are generating genuinely new ideas without real channels for impact. Foundation model labs are shaping the AI substrate, but the interface with hardware design is thin and unimaginative.
The future that would move the needle is not mysterious. It would involve foundation models explicitly specialized and constrained by rich formal and domain structures; EDA tools exposing deep, stable APIs so that both research systems and agentic orchestrators can drive real flows; serious industrial–academic collaborations around real designs, software workloads, and verification obligations; and end users like Samsung and Nvidia contributing abstractions, interfaces, and benchmark problems instead of quietly hoarding bespoke solutions.
Instead, the industry is drifting toward a patchwork of proprietary “AI experiences” bound to each vendor, plus a small number of sophisticated but opaque internal efforts at a handful of giants. The risk is that we declare victory far too early—that “AI in EDA” hardens into a set of shallow, walled-garden add-ons while the central challenge of scalable correctness for software-heavy, multi-die systems remains largely unsolved.
The real question is not who can generate the flashiest AI marketing or the neatest chatbot demo inside an integrated design environment (IDE). It’s who is willing to open enough of their stack, share enough structure and data, and collaborate deeply enough that AI can change what we are capable of verifying at all, not just shave a few percent off the run time of regressions we already know how to run. Right now, no one in this ecosystem can honestly claim that mantle.
“A new hope”
Despite the current mess of walled gardens, shallow copilots, and private AI stacks, the ingredients for something far better are finally on the table. We have foundation models that can reason over code, decades of formal methods waiting to be supercharged rather than sidelined, and a new generation of engineers who are comfortable treating tools as collaborators, not black boxes.
If vendors open real APIs, if giants like Samsung and Nvidia share abstractions instead of just artifacts, and if universities and model labs are invited into serious, data-rich collaborations, AI can do more than accelerate today’s flows—it can change what we dare to design.
The hopeful view is simple: the next great leap in chips won’t come from any one camp winning the landgrab, but from all of them finally deciding that solving the hard problems together is more valuable than owning the buzzword alone.
Will we get there? Only time will tell.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, which became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for Processor Modeling & Simulation, he left Synopsys and is now an AI + EDA researcher at Southampton University, UK.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
- Next Gen AI EDA Startups Have Potential to Disrupt Design Automation
The post What is the EDA problem worth solving with AI? appeared first on EDN.
APEC 2026 showcases advances in power electronics

The annual Applied Power Electronics Conference & Exposition (APEC 2026) showcases hundreds of companies that exhibit their latest component and technology advances for system power designers across a wide range of industries. Many of these devices deliver on growing requirements for higher efficiency and higher power density, along with simplifying design to reduce complexity and accelerate time to market.
Power device manufacturers claim major technology advances, including topologies and packaging, for applications ranging from AI data centers and humanoid robotics to fast-charging mobile devices. Wide-bandgap (WBG) semiconductors, including gallium nitride (GaN) and silicon carbide (SiC) power devices, remain a big area of development, addressing the need for simpler designs and more flexibility.
Here is a selection of power devices featured at APEC 2026 that target improvements in efficiency and power density, along with simplifying design and saving board space. These are used in a wide range of applications, including AI data centers, appliances, automotive, e-mobility, industrial automation, and robotics.
Breakthroughs and advances
Offering an alternative to resonant power designs, Power Integrations (PI) announced a topology that it calls a breakthrough for flyback power supply design by extending the power range of flyback converters to 440 W. The TOPSwitchGaN flyback IC family combines the company’s PowiGaN technology with its TOPSwitch IC architecture, reducing complexity and improving manufacturability. It can also eliminate heat sinks in many cases, according to PI, and shorten design time and lower total system cost.
TOPSwitchGaN ICs feature 92% efficiency across the load range—from 10% to 100% load—and exceed European Energy-related Products (ErP) regulations at less than 50-mW power consumption for standby and off modes, all without the need for synchronous rectification, PI said. They are suited for high-end appliances, e-bike chargers, and industrial applications.
PowiGaN switches deliver a much lower on-state resistance (RDS(on)) than silicon, which reduces conduction losses, dramatically increasing the power capability of flyback converters, PI said. Thanks to the integration of the 800-V PowiGaN switches, the devices can operate at switching frequencies of up to 150 kHz to minimize transformer size. Other specs include no-load consumption below 50 mW at 230 VAC, including line sense, and up to 210 mW of output power for 300-mW input at 230 VAC to run housekeeping functions when units are in standby mode.
For ultra-slim designs, TOPSwitchGaN ICs are available in low-profile eSOP-12 surface-mount packaging that enables 135 W (85–265 VAC) without a heat sink for applications such as appliances. These devices are also available in an eSIP-7 package, whose vertical orientation minimizes the printed-circuit-board (PCB) footprint; it has a thermal impedance equivalent to a TO-220-packaged part. Mounting a metal heat sink extends the power range for applications including power tools, e-bikes, and garage openers.
Reference designs include the DER-1079 (60-W, wide-range isolated flyback power supply unit [PSU] for appliances), the DER-1019 (356-W [89-V/4-A] high-line isolated flyback industrial PSU), and the RDR-1018 e-bike charger kit (168-W wide-range isolated flyback design).
Power Integrations’ TOPSwitchGaN flyback ICs (Source: Power Integrations)
pSemi, a Murata company, also claimed groundbreaking power products, targeting high-energy-density applications. At APEC 2026, pSemi unveiled the PE26100 multilevel buck converter for fast-charging mobile devices and the PE25304 advanced integrated charge pump switching-capacitor power module to enable high-efficiency power conversion in humanoid robotic, dexterous-hand power applications.
The PE26100 represents an expanded application focus for pSemi’s high‑performance multilevel buck converter, which is now optimized for main, direct battery charging in next‑generation smartphones, tablets, and other compact mobile devices. It delivers fast‑charging capability, high output current of up to 6 A, and high thermal performance in an ultra‑thin form factor for space‑constrained consumer electronics.
pSemi said the architecture and performance characteristics make it uniquely suited for today’s transition toward high‑power USB Power Delivery (USB‑PD) and programmable power supply (PPS) fast‑charging ecosystems. Supporting 4.5-V to 18.5-V input, the device enables four‑level buck mode for higher USB‑PD voltage inputs and three‑level buck mode for mid‑ to low input voltages. For USB PPS applications, the PE26100 can also operate as a fixed‑ratio, capacitor‑divider charge pump, offering divider ratios of 2:1 and 3:1 depending on programmed input voltage.
The PE25304 is an advanced integrated charge pump switching‑capacitor power module for high efficiency and performance in space‑constrained, high‑power applications. Designed to divide input voltage by four, the PE25304 is purpose‑built for 48-V input architectures, with a wide operating range from 20 V to 60 V, making it suited for dexterous-hand robotics and mechatronic systems. It can also be used in drones, medical devices, embedded AI modules, and industrial automation systems.
The module is housed in an ultra-low-profile package (2 mm) and can deliver up to 72 W of output power. It also features a 97% conversion efficiency, reducing power loss and thermal buildup.
Texas Instruments (TI) unveiled several isolated power modules for applications from data centers to electric vehicles that require improvements in power density, efficiency, and safety. In particular, the UCC34141-Q1 and UCC33420 isolated power modules leverage TI’s IsoShield technology. This is a proprietary multichip packaging solution that delivers up to 3× higher power density than discrete solutions in isolated power designs and shrinks solution size by as much as 70% by packing more power into smaller spaces while reducing area, cost, and weight.
Traditionally, power designers use power modules to save board space and simplify design. Advancements in packaging technology such as the IsoShield enable higher performance and efficiency gains. The IsoShield copackages a high-performance planar transformer and an isolated power stage, offering functional, basic, and reinforced isolation capabilities.
It enables a distributed power architecture, helping manufacturers meet functional safety requirements by avoiding single-point failures, TI said. In addition to shrinking the solution size, it delivers up to 2 W of power for automotive, industrial, and data center applications that require reinforced isolation. For example, the increased power density helps deliver lighter and more efficient EVs that extend range and improve performance.
TI also announced other advancements in data centers, automotive, humanoid robots, sustainable energy, and USB Type-C applications, including an 800-V to 6-V DC/DC power distribution board. Pre-production and production quantities of the isolated power modules, along with evaluation modules, reference designs, and simulation models, are available now on TI.com.
TI’s UCC34141-Q1 and UCC33420 isolated power modules (Source: Texas Instruments Inc.)
MaxLinear Inc. unveiled its modular intelligent power management solution for next-generation broadband system-on-chip (SoC) designs. The platform includes the MxL7080 power management controller, MxL76500 smart regulating stage (SRS) modules, and high-efficiency MxL76125 22-V/15-A synchronous buck regulator. It delivers a thermally optimized power architecture for high‑bandwidth, multi-service access platforms, including cable, fiber, and fixed wireless access gateways; Ethernet routers; and customer premise equipment.
The platform addresses the need for scalable, multi-rail power management architectures capable of supporting higher power density, tighter voltage tolerances, and improved thermal performance as SoC designs get more complex.
The MxL7080 power management controller, paired with four MxL76500 SRS modules, provides a reference‑based, multiphase power architecture for high‑performance SoCs. This architecture provides improved thermal distribution to reduce localized hotspots, a simplified layout and routing flexibility, and precise multi‑rail sequencing with dynamic voltage scaling support.
The MxL76125 buck regulator, housed in a 4 × 5-mm QFN package, enhances point‑of‑load (PoL) flexibility for complex broadband and access platforms. It offers a wide 5-V to 22-V input voltage range supporting 5-V, 12-V, and 20-V system rails and high efficiency up to 96%, with light‑load PFM mode to reduce idle power. Other features include a fast transient response using COT‑based control with ceramic output capacitors and integrated protection including OCP, OVP, OTP, UVLO, and short‑circuit protection.
The complete (MxL7080 + MxL76500 + MxL76125) power solution is optimized for multi-access gateway platforms. These devices are available now in RoHS-compliant, green/halogen-free, industry-standard packages. Evaluation boards and samples are available at the MxL7080, MxL76500, and MxL76125 product pages.
MaxLinear’s intelligent power management solution (Source: MaxLinear Inc.)
SiC and GaN power solutions
Microchip Technology Inc. has launched its BZPACK mSiC power modules, offering high flexibility with a range of topologies, which include half-bridge, full-bridge, three-phase, and PIM/CIB configurations. This flexibility allows power designers to optimize performance, cost, and system architecture.
Targeting demanding power-conversion environments, the BZPACK mSiC power modules exceed high voltage-high humidity-high temperature reverse bias (HV‑H3TRB) testing, surpassing the industry standard of 1,000 hours, making them suited for industrial and renewable energy applications. The modules provide a case with a Comparative Tracking Index (CTI) of 600 V, stable RDS(on) across temperature ranges, and substrate options in aluminum oxide or aluminum nitride.
The BZPACK power modules are also designed to reduce system complexity and enable faster assembly by offering a baseplate-less design with press-fit, solderless terminals and an optional pre-applied thermal interface material.
The power modules leverage Microchip’s advanced mSiC technology and performance of its MB and MC mSiC MOSFET families for industrial and automotive applications, with AEC-Q101-qualified options available. These devices support common gate-source voltages (VGS ≥ 15 V) and are available in industry-standard packages.
The MC family integrates a gate resistor, which offers benefits in improved switching control, low switching energy, and improved stability in multi-die module configurations. Package options include TO-247-4 Notch and die form (waffle pack).
Microchip offers a range of SiC diodes, MOSFETs, and gate drivers. The BZPACK mSiC power modules are available in production quantities.
Microchip’s BZPACK mSiC modules (Source: Microchip Technology Inc.)
SemiQ Inc. launched its QSiC Dual3 family of 1,200-V half-bridge MOSFET modules for motor drives in data center cooling systems, grid converters in energy storage systems, and industrial drives. These are designed to replace IGBT modules with minimal redesign, with all MOSFET die screened using wafer-level gate-oxide burn-in tests exceeding 1,450 V.
Enabling power converters with high conversion efficiency and power density, the series of six devices includes an optional parallel Schottky barrier diode (SBD) to further reduce switching losses in high-temperature environments. Two of the family’s six devices have an RDS(on) of 1 mΩ and a power density of 240 W/in.³ in a 62 × 152-mm package. The modules also feature a low junction-to-case thermal resistance and enable a simplified system design with smaller, lighter heat sinks.
The devices include the GCMX1P0B120S4B1, GCMX1P4B120S4B1, GCMX2P0B120S4B1, GCMS1P0B120S4B1 (SBD), GCMS1P4B120S4B1 (SBD), and GCMS2P0B120S4B1 (SBD). Datasheets for the QSiC Dual3 modules can be downloaded here.
SemiQ’s QSiC Dual3 modules (Source: SemiQ Inc.)
In the GaN space, Efficient Power Conversion (EPC) introduced the EPC91121 motor drive inverter evaluation board, built around its Gen 7 EPC2366 40-V eGaN power transistor. The board is designed for fast prototyping and evaluation, integrating the key functions required for a motor drive inverter, including gate drivers, housekeeping power supplies, voltage and temperature monitoring, and current sensing.
The 40-V EPC2366 Gen 7 eGaN FET offers an ultra-low RDS(on) of 0.84 mΩ, enabling extremely efficient power conversion and fast switching performance. The three-phase inverter solution can deliver up to 70 A peak (50 A RMS) output current from input voltages ranging between 18 V and 30 V, making it suited for battery-powered systems operating around a 24-V supply.
The platform supports PWM switching frequencies up to 150 kHz, which is significantly higher than typical silicon-based motor drives, according to EPC. This reduces magnetic component size, minimizes switching losses, and improves overall system responsiveness, the company said.
The board, measuring 79 × 80 mm, provides high-bandwidth current sensing on all three phases, supporting measurements up to ±125 A, while phase and DC-bus voltage sensing provide the feedback required for precise monitoring and advanced motor control techniques such as field-oriented control (FOC) and space-vector PWM. Other features include shaft encoder and Hall-sensor interfaces and multiple test points.
Applications include drones, robotics, industrial automation, handheld power tools, and other compact electromechanical systems in which high efficiency and power density are critical.
The EPC91121 reference design board and devices are available now from DigiKey and Mouser. Design support files, including schematic, bill of materials, and Gerber files, are available on the EPC91121 product page.
EPC’s EPC91121 BLDC motor drive evaluation board (Source: Efficient Power Conversion)
Renesas Electronics Corp. unveiled its high-voltage TP65B110HRU at APEC 2026, claiming the first bidirectional switch using depletion-mode (d-mode) GaN technology, capable of blocking both positive and negative currents in a single device with integrated DC blocking. Target applications include single-stage solar microinverters, AI data centers, and on-board EV chargers.
The device simplifies power converter designs and replaces conventional back-to-back FET switches with a single low-loss, fast-switching, easy-to-drive device, Renesas said. “By integrating bidirectional blocking functionality on a single GaN product, power conversion can be achieved in a single stage using fewer switching devices.”
This is an alternative to today’s high-power-conversion designs that use unidirectional silicon or SiC switches, which block current in only one direction when in the off state. Many of these single-stage designs use conventional unidirectional switches back to back, Renesas said, resulting in a fourfold increase in switch count and reduced efficiency.
Renesas’s 650-V SuperGaN devices are based on a proprietary, normally off technology. The TP65B110HRU combines a high-voltage bidirectional d-mode GaN chip co-packaged with two low-voltage silicon MOSFETs with high threshold voltage (3 V), high gate margin (±20 V), and built-in body diodes for efficient reverse conduction. It offers high-dV/dt capability of >100 V/ns, with minimum ringing and short delays during on/off transitions.
Compared with enhancement-mode bidirectional GaN devices, the Renesas switch is compatible with standard gate drivers and requires no negative gate bias. The result is a simpler, lower-cost gate-loop design and fast, stable switching in both soft- and hard-switching operations without a performance penalty, the company said.
The TP65B110HRU bidirectional GaN switch, housed in a TOLT top-side-cooled package, is available now, along with the RTDACHB0000RS-MS-1 evaluation kit. Also available are two reference solutions (500-W Solar Microinverter and Three-Phase Vienna Rectifier System) that leverage the TP65B110HRU and other Renesas-compatible devices.
Renesas’s TP65B110HRU bidirectional GaN switch (Source: Renesas Electronics Corp.)
Renesas also announced a GaN charging solution for industrial and IoT electronics applications. The GaN-based Half-Wave LLC (HWLLC) platform supports 500-W or higher operation across IoT, industrial, and infrastructure systems. The HWLLC converter topology scales a compact power architecture from 100-W-class designs to 500 W, targeting high-speed chargers for power tools, e-bikes, and other appliances.
The topology addresses the size, heat, and efficiency penalties of legacy topologies. It also helps designers move beyond 100-W USB-C charging devices and adopt 240-W USB EPR charging to shrink proprietary brick chargers in smartphones, laptops, and many gaming systems, Renesas said. The fast-charging technology was recently incorporated into Belkin’s GaN-based Z-Charger that features Renesas’s zero-standby-power (ZSP) chip with advanced SuperGaN d-mode GaN technology.
Building on its proprietary ZSP technology, the solution encompasses four new controller ICs, including the RRW11011 interleaved power-factor correction (PFC) and HWLLC combo controller, the RRW30120 USB-PD protocol and closed-loop controller, the RRW40120 half-bridge GaN gate driver, and the RRW43110 intelligent synchronous rectifier controller.
The RRW11011 PFC with phase-shift control cancels ripple, reduces component size and cost, and balances current. It also allows designers to lower operating temperature while delivering the wide output range (5 V to 48 V) required by USB Extended Power Range (EPR) and other variable-load charging systems. The RRW30120 USB-PD protocol and closed-loop controller achieves a maximum USB power delivery of 240 W. Together in a 240-W USB EPR power adapter design, the solution claims the highest power density in the industry (3 W/cc) and 96.5% peak efficiency.
The four devices enabling the HWLLC solution are available in addition to the EBC10293 240-W USB-PD EPR evaluation board. Reference solutions include the 240-W AC/DC Adapter and 300-W Lighting Power Platform.
Renesas’s Half-Wave LLC GaN charging solution (Source: Renesas Electronics Corp.)
AI data centers
Infineon Technologies AG released several power solutions aimed at AI data centers, including voltage regulation devices, digital power controller ICs, and CoolGaN-based high-voltage intermediate bus converter (IBC) reference designs.
Infineon expanded its voltage regulation portfolio with the XDPE1E digital multiphase PWM buck controllers and TDA49720/12/06 PMBus PoL voltage regulators to deliver higher compute performance per rack in AI data centers as next-generation platforms drive new requirements for power architectures.
The XDPE1E3G6A and XDPE1E496A, digital three- and four-loop multiphase PWM buck controllers, respectively, target multi-processor AI platforms and advanced VR inductor topologies. They offer highly configurable phase allocation and fully programmable phase firing order and support multiple protocols, including PMBus, AVSBus, SVID, and SVI3. Digital features, including active transient response, fast DVID, automatic phase shedding, and PFM, help address dynamic AI loads. Infineon also offers built-in tools such as Digital Scope, Black Box recording, and protection features.
To address the increasing number of non-core rails in AI systems, which require efficient regulation with accurate monitoring and control, Infineon developed the TDA49720/12/06 family of fully integrated PoL DC/DC buck regulators with PMBus-compliant digital telemetry. This family, with 6-A, 12-A, and 20-A options in 3 × 3-mm and 3 × 3.5-mm packages, helps maximize power density and simplify layout on accelerator cards and server boards.
The PMBus telemetry enables accurate reporting of key parameters, including output voltage, load current, input voltage, and die temperature. The devices also feature a proprietary valley-current-mode constant-on-time control scheme that enables fast transient response, cycle-by-cycle current limiting, and support for all-MLCC output capacitance designs. The devices operate from 2.7-V to 16-V input and across a wide junction temperature range of −40°C to 150°C.
Infineon’s XDPE1E496A digital multiphase PWM buck controller (Source: Infineon Technologies AG)
Infineon also expanded its XDP digital power controller IC family with the XDPP1188-200C, targeting higher power levels in AI servers. The device supports intermediate bus conversions from 48 V to 12 V or lower, as well as future higher-voltage DC systems, including the conversion of ±400-V or 800-VDC bus voltage to 48 V, 24 V, or 12 V.
The XDPP1188-200C complements Infineon’s CoolGaN-based high-voltage IBC reference designs (also introduced at APEC) and supports custom high-/medium-voltage IBC designs up to 800 VDC in AI data centers. The XDPP1188-200C allows optimization for customer-specific requirements. In 48-V systems, the controller works seamlessly with medium-voltage IBC modules, delivering an optimized power supply chain from the intermediate bus to processor voltage regulation.
Key features include an advanced feed-forward control mechanism to improve response time and stability under dynamic input transient conditions, and a nonlinear fast transient response to handle the rapid power demand fluctuations in AI servers. The device also integrates advanced power management techniques at light-load conditions and supports bidirectional configuration, enabling flexible power management.
The XDPP1188-200C digital power controller is sampling now. Volume production is expected in the first quarter of 2026.
Infineon’s XDPP1188-200C digital power controller (Source: Infineon Technologies AG)
Infineon also introduced two high-voltage IBC reference designs to help customers make the shift to AI server power architectures powered by ±400 VDC and 800 VDC.
Leveraging Infineon’s 650-V CoolGaN switches, the reference designs address two architectures: The 800-VDC to 50-V design is an intermediate stage for downstream 48-V IBC modules, while the 800-VDC to 12-V design enables direct conversion for compact server boards. The XDPP1188-200C digital controller is available for custom implementations, as noted earlier, with output voltages of 48 V, 24 V, or 12 V.
The 800-VDC or ±400-V to 50-V high-voltage IBC reference design demonstrates more than 98% efficiency at full load. Leveraging Infineon’s high- and medium-voltage CoolGaN switches, EiceDRIVER gate drivers, and a PSOC microcontroller (MCU), it consists of two 3-kW 400-V to 50-V converter building blocks, which are configured in an input-series-output-parallel (ISOP) arrangement. It scales to a 6-kW TDP and supports up to 10.8 kW for 400 µs, using a planar PCB integrated transformer with multiple synchronous rectifier stages and soft switching across all load conditions to reduce electromagnetic interference. It claims an exceptional 2.5-kW/in.³ power density in a 60 × 60 × 11-mm form factor.
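As a quick sanity check on that density figure, the arithmetic below (my calculation from the stated numbers, not Infineon data) converts the 60 × 60 × 11-mm form factor and 6-kW continuous rating into W/in.³ and lands right at the claimed ~2.5 kW/in.³:

```python
# Rough check of the claimed ~2.5-kW/in.^3 power density (author's arithmetic)
length_mm, width_mm, height_mm = 60.0, 60.0, 11.0
volume_cm3 = (length_mm * width_mm * height_mm) / 1000.0  # mm^3 -> cm^3
volume_in3 = volume_cm3 / 16.387                          # cm^3 -> in^3
continuous_power_w = 6000.0                               # stated 6-kW TDP
print(f"{continuous_power_w / volume_in3:,.0f} W/in^3")   # ~2,480 W/in^3
```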
The second reference design is an ultra-thin, high-voltage IBC demo board with an 8-mm height, which converts an 800-VDC bus voltage directly to a 12-V intermediate rail. The design delivers 6-kW TDP and supports up to 10.8 kW for 400 µs. It features a power density above 2,300 W/in.³, up to 98.2% peak efficiency, and 97.1% efficiency at full load. It operates as an ISOP half-bridge LLC converter, leveraging Infineon’s 650-V CoolGaN and 40-V OptiMOS 7 switches, with EiceDRIVER gate drivers and a PSOC MCU.
Infineon’s high-voltage IBC demo board (Source: Infineon Technologies AG)
A host of other semiconductor solution providers highlighted their latest and greatest at APEC 2026. Toshiba America Electronic Components Inc., for example, showcased several new products and technologies, ranging from its UMOS 11 MOSFETs and top-side-cooled TOGT package to SiC modules and MCU and motor control solutions.
On display were Toshiba’s expanded family of UMOS 11 MOSFETs in industry-standard packages. These devices feature improved switching characteristics and reduced RDS(on) per area compared with the previous UMOS 10 generation. The company also highlighted its WBG semiconductor portfolio, including high-power SiC power modules for grid-level and industrial systems; 750-V and 1,200-V SiC die and modules for automotive drivetrain inverter applications; and GaN devices.
Toshiba also featured its top-side-cooled TOGT packaging that targets high-power-density applications. It enables heat dissipation through the top of the package to reduce thermal stress on the PCB.
Other solutions presented at the show include MCU and motor control solutions (MCU, MCD, and SmartMCD devices) for automotive body electronics, electronic control units (ECUs), and industrial control applications. System reference designs highlighted include high-efficiency power supply platforms such as 3-kW server PSUs for data center applications, automotive ECU power architectures, and motor control reference designs for pump and power tool systems.
Toshiba’s UMOS 11 MOSFETs (Source: Toshiba America Electronic Components Inc.)
The post APEC 2026 showcases advances in power electronics appeared first on EDN.
A fully floating BJT-based LED current driver

The circuit in Figure 1 combines a VBE-referenced current source with a current mirror to implement a simple two-terminal, fully floating LED current sink or source. This approach is well-suited for applications in which tight current accuracy is not required, such as driving LED strings where a 5–10% current tolerance is acceptable.
Figure 1 A simple, fully floating LED current driver based on a VBE-referenced current source and a BJT current mirror. The circuit operates as either a current sink or source and supports output currents up to 100 mA. Note: R2=R3. All resistors are ¼ W and 5%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The LED driver can drive an arbitrary number of series-connected LEDs, provided the available supply voltage is at least 2.3 V. The topology supports both high-side and low-side operation, as shown in Figure 2. Output current ranges from a few milliamps up to 100 mA, with no requirement for heat sinks.

Figure 2 High-side and low-side operating configurations enabled by the fully floating nature of the LED driver.
The current source formed by BJTs Q1 and Q2 is set by resistor R1. A current mirror implemented with BJTs Q3 and Q4, using equal emitter resistors (R2 = R3), forces nearly equal currents in branches I1 and I2, as long as the voltage drop across the emitter resistors is at least 0.5 V. This requirement helps compensate for VBE mismatch between the transistors. The total LED current is therefore doubled, while power dissipation is evenly shared among the devices.
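As a rough cross-check (my sketch, not part of the original article), the branch current of a VBE-referenced source is roughly VBE/R1, and the mirror doubles the total, so Iout ≈ 2·VBE/R1. Assuming a typical VBE of about 0.7 V, this simple estimate tracks the measured values in Table 1 to within a few percent:

```python
def led_current_estimate(r1_ohms, vbe=0.7):
    """Rough estimate of total LED current for the Figure 1 driver.

    Assumes the VBE-referenced source sets one branch to about VBE/R1 and the
    mirror (R2 = R3) doubles the total; VBE = 0.7 V is an assumed typical value.
    """
    return 2.0 * vbe / r1_ohms

for r1 in (150, 82, 33, 15):  # R1 values from Table 1
    print(f"R1 = {r1:>3} ohm -> Iout ~= {led_current_estimate(r1) * 1e3:.1f} mA")
# Prints ~9.3, 17.1, 42.4, and 93.3 mA vs. measured 9.3, 18.2, 44, and 97 mA
```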
Experimental data (Table 1) confirm the expected behavior: output current scales with R1, and the minimum supply voltage increases from 2.3 V at 9.3 mA to 2.8 V at 97 mA, consistent with the headroom required by the VBE-referenced source and mirror.
| R1    | R2 = R3 | Iout    | Vsupply(min) |
|-------|---------|---------|--------------|
| 150 Ω | 100 Ω   | 9.3 mA  | 2.3 V        |
| 82 Ω  | 56 Ω    | 18.2 mA | 2.4 V        |
| 33 Ω  | 22 Ω    | 44 mA   | 2.5 V        |
| 15 Ω  | 10 Ω    | 97 mA   | 2.8 V        |
Table 1 Experimental data showing R1, R2/R3, and corresponding Iout and Vsupplymin.
With a minimum operating voltage of approximately 2.8 V, the circuit dissipates about 280 mW at a maximum output current of 100 mA. Higher supply voltages reduce efficiency due to increased power dissipation in the driver.
Because the LED current is VBE-dependent, it exhibits temperature sensitivity, with a temperature coefficient of approximately −0.3%/°C. Using a resistor with a negative temperature coefficient for R1 can partially compensate for this effect.
—Luca Bruno has a Master’s Degree in Electronic Engineering from the Politecnico of Milan. He has written 16 EDN Design Ideas.
Related Content
- LED strings driven by current source/mirror
- Current mirror drives multiple LEDs from a low supply voltage
- A current mirror reduces Early effect
- A two-way mirror — current mirror that is
The post A fully floating BJT-based LED current driver appeared first on EDN.
The truth about AI inference costs: Why cost-per-token isn’t what it seems

The AI industry has converged on a deceptively simple metric: cost per token. It’s easy to understand, easy to compare, and easy to market. Every new system promises to drive it lower. Charts show steady declines, sometimes dramatic ones, reinforcing the impression that AI inference is rapidly becoming cheaper and more efficient.
But simplicity, in this case, is misleading.
A token is not a fundamental unit of cost in isolation. It is the visible output of a deeply complex system that spans model architecture, hardware design, system scaling, memory behavior, power consumption, and operational efficiency. Reducing that complexity to a single number creates a dangerous illusion: that improvements in cost per token necessarily reflect improvements in the underlying system.
They often do not.
To understand what is really happening, we need to step back and look at the full system—specifically, the total cost of ownership (TCO) of an AI inference deployment.
From benchmark numbers to real systems
Most comparisons in the industry start from benchmark results. Inference benchmarks such as MLPerf provide a useful baseline because they fix key variables—model, latency constraints, and workload characteristics—allowing different systems to be evaluated under the same conditions.
Take a large-scale model such as Llama 3.1 405B. On a modern GPU system like Nvidia’s GB200 NVL72, MLPerf reports an aggregate throughput that translates to roughly 138 tokens per second per accelerator. An alternative inference-focused architecture might deliver a lower figure—say, 111 tokens per second per accelerator.
At first glance, the conclusion seems obvious: the GPU is faster.
But this is precisely where the problem begins. That number describes the performance of a single accelerator under specific benchmark conditions. It says very little about how the system behaves when deployed at scale.
And in real-world data centers, scale is everything.
The illusion of linear scaling
In theory, performance should scale linearly with the number of accelerators. Double the hardware, double the throughput. In practice, this never happens. Communication overhead, synchronization, memory contention, and architectural inefficiencies all conspire to reduce effective performance as systems grow.
This effect is captured by what is often called scaling efficiency. It’s one of the most important and most overlooked parameters in AI infrastructure.
A system that achieves 97% scaling efficiency will behave differently from one that achieves 85%, even if their per-chip performance appears comparable. Over dozens or hundreds of accelerators, that difference compounds rapidly.
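To see how quickly that gap can compound, consider a toy scaling model (entirely illustrative: the per-accelerator rate reuses the MLPerf-style figure cited earlier, and the assumption that a fixed fraction of throughput is retained at each doubling of system size is mine, not a benchmark result):

```python
import math

def effective_throughput(per_chip_tps, num_chips, eff_per_doubling):
    """Toy model: a fixed fraction of throughput survives each doubling of system size."""
    doublings = math.log2(num_chips)
    return per_chip_tps * num_chips * (eff_per_doubling ** doublings)

for n in (8, 64, 512):
    high = effective_throughput(138.0, n, 0.97)  # assumed 97% retained per doubling
    low = effective_throughput(138.0, n, 0.85)   # assumed 85% retained per doubling
    print(f"N={n:>3}: {high:>9,.0f} vs {low:>9,.0f} tokens/s (ratio {high / low:.2f}x)")
```

At eight accelerators the two systems are roughly 1.5× apart; by several hundred accelerators the gap has grown past 3×, even though per-chip performance is identical in this sketch.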
This is where inference-specific architectures begin to separate themselves.
Unlike training, inference does not require backpropagation. The execution flow is more predictable, the data movement patterns are more structured, and the opportunity for optimization is significantly greater. Architectures that are purpose-built for inference can exploit this determinism to sustain high utilization across large systems.
One architecture is a case in point. By moving away from the traditional GPU execution model and adopting a deeply pipelined, dataflow-oriented design, it minimizes the coordination overhead that typically erodes scaling efficiency. The result is not just higher peak utilization but, more importantly, consistently high utilization at scale.
When the system flips the narrative
Once performance is evaluated at the level that actually matters—servers, racks, and data centers—the comparison often changes.
Throughput per server depends not only on per-accelerator performance, but also on how many accelerators are packed into a system and how efficiently they work together. Throughput per rack adds another layer, incorporating system density and infrastructure constraints. When power is introduced into the equation, the relevant metric becomes throughput per kilowatt.
It is at this level that architectural differences become impossible to ignore.
GPU-based systems are optimized for flexibility. They can handle a wide range of workloads, but that generality introduces inefficiencies when running highly structured inference tasks. Data must move between memory hierarchies, threads must be synchronized, and execution units often sit idle waiting for dependencies to resolve.
The architecture mentioned above takes a different approach. By eliminating the traditional memory hierarchy bottlenecks and replacing them with a large, flat register file combined with a dataflow execution model, it effectively removes the “memory wall” that limits sustained performance in GPU systems. Data is kept close to compute, and execution proceeds in a continuous pipeline rather than in discrete, synchronized steps.
The consequence is subtle but powerful: even if peak per-chip performance appears lower, the effective throughput at the system level can be significantly higher. More importantly, that performance is achieved with far greater energy efficiency.
Power: The constraint that doesn’t go away
Energy consumption is not just a cost factor; it’s the constraint that ultimately defines the scalability of AI infrastructure.
Electricity prices, power usage effectiveness (PUE), and utilization rates are not theoretical constructs. They are operational realities that directly impact the economics of every deployment. A system that consumes less energy per token has an intrinsic advantage that compounds over time.
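A small, hypothetical calculation makes the point (the throughput, power, PUE, and electricity-price figures below are illustrative assumptions, not measured data): two systems with identical token throughput but different power draw end up with very different energy costs per million tokens.

```python
def energy_cost_per_million_tokens(tokens_per_sec, system_kw, pue, price_per_kwh):
    """Energy cost attributable to one million tokens (illustrative model only)."""
    hours_per_million_tokens = (1e6 / tokens_per_sec) / 3600.0
    facility_kwh = system_kw * pue * hours_per_million_tokens
    return facility_kwh * price_per_kwh

# Hypothetical systems: same throughput, different power draw
cost_a = energy_cost_per_million_tokens(8000, system_kw=120, pue=1.3, price_per_kwh=0.10)
cost_b = energy_cost_per_million_tokens(8000, system_kw=80, pue=1.3, price_per_kwh=0.10)
print(f"System A: ${cost_a:.2f} per million tokens")  # ~$0.54
print(f"System B: ${cost_b:.2f} per million tokens")  # ~$0.36
```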
This is where inference-native architectures again demonstrate their value.
Because the architecture’s design minimizes unnecessary data movement and maximizes pipeline utilization, it delivers more tokens per unit of energy. The metric that matters is not peak FLOPS, but tokens per kilowatt—and on that axis, architectural efficiency becomes the dominant factor.
In large-scale deployments, this translates directly into lower operating costs and improved total cost of ownership.
The hidden influence of workload assumptions
Benchmarking does not eliminate bias—it simply moves it.
Parameters such as context length, output token size, and concurrency have a profound impact on system behavior. A model running at 128K context imposes different demands than one operating at 8K. Latency, memory pressure, and throughput all shift accordingly.
Architectures that rely on heavy memory movement are particularly sensitive to these changes. As context length grows, the cost of moving data becomes increasingly dominant.
By contrast, architectures that localize data and streamline execution are more resilient to these shifts. This is another area where the architecture’s register-centric, dataflow design provides an advantage: it reduces dependence on external memory bandwidth and maintains more consistent performance across varying workloads.
From metrics to economics
When performance, power, and infrastructure are combined, the discussion moves from engineering to economics.
Total cost of ownership captures the full picture: capital expenditure, operating costs, energy consumption, and system utilization over time. It reflects not just how fast a system can run, but how efficiently it can deliver value in a real deployment.
This is where many cost-per-token claims fall apart.
A lower cost per token can be achieved in multiple ways—by improving efficiency, by adjusting assumptions, or by accepting lower margins. Without a system-level view, it’s impossible to distinguish between these scenarios.
What matters is not the headline number, but the underlying drivers.
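One way to see why the headline number alone is ambiguous: very different mixes of capital cost, energy, other operating cost, and utilization can produce the same cost per token. The sketch below (all figures are invented assumptions, purely for illustration) folds those drivers into a single estimate and shows how strongly utilization alone moves the result.

```python
def cost_per_million_tokens(capex_usd, amortization_years, energy_usd_per_year,
                            other_opex_usd_per_year, tokens_per_sec, utilization):
    """Toy TCO model: amortized capex plus opex, divided by tokens actually delivered."""
    annual_cost = capex_usd / amortization_years + energy_usd_per_year + other_opex_usd_per_year
    tokens_per_year = tokens_per_sec * utilization * 3600 * 24 * 365
    return annual_cost / (tokens_per_year / 1e6)

# Same (hypothetical) system; the only change is how busy it actually is.
for util in (0.9, 0.5):
    c = cost_per_million_tokens(3_000_000, 4, 150_000, 200_000, 8000, util)
    print(f"Utilization {util:.0%}: ${c:.2f} per million tokens")
```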
The risk of optimizing the wrong thing
The industry’s focus on cost per token has created a subtle distortion. Instead of optimizing systems, we risk optimizing metrics. This is not unique to AI. Every technology cycle has its preferred metrics, and every metric can be gamed if taken out of context.
A truly efficient system is one that aligns performance, energy consumption, and scalability. It delivers consistent throughput, minimizes waste, and operates effectively under real-world constraints. This is precisely the direction that inference-specific architectures are taking.
The aforementioned architectural approach illustrates this shift. Rather than attempting to adapt a general-purpose architecture to an increasingly specialized workload, it starts from the workload itself and builds upward. The result is a system that is not only efficient in theory, but also in practice—at scale, under load, and within the constraints of real data centers.
Toward a more honest conversation
None of this diminishes the achievements of GPU-based systems. They have been instrumental in the rise of modern AI and remain incredibly powerful platforms. But the workloads are changing. Large language model inference is not the same as training, and it’s not the same as graphics. As the industry shifts toward deployment at scale, the limitations of general-purpose architectures become more apparent.
At the same time, new architectures, as described above, are emerging that are designed specifically for these workloads. They may not always win on peak performance metrics, but they are optimized for the realities of inference: predictable execution, high utilization, and energy efficiency.
If we want to compare these systems fairly, we need to move beyond simplified metrics and toward system-level evaluation.
The bottom line
Cost per token is not wrong—but it is incomplete.
The real question is not how cheaply a token can be produced in isolation, but how efficiently a system can deliver tokens over time, at scale, within the constraints of power, infrastructure, and workload demands.
When viewed through that lens, the path forward becomes clearer.
The next generation of AI infrastructure will not be defined by the highest peak performance or the most aggressive benchmark result. It will be defined by architectures that align performance with efficiency, and efficiency with economics.
And in that context, the industry may find that the most important innovation is not faster hardware—but better architecture.
Lauro Rizzatti is a business development executive at VSORA, a pioneering technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Related Content
- Chiplets Are The New Baseline for AI Inference Chips
- Custom AI Inference Has Platform Vendor Living on the Edge
- The next AI frontier: AI inference for less than $0.002 per query
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
- Purpose-built AI inference architecture: Reengineering compute design
The post The truth about AI inference costs: Why cost-per-token isn’t what it seems appeared first on EDN.
TP-Link’s Kasa EP25: Energy monitoring for a hoped-for utility bill nose-dive

How easy is it to analyze and optimize how much power the device connected to a smart plug is drawing? The answer depends in part on which hardware and firmware version you’re running.
Next up in my ongoing TP-Link smart home device ecosystem series of hands-on evaluations and teardowns:
- Tapo or Kasa: Which TP-Link ecosystem best suits ya?
- TP-Link’s Kasa HS103: A smart plug with solid network connectivity
- TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again
is the EP25 smart plug, which builds on the EP10 foundation with two feature set additions: Apple HomeKit (and Siri, for that matter) support, along with energy monitoring capabilities.
I bought a two-pack (with an associated “P2” product name suffix) from Amazon’s Resale (formerly Warehouse) sub-site for $13.29 plus tax during a 30%-off promotion last November. They also come in an “EP25P4” four-pack version. I’ll start with some stock photos:






Although I’ve identified the EP25 as the enhanced sibling of the EP10, particularly referencing the naming-format commonality, those of you who’ve already analyzed the above graphic with device dimensions (not to mention the side switch location) might understandably be confused. Doesn’t it look more like the earlier, beefier, HS103? Indeed, it does. Here it is below the EP10:

And now underneath the HS103:

Perhaps the larger chassis was necessary to fit the additional feature-implementing circuitry? There's one way to find out for sure: take it apart. So, let's start, as usual, with some box shots, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:


This isn’t what the box backside originally looked like, actually:

When it arrived, there was a barcode-inclusive sticker stuck to it, as is typical with products that cycle back through the Amazon Resale sub-site after initial sale-then-customer return:

But stuck to it was something I’d not experienced before: another sticker, with a smaller black rectangle near its center:

I had a sneaking suspicion that I'd find an RFID or other tracking tag on the other side. I was right:
Continuing around the outer package sides:



Judging from the already-severed clear tape on the bottom of the box, in contrast to the still-intact tape holding the top flap in place, I assumed the original owner got inside through the bottom-end pathway:

Yup. I don’t know what surprises me more (and I’ve also seen it plenty of times before): how brutishly some folks mangle the various packaging piece(s) to get to the device(s) inside, or that they still have the impudence to return the goods for refund afterwards. Now to cut the top’s transparent tape and try out the alternative entry path:

At least the original owner was thoughtful enough to put the sliver of quick-install literature back in the box prior to returning it. Although, on second thought, he or she probably never even got to it before sending everything back. There was also this, reflective of its Apple protocol-friendliness:

You also may have already noticed in the earlier bottom-view open-box shot that one of the devices inside was still encased by a protective translucent sleeve, while that of the other device was missing. I went with the latter as my teardown victim, operating under the theory that its still-plastic-covered sibling was unused and therefore most likely to still be functional for future hands-on evaluation coverage purposes. Here’s our patient:





This last shot of the underside of the device:

Specifically, this closeup of the specs, including the all-important FCC ID (2AXJ4KP125M):
is as good a time as any to explain the background to my “The answer depends in part on which hardware and firmware version you’re running” comment in this post’s subtitle. Note the following lines of prose on the product support pages for the EP25P2 and EP25P4:
Vx.0=Vx.6/Vx.8 (eg:V1.0=V1.6/V1.8)
Vx.x0=Vx.x6/Vx.x8 (eg:V1.20=V1.26/V1.28)
Vx.30=Vx.32 (eg:V3.30=V3.32)
I’d mentioned in the prior teardown in this series that TP-Link tends to cycle through numerous hardware revisions throughout a product’s life, with each hardware iteration accompanied by multiple firmware versions, and the cadence combination resulting in inconsistent functionality (said another way: bugs). The EP25 is no exception to this general rule. That said, “inconsistent functionality” seemingly is particularly notable in this product case (grammatical tweaks by yours truly):
On Amazon, I bought a 2-unit box set of the EP25P2 (“Hardware 2.6” in the Kasa app), and a 4-unit box of the EP25P4 (“Hardware 1.0” in the Kasa app). They market them as the exact same product, but the EP25P2 has much better energy and power consumption data and graphs, and a cost tool. The other just has a crude power read out. It seems like something they should’ve been clear about, and like something they could fix in the app software. I’m annoyed they did this and will return the EP25P4.
FWIW, looking back both at the device bottom closeup and the earlier bottom box shot, I’m guessing “US/2.6” references hardware v2.6. Again:
Curiously, the four-pack (EP25P4) support page lists three hardware versions (V1.60, V1.80 and V2.60), albeit not the V1.0 h/w mentioned in the earlier Reddit post…and the two-pack (EP25P2) page mentions only V2.60.
Time to delve inside. The case-disassembly methodology was unsurprisingly identical to that for the earlier HS103, so in the interest of brevity I’ll spare you another iteration of the full image suite of steps. See the earlier teardown for ‘em; here’s today’s teardown subset. One upside this second time around: no blood loss by yours truly!







As before, I ‘spect this is the assembly subset that you’re all most interested in:
once again based on (among other things) a Hongfa HF32FV-16 relay (the tan rectangular “box” at far right). Multiple products, along with multiple hardware versions for each, may evolve in a general sense, but some things stay the same…
Detailing the “smarts”
And specifically, here’s the “action” end:
From this side, the embedded antenna is visible; the PCB is otherwise bare:
You can see the antenna from the other side, too, plus a more broadly interesting presentation:
The PCB “lay of the land” is reminiscent of that inside February’s HS103, including the respective switch and LED locations:
This time, however, the prior design’s Realtek RTL8710 has been upgraded to the dual-core RTL8720 (PDF), whose beefier processing “chops” are presumably helpful for implementing the added energy monitoring and HomeKit protocol capabilities, along with its expanded internal RAM and (optionally integrated) flash memory. In this particular design, however, the flash memory is external, taking the form of an Eon Silicon Solution EN25Q32B 32 Mbit SPI serial device. It’s in the upper right corner of the PCB, next to the LED and occupying one of the IC sites you might have already noticed was unpopulated in the HS103 implementation. The other previously unpopulated IC site, below the EN25Q32B, now houses a Shanghai Belling BL0937 (PDF) single-phase energy monitoring IC. Eureka!
Tying up loose ends
As with its TP-Link (but not more amenable Amazon) smart plug predecessors, I was unable to wedge the EP25’s PCB away from the rear half of its enclosure, so there’ll be no circuit board backside photos for you…from me, at least. Alternatively, you can always check out the ones published by the FCC. If you do, you may walk away amazed (as I was) by the total area dominance by multiple large globs of solder.
In closing, I thought I’d share a somewhat related video I found while doing my research. It’s a review of the HS110, the energy monitoring variant of TP-Link’s original HS100 smart plug that I tore down nine years back:
As those Virginia Slims commercials used to say, “You’ve come a long way.” And with that, I’ll turn it over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Tapo or Kasa: Which TP-Link ecosystem best suits ya?
- TP-Link’s Kasa HS103: A smart plug with solid network connectivity
- TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again
The post TP-Link’s Kasa EP25: Energy monitoring for a hoped-for utility bill nose-dive appeared first on EDN.
Radar transceiver scales for automated driving

NXP’s TEF8388 RF CMOS automotive radar transceiver supports Level 2+ and Level 3 ADAS, with a roadmap toward higher levels of automation. Operating in the 76- to 81-GHz FMCW radar band, it provides 8 transmitters and 8 receivers (8T8R), scalable to 32T32R configurations for both entry-level and high-performance systems. Paired with NXP radar processors, it forms an imaging radar platform that addresses diverse performance, cost, and regulatory requirements across global markets.

The TEF8388 delivers strong RF performance—14 dBm Pout and 12 dB NF—while keeping power consumption comparable to less integrated 3T4R devices. An on-chip M7 core provides flexible chirp programming, calibration, and functional safety management.
Occupying a 16×16-mm footprint, the TEF8388 uses an optimized pin layout and strategic launcher placement to enhance channel isolation and signal quality. It meets AEC-Q100 and ISO 26262 SEooC ASIL B requirements and operates over a junction temperature range of –40 °C to +150 °C.
Development support for lead customers is available now. Mass-market support will follow later in 2026.
The post Radar transceiver scales for automated driving appeared first on EDN.
HWLLC topology pushes fast charging to 500 W

A half-wave LLC (HWLLC) platform from Renesas includes four controller ICs rated for up to 500 W for high-speed chargers. The HWLLC AC/DC converter topology scales from 100 W to 500 W, enabling chargers for power tools, e-bikes, and other appliances without the size, heat, and efficiency penalties of legacy topologies.

Combined in a 240-W USB EPR power adapter design, the HWLLC approach achieves a power density of 3 W/cm³ and 96.5% peak efficiency—described as the industry’s highest power density. The 500-W envelope broadens application range, while USB-C EPR capability enables a move beyond 100-W charging.
At the heart of the lineup is the RRW11011, an AC/DC primary-side digital controller with interleaved PFC and HWLLC operation. It delivers a wide 5-V to 48-V output for USB 3.1/3.2 EPR and other variable-load charging systems. The boost PFC stage minimizes ripple, total harmonic distortion, and EMI, while digital two-stage control enhances efficiency and reduces audible noise.
The platform also includes the RRW30120 USB PD 3.2 EPR controller with secondary-side regulation, the RRW40120 600-V half-bridge gate driver optimized for SuperGaN FETs and MOSFETs, and the RRW43110 synchronous rectifier controller.
The RRW11011, RRW30120, RRW40120, and RRW43110 are now in production, and samples are available for evaluation.
The post HWLLC topology pushes fast charging to 500 W appeared first on EDN.
SiC modules raise power density for AI servers

QSiC Dual3 1200-V half-bridge MOSFET modules from SemiQ address the efficiency and thermal demands of liquid-cooled AI data centers. Two of the series’ six devices offer an RDS(on) of just 1 mΩ and achieve a power density of 240 W/in.3 in a 62×152-mm package. The modules are available with or without a parallel Schottky barrier diode to further reduce switching losses in high-temperature environments.

QSiC Dual3 is designed to replace silicon IGBT modules with minimal redesign, reducing both size and weight while maintaining efficiency. All SiC MOSFET die are screened using wafer-level gate-oxide burn-in tests exceeding 1350 V. The modules also feature low junction-to-case thermal resistance, enabling the use of smaller, lighter heatsinks.
The Dual3 lineup includes the following part numbers:

Rated for junction temperatures from −40°C to +175°C, the QSiC modules are also suited for grid converters in energy storage systems, industrial motor drives, uninterruptible power supplies, and EV applications.
Learn more about the QSiC Dual3 modules here.
The post SiC modules raise power density for AI servers appeared first on EDN.
Automotive HMI SiP packs MPU and DDR2 memory

The SAM9X75D5M from Microchip is a hybrid SiP for automotive HMI applications, integrating an 800‑MHz ARM926EJ‑S 32‑bit processor with 512 Mbits of DDR2 SDRAM. The package also includes a 24‑bit LCD controller supporting displays up to 10 in. with XGA resolution (1024×768), simplifying high-performance graphical interfaces.

This hybrid SiP combines MPU-class processing with high-density memory in a single package, reducing PCB complexity while maintaining MCU-style development workflows. For automotive and e-mobility HMIs, it offers real-time OS support and flexible display and camera interfaces, including MIPI DSI, LVDS, RGB, MIPI CSI, and 12-bit parallel I/F.
AEC-Q100 Grade 2 qualified, the SAM9X75D5M provides CAN FD, USB, and Gigabit Ethernet connectivity with Time-Sensitive Networking (TSN) capability. It also integrates 2D graphics, audio, and advanced security functions.
The device comes in a 243‑ball BGA package (part number SAM9X75D5M‑V/4TBVAO) and is priced at $9.12 each in 5000‑unit quantities. Variants with 1 Gbit and 2 Gbits of memory are now sampling.
The post Automotive HMI SiP packs MPU and DDR2 memory appeared first on EDN.
FET-based clamp protects 48-V USB PD EPR lines

Semtech’s TDS5311P circuit protection device delivers near-constant clamping voltage for 48-V USB PD EPR applications. A member of the SurgeSwitch family, it protects a single voltage bus or data line operating at up to 53 V in devices and systems requiring industrial-grade reliability.

Unlike conventional TVS diodes, the TDS5311P uses a surge-rated FET as the main electrical overstress (EOS) protection element. It maintains a nearly constant clamping voltage from the first microsecond of a surge event through the maximum rated current across the full −40°C to +125°C industrial temperature range.
The TDS5311P is rated for transient current up to 24 A (8/20 µs) and peak pulse power of 1512 W (8/20 µs). It meets the IEC 61000-4-5 industrial surge standard of ±1 kV (RS = 42 Ω, CS = 0.5 µF), as well as IEC 61000-4-2 ESD withstand levels of ±20 kV (contact) and ±25 kV (air). Typical clamping voltage is 60 V at 24 A (8/20 µs).
Housed in a 2.0×2.0-mm, 6-pin DFN package, the TDS5311P conserves PCB area compared with SMAJ and SMAB packages. Supplied on tape and reel in 3000-unit quantities, the device costs $0.38256 each ($1147.68 per reel).
The post FET-based clamp protects 48-V USB PD EPR lines appeared first on EDN.
The system architect’s sketchbook: Inside the simulation


Deepak Shankar, founder of Mirabilis Design and developer of VisualSim Architect platform for chip and system designs, has created this cartoon for electronics design engineers.
The post The system architect’s sketchbook: Inside the simulation appeared first on EDN.
Active noise control: Engineering silence in audio systems

In the world of audio, silence is often as valuable as sound. Whether it is the low rumble of an airplane cabin, the drone of traffic, or the hiss of background noise in a recording, unwanted audio can compromise clarity and comfort.
Active noise control (ANC) offers a sophisticated solution: instead of merely blocking noise, it uses microphones, processors, and speakers to generate an equal and opposite signal that cancels interference in real time.
This marriage of acoustics and digital signal processing has transformed how we experience music, communication, and quiet itself, making ANC one of the most elegant applications of engineering in audio systems.
Active noise control vs. active noise cancellation
Before diving in, it’s good to note that active noise control (ANC) is the overarching engineering principle—using sound to counter sound—while active noise cancellation is its most familiar audio application, seen in headphones, earbuds, and car cabins.
This distinction matters because it shows how a fundamental control concept translates into everyday listening, making the science behind ANC directly relevant to how we experience clarity and comfort in audio systems.
Noise management: Isolation, reduction, and cancellation
To effectively manage sound, it’s important to distinguish between passive isolation, active noise reduction (ANR), and active noise cancellation (ANC), as these terms are often conflated in consumer marketing. Passive noise isolation provides the foundation, using physical barriers like dense ear-cup foam and high-quality seals to block sound waves from entering the ear canal, making it effective against a broad spectrum of high-frequency noises.
Beyond this physical barrier, active noise reduction (ANR) and active noise cancellation (ANC) represent the same advanced technology; the former term being more common in aviation and industrial sectors, and the latter in consumer retail. Both utilize integrated microphones and digital signal processing to sample environmental noise and generate a precise “anti-noise” signal in real time.
By applying the principle of destructive interference—creating an inverted wave that effectively neutralizes the original sound—these active systems are uniquely capable of erasing steady, low-frequency sounds that passive methods struggle to mitigate.
Nature’s ANC: How treefrogs and other animals tune out the world
Nature is the original engineer when it comes to acoustics, and while you will not find animals with electronic hardware, some species have evolved ingenious biological mechanisms that function on the exact same principle as active noise cancellation (ANC).
The most striking example is found in certain species of treefrogs, which face the daunting challenge of picking out a specific mate’s call amidst a deafening swamp-wide chorus. To solve this, they possess an internal connection between their eardrums that passes through their lungs; this allows the lungs to act as an acoustic filter, creating a phase-cancellation effect that effectively “mutes” the frequencies of competing species while amplifying the call of their own.
Beyond this direct analogue to ANC, many animals utilize other strategies to combat environmental noise, such as the “Lombard effect,” where birds and primates actively adjust the pitch or volume of their vocalizations to cut through ambient chaos, or the “jamming avoidance response” seen in electric fish, which shift their pulse frequencies to prevent signal interference. Ultimately, while these animals are not wearing headsets, evolution has mastered the art of filtering out the noise to focus on what matters most.
And as a historic note, ADI’s SSM2000 was a pivotal audio IC that revolutionized noise reduction through its patented HUSH “single-ended” technology.
Unlike traditional systems that required complex pre-encoding, SSM2000 could adaptively and dynamically strip away hiss and background noise from any audio source on the fly. By integrating a sophisticated dynamic filter and downward expander into a single, cost-effective package, it became the industry standard for enhancing signal clarity in 1990s consumer electronics—ranging from car stereos to early PC sound cards—offering a clever, hardware-based solution for high-fidelity sound that paved the way for modern signal processing.

Figure 1 From the 1990’s SSM2000 to today’s DSP-driven architectures, engineers leverage biological noise-suppression mechanisms to deliver precision audio clarity. Source: Author
Inside active noise cancellation systems
Active noise cancellation (ANC) works by detecting and analyzing incoming sound patterns, then generating an opposing “anti-noise” signal to neutralize them. This process significantly reduces the level of background noise you hear. ANC is especially effective against steady, low-frequency sounds such as ceiling fans or engine hums. While it’s most commonly found in stereo headsets that cover both ears, some mono headsets also incorporate ANC technology to enhance noise management.

Figure 2 Sketch demonstrates the core principle of ANC. Source: Author
In essence, ANC works by generating an anti-noise waveform that mirrors the shape and frequency of the unwanted sound. This waveform is produced at a phase angle of exactly 180° opposite to the noise, so when both signals meet at the target area, they effectively cancel each other out.
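As a quick illustration of that principle, the short NumPy sketch below (synthetic signals only, not any vendor's algorithm) sums a tone with its inverted copy and shows how even small gain or phase mismatches limit the achievable attenuation:

```python
# Synthetic demo of destructive interference: a 120-Hz "noise" tone plus its
# inverted copy. The gain and phase errors in the second case are made up to
# show how quickly imperfect matching limits the attenuation.
import numpy as np

fs = 48_000                          # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)        # 100 ms of signal
noise = np.sin(2 * np.pi * 120 * t)

anti_ideal = -noise                  # perfect 180-degree inversion
anti_real = -0.95 * np.sin(2 * np.pi * 120 * t + np.deg2rad(5))  # 5% gain, 5-deg phase error

for label, anti in (("ideal", anti_ideal), ("imperfect", anti_real)):
    residual = noise + anti
    # small floor avoids log(0) when cancellation is mathematically perfect
    atten_db = 20 * np.log10(max(np.std(residual), 1e-12) / np.std(noise))
    print(f"{label} anti-noise: residual at {atten_db:.1f} dB relative to the noise")
```

A 5% gain error plus a 5-degree phase error already caps the cancellation at roughly 20 dB, which is why matching matters as much as inversion.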
ANC systems can be implemented through different hardware configurations:
- Feed-forward ANC: A microphone is positioned on the outside of the earphone to capture external noise before it reaches the ear.
- Feed-back ANC: A microphone is placed inside the earphone, monitoring the sound that actually enters the ear canal and canceling it in real time.
- Hybrid ANC: This combines both feed-forward and feed-back methods, offering more precise and adaptive noise reduction across a wider range of frequencies. That is, two microphones are used to form a closed-loop design. The reference microphone captures incoming external noise before it reaches the ear, while the error microphone monitors the sound inside the ear canal. This dual setup enables the system to cancel noise effectively and avoid feedback issues.
Beyond hardware design, ANC relies on adaptive cancellation. This technique uses one or more microphones to continuously detect external noise and dynamically adjust the anti-noise waveform in real time to suit changing environments.
While some specialized industrial noise-control systems use a ‘synthesis method’—where the noise pattern is sampled and a known waveform is generated to counteract it—modern consumer headphones rely almost exclusively on adaptive, real-time processing to handle the unpredictable and constantly changing noise of the real world.
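A minimal sketch of that adaptive idea, using a purely synthetic reference signal and acoustic path, and ignoring the secondary speaker-to-ear path that a real system must also model, is an LMS filter that learns to predict the noise at the ear so it can be subtracted:

```python
# Synthetic LMS noise canceller: the filter learns to predict the noise reaching
# the "ear" from a reference-microphone signal, then subtracts it. The acoustic
# path and all signals here are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
reference = rng.standard_normal(n)                  # feed-forward (outside) microphone
path = np.array([0.6, 0.3, 0.1])                    # assumed primary acoustic path
noise_at_ear = np.convolve(reference, path)[:n]     # noise that actually reaches the ear

taps, mu = 8, 0.01
w = np.zeros(taps)
residual = np.zeros(n)
for i in range(taps, n):
    x = reference[i - taps + 1:i + 1][::-1]         # most recent reference samples first
    y = w @ x                                       # adaptive estimate of the noise
    residual[i] = noise_at_ear[i] - y               # what the listener would hear
    w += mu * residual[i] * x                       # LMS weight update

converged = residual[-2000:]
print("residual after adaptation:",
      f"{10 * np.log10(np.mean(converged**2) / np.mean(noise_at_ear**2)):.1f} dB")
```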
Broadband vs. narrowband noise cancellation
In the field of active noise control engineering, the terms broadband and narrowband carry meanings that differ from their use in telecommunications. Broadband ANC refers to systems designed to reduce unpredictable, wide-frequency environmental noise such as traffic, crowd chatter, or wind.
Because this type of noise is random, the system requires a coherent reference signal to generate an effective anti-noise waveform. By measuring the primary noise upstream, the digital controller can model the phase and magnitude of the disturbance in real time, allowing correlated noise to be canceled downstream at the loudspeaker.
Narrowband ANC, on the other hand, is tailored to periodic noise generated by rotational machinery, such as engines or fans. Instead of relying solely on an acoustic input microphone to capture the noise mid-propagation, the system uses a non-acoustic reference—such as a tachometer signal—to determine the fundamental rotational frequency.
Since repetitive noise occurs at predictable harmonics of this frequency, the control system can model these components with high precision. This approach is particularly effective in vehicle cabins, where it suppresses specific engine-related vibrations without interfering with speech, radio performance, or essential warning signals.
Modern ANC implementations often combine these strategies, resulting in adaptive broadband feedforward control, which utilizes acoustic sensors, and adaptive narrowband feedforward control, which employs non-acoustic sensors like accelerometers or tachometers.
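Here's a rough sketch of the narrowband case, assuming a tachometer-derived fundamental and illustrative harmonic amplitudes (none of these values come from a real vehicle). Instead of filtering an acoustic reference, it adapts a sine/cosine weight pair per engine order:

```python
# Narrowband ANC sketch: a tachometer-style fundamental plus a per-harmonic
# sine/cosine weight pair adapted by LMS. Frequencies, amplitudes, and the
# adaptation gain are all illustrative placeholders.
import numpy as np

fs, dur = 2000, 4.0
t = np.arange(0, dur, 1 / fs)
f0 = 30.0                                     # fundamental from the tach signal, Hz
orders = (1, 2, 4)                            # engine orders to suppress
amps, phases = (1.0, 0.5, 0.3), (0.2, 1.1, 2.0)
noise = sum(a * np.sin(2 * np.pi * k * f0 * t + p)
            for k, a, p in zip(orders, amps, phases))

mu = 0.01
weights = {k: np.zeros(2) for k in orders}    # [sin, cos] weight per harmonic
residual = np.zeros_like(noise)
for i, ti in enumerate(t):
    refs = {k: np.array([np.sin(2 * np.pi * k * f0 * ti),
                         np.cos(2 * np.pi * k * f0 * ti)]) for k in orders}
    anti = sum(weights[k] @ refs[k] for k in orders)
    residual[i] = noise[i] - anti
    for k in orders:
        weights[k] += mu * residual[i] * refs[k]

print(f"tonal noise reduced by "
      f"{10 * np.log10(np.mean(residual[-fs:]**2) / np.mean(noise**2)):.1f} dB")
```

Because the reference is synthesized from the tach-derived frequency, only the engine-order tones are touched; speech and music at other frequencies pass through unchanged.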

Figure 3 A simple graphic depicts destructive interference as anti-noise combines with unwanted noise to reduce residual noise. Source: Author
Balancing promise and pitfalls: The realities of ANC
So, while active noise cancellation promises remarkable benefits—quieting the hum of engines, reducing fatigue during long journeys, and sharpening the clarity of music or speech—it also comes with challenges that beginners should appreciate. ANC systems excel at steady, low-frequency sounds but falter when faced with sudden or irregular noise.
Engineers must carefully tune parameters such as the damping ratio, which governs system stability, and the phase response, which determines how precisely the inverted signal cancels the original. Too much damping can make the system sluggish, while too little risks instability or even amplifying certain frequencies.
Latency in signal processing, microphone placement, and the physical limits of speakers all add complexity. Understanding these trade-offs is vital, because ANC is not about achieving perfect silence; it’s about learning how physics and signal processing collaborate to reduce chaos in real-world conditions.
Silence from chaos: The beginner’s journey into active noise cancellation
Active noise cancellation is one of those technologies that feels almost magical, yet it’s rooted in a principle simple enough for beginners to explore. Imagine sitting in a room filled with the steady hum of a fan or the drone of traffic outside and then hearing that noise dissolve because of a circuit you built yourself. That is the essence of ANC—capturing unwanted sound, inverting its waveform, and blending it back so the disturbance cancels itself out.
For those new to the field, the journey does not require professional acoustic labs or high-end industrial equipment; a pair of microphones, a set of speakers, and basic signal processing components are sufficient to begin. However, it is important to be clear: designing a functional ANC system from scratch is one of the most formidable challenges a hobbyist can undertake. It demands more than just coding skills; it requires a deep understanding of wave physics, precise timing, and acoustic dynamics.
The complexity of this task lies in the “latency budget”—the critical window of time the system has to process external noise and generate an inverse wave before it reaches the ear. If the processing takes too long, the waves will not align properly, failing to achieve destructive interference.
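A quick back-of-the-envelope calculation makes that budget tangible. Assuming, purely for illustration, that we want 20 dB of cancellation up to 1 kHz, the tolerable phase error sets the total allowable delay:

```python
# Back-of-the-envelope latency budget with assumed targets: 20 dB of
# cancellation up to 1 kHz, limited only by the phase error caused by delay.
import numpy as np

target_attenuation_db = 20          # leave no more than 10% of the noise amplitude
freq_hz = 1000                      # highest frequency we still want to cancel

# Two equal-amplitude tones offset by phase error phi leave a residual of 2*sin(phi/2).
phi = 2 * np.arcsin(0.5 * 10 ** (-target_attenuation_db / 20))
max_delay = phi / (2 * np.pi * freq_hz)
print(f"max phase error {np.degrees(phi):.1f} deg -> "
      f"max total delay {max_delay * 1e6:.0f} us at {freq_hz} Hz")
```

Roughly 16 µs is not much time for sampling, filtering, and playback, which is one reason feed-forward microphones are placed where they can pick up the noise before it reaches the ear.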
Fortunately, the barrier to entry has lowered. Modern, high-speed microcontrollers and dedicated DSP hardware now allow hobbyists to implement adaptive filters that were once exclusive to expensive, industrial-grade equipment. Chips from major players like Analog Devices and ams OSRAM bring ANC within reach of hobbyists, offering playful possibilities for makers eager to experiment with noise cancellation and advanced audio signal-processing projects.
As an introductory analog experiment, serious hobbyists can explore active noise cancellation by setting up a microphone to capture ambient noise, inverting that signal via an active phase-inverter, and summing it back into the audio path to create destructive interference. While this approach lacks the adaptive processing of digital systems, it provides a masterclass in phase alignment, group delay, and the iterative challenge of balancing amplitude in real-world signal paths.
Well, the first time you hear noise dissolve because of your own project, you realize it’s not just about electronics, it is about discovering how human ingenuity can carve silence out of chaos. That is the real inspiration of ANC for beginners: a hands-on path into the power of sound, silence, and imagination, now made more accessible than ever by today’s tools.
Ready to explore? Begin your first ANC experiment today and discover how you can turn noise into silence with your own hands.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Active Noise Cancellation
- The Basics and Acoustic Echo Cancellation
- Digital Active Noise Cancellation for Consumers Who Want It All
- Active noise control – a software-based approach for automobiles
- Active noise cancellation: Trends, concepts, and technical challenges
The post Active noise control: Engineering silence in audio systems appeared first on EDN.
Power Tips #151: Improving efficiency in 48V-input multiphase buck converters with GaN

Step-down buck converters used in 48V-to-5V power supply designs are becoming increasingly common in automotive and industrial applications, especially in advanced driver assistance systems, in-vehicle infotainment, and robotics. While synchronous buck topologies achieve high efficiency, they sometimes fall short of expected performance. In some cases, switching behavior, controller bias power, and thermal performance can create limiting losses, resulting in a decrease in efficiency.
Figure 1 shows the efficiency of Texas Instruments’ 48 VIN, 960 W four-phase buck converter with integrated GaN reference design (PMP23595), with the output voltage set to 5 V using forced pulse-width modulation operation without cooling.
Figure 1 Efficiency of 48 VIN to 5 VOUT at a 400 kHz switching frequency. Source: Texas Instruments
The efficiency curve in Figure 1 can meet the specifications of most 48V-to-5V power supply designs, but could fall just below the intended target for others. Rather than changing topology or adding complexity, it’s possible to make some practical adjustments within a standard buck converter to boost efficiency further.
Figure 2 shows the efficiency curves for the 48V-to-5V buck converter under several test configurations, including added thermal management, switching frequency adjustment, and external bias operation. These configurations were selected to isolate the effects of each adjustment and indicate that different loss mechanisms dominate depending on the operating point. Let’s look at each adjustment in greater detail.
Figure 2 Efficiency of 48VIN to 5VOUT with multiple adjustments. Source: Texas Instruments
Adjustment No. 1: Thermal performance
Adding a cooling system, in this case a heat sink, produced a negligible improvement at a low output current but resulted in a clear improvement above 30 A.
At a low output current, the total power dissipation remains relatively small, and device temperatures remain closer to ambient. Thus, reducing thermal resistance provides little effect.
At higher output current, conduction losses increase with IOUT2, causing the field-effect transistor (FET) junction temperature and inductor temperature to rise. As temperature increases, the FET drain-to-source on-resistance (RDS(on)) and inductor copper resistance increase, further increasing conduction losses. Incorporating a heat sink or some form of cooling reduces this rise in junction temperature, directly lowering temperature-dependent resistances. Another result is a measurable reduction in conduction losses, which appear as improved efficiency at high currents. At a high current – 80 A in this scenario – the improvement reached 0.8%.
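The arithmetic behind that observation is straightforward. The sketch below uses illustrative values (a hypothetical 2-mΩ FET with a typical positive RDS(on) temperature coefficient), not the reference design's actual devices:

```python
# Illustrative-only numbers: why cooling helps mainly at high output current.
# RDS(on) uses a simple linear temperature-coefficient model, not vendor data.

def conduction_loss(i_out, rds_on_25c=2e-3, tc=0.004, tj=25):
    """Per-FET conduction loss with a linear RDS(on) tempco model."""
    rds_on = rds_on_25c * (1 + tc * (tj - 25))
    return i_out ** 2 * rds_on

for i_out in (10, 40, 80):
    hot = conduction_loss(i_out, tj=125)      # without added cooling (assumed Tj)
    cooled = conduction_loss(i_out, tj=85)    # with a heat sink (assumed Tj)
    print(f"{i_out:>2} A: {hot:.2f} W at 125 C vs {cooled:.2f} W at 85 C "
          f"({hot - cooled:.2f} W saved)")
```

With these assumed values, the savings at 10 A are in the tens of milliwatts, while at 80 A they reach a couple of watts, which mirrors the trend seen in the measurements.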
Adjustment No. 2: Switching frequency
Reducing the switching frequency from 400 kHz to 250 kHz while ensuring that the inductance value was still suitable improved efficiency by approximately 0.5% through the mid-current range and 1% in the high-current range. However, decreasing the switching frequency too much with the same inductor value can result in higher core losses if you don’t manage the ripple current correctly.
This improvement comes from reduced switching-related losses, such as FET turn-on and turn-off losses, gate-drive losses, and internal controller switching losses. At a 48-V input, these losses scale quickly with both current and switching frequency.
At light loads, reducing the switching frequency produces smaller efficiency improvements, suggesting that fixed losses such as quiescent current or inductor core loss dominate in this region and limit the overall impact of this adjustment.
Adjustment No. 3: Controller bias power
In a forced pulse-width modulation configuration, supplying the controller bias from an external 5-V source improves efficiency by approximately 0.5% in the light- to mid-current range.
Deriving bias from VOUT remains a viable option if the output voltage is not much higher (such as 24 V and above) or much lower (such as 3 V and below).
When deriving bias power internally from the output rail, a small portion of the converter’s output power operates the controller. At light loads, this overhead represents a slightly larger fraction of the total output power.
At higher output currents, the conduction losses in the FETs and inductor begin to dominate. In this region, the controller bias power becomes such a small fraction of total losses that it no longer produces a measurable efficiency benefit. As a result, the externally biased efficiency curve converges with the internally biased efficiency curve.
Adjustment No. 4: Inductor optimization
The inductor can play a larger role in efficiency than its direct current resistance (DCR) alone suggests. While copper losses depend on DCR and scale with the output current, core losses depend strongly on ripple current and switching frequency.
If the ripple current is high, core losses can become significant. This is especially common with powdered iron core material, which can have high core losses if you don’t account for the ripple current.
Increasing the inductance reduces ripple current and core losses but may increase DCR. Conversely, using a very low DCR inductor while having excessive ripple current can increase core losses to the point where it offsets the efficiency boost. The inductor choice balances DCR and ripple current such that neither copper nor core losses dominate.
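That trade-off can be roughed out numerically. The sketch below uses placeholder component values and a deliberately simplified, Steinmetz-style core-loss term; a real design should use the vendor's material data, but the balance it illustrates is the point:

```python
# Placeholder numbers for the ripple-vs-DCR trade-off. The core-loss term is a
# deliberately crude Steinmetz-style stand-in; use real material data in practice.

def buck_ripple(vin, vout, fsw, L):
    """Peak-to-peak inductor ripple current for a buck converter."""
    duty = vout / vin
    return (vin - vout) * duty / (L * fsw)

def inductor_losses(i_out, dcr, ripple, k_core=0.05, beta=2.0):
    copper = (i_out ** 2 + ripple ** 2 / 12) * dcr   # RMS-based copper loss
    core = k_core * ripple ** beta                   # illustrative core-loss model
    return copper, core

for L, dcr in ((1.0e-6, 0.4e-3), (2.5e-6, 0.8e-3)):   # low-DCR vs. higher-L options
    ripple = buck_ripple(vin=48, vout=5, fsw=400e3, L=L)
    cu, core = inductor_losses(i_out=20, dcr=dcr, ripple=ripple)   # per-phase current
    print(f"L={L * 1e6:.1f} uH, DCR={dcr * 1e3:.1f} mOhm: ripple {ripple:.1f} App, "
          f"copper {cu:.2f} W, core {core:.2f} W")
```

With these made-up numbers, the "better" low-DCR inductor loses far more in its core than it saves in its copper, which is exactly the trap the paragraph above describes.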
When looking to improve converter efficiency, a useful first step is to identify which loss mechanism dominates the operating region of interest. Based on what we have seen here with this synchronous buck converter, you can evaluate it quickly:
- If light-load efficiency is low, examine the switching frequency and internal bias losses.
- If efficiency is low at high current, focus on conduction losses and thermal management.
- If the losses appear higher than expected across the full current range, review the inductor ripple current and core material.
Once you identify the dominant loss mechanism, minor design adjustments can often lead to measurable efficiency gains.
The high-efficiency system in this exercise used the TI reference design that I mentioned earlier, which includes the LMG708B0 synchronous step-down converter with integrated GaN configured to a 5-V output with a reduced inductance of 2.5 µH.
References
- Jacob, Mathew. “Select inductors for buck converters to get optimum efficiency and reliability.” Texas Instruments Analog Design Journal article, literature No. SLYT775, 3Q2019.

Matthew Bowers is a systems engineer in TI’s Power Design Services team, focused on developing power solutions for automotive applications. Matthew received his bachelor’s degree in electrical engineering from Texas Tech University in 2023.
Related Content
- How to design a digital-controlled PFC, Part 2
- Power Tips # 141: Tips and tricks for achieving wide operating ranges with LLC resonant converters
- Power Tips #134: Don’t switch the hard way; achieve ZVS with a PWM full bridge
- Power Tips #127: Using advanced control methods to increase the power density of GaN-based PFC
The post Power Tips #151: Improving efficiency in 48V-input multiphase buck converters with GaN appeared first on EDN.
What does Arm’s own chip stand for?

Arm is now a chip vendor—what does it mean for the semiconductor industry? EE Times’ Nitin Dahad was at the event in San Francisco, California, where the British IP giant unveiled its first chip, an AGI CPU for data centers. He reports on what it means for the company, now increasingly dubbed Arm 2.0, and how this launch will impact its standing in the semiconductor industry. He also explains the delicate balancing act that Arm will have to play moving forward.
Read the full article at EDN’s sister publication, EE Times.
Related Content
- Arm Leaps Into TinyML With New Cores
- Arm Brings v9 to IoT, GenAI to Edge Devices
- How Arm Total Design is built around 5 key building blocks
- Arm’s Chiplet System Architecture eyes ecosystem sweet spot
- Cache coherent interconnect IP pre-validated for Armv9 processors
The post What does Arm’s own chip stand for? appeared first on EDN.
Overcoming interconnect obstacles with co-packaged optics (CPO)

Over the last few years, there has been growing interest across the global semiconductor packaging industry in a new approach. Co-packaged optics (CPO) involves integrating optical fibers, used for data transmission, directly onto the same package or photonic IC die as semiconductor chips. Traditionally, semiconductor packaging has used copper interconnects, but these can consume large amounts of power and suffer signal degradation at high frequencies when the distance extends beyond a couple of meters.
With CPO, the optical components are integrated directly into a package, and the long copper trace between the switch and the optical module is replaced with short, high-integrity connections. Optical signaling uses far less power at high data rates than electrical signaling. As CPO reduces the distance between optical components and the semiconductor dice, this lowers latency, improves high-speed signal integrity, and accelerates data transfer.
All of these are fundamental for the next generation of AI devices for high-performance computing (HPC) inside data center systems. Nevertheless, there are obstacles that need to be overcome with CPO and when designing photonic packages, especially for integrated photonic circuits or photonic chips. This is why advances in photonic package design are coming to the forefront.
Overcoming CPO obstacles
When co-packaging photonics with electronics, there can be signal integrity issues. Electrical crosstalk must be reduced to improve signal quality. Using short interconnects and low-parasitic layouts are the most appropriate tactics when used alongside co-design tools for optical optimization. Signal integrity can be ensured without requiring complex routing or more space, as optical interconnects can support multi-terabit-per-second data rates over long distances with only minor signal loss.
Mounting a large photonic IC die onto a laminate or organic substrate can be problematic. Due to the coefficient of thermal expansion (CTE) mismatch between the substrate and the photonic IC die, non-negligible die warpage may occur. This warpage can significantly degrade optical signal performance in the photonic IC waveguides during data transmission, leading to substantial reductions in optical signal power and quality.
In addition, excessive warpage may introduce mechanical stress in the photonic IC die, altering its material properties and further impacting optical performance. While using a ceramic substrate could mitigate these issues, it’s more costly and is not widely adopted today.
Dealing with temperature variations can be a concern with photonic devices, but efficient thermal management and thorough thermal design can help to improve performance and reliability. Integrating photonics with electronics may require thermoelectric coolers (TECs) and heat sinks along with smart thermal simulations throughout the design process.
Sub-micron alignment is also a complex technical task. Optical misalignment can lead to significant insertion losses, as well as disrupt device performance. Passive alignment techniques that rely on etched features or alignment markers may yield lower accuracy, but they are the lowest-cost option. Active alignment, using real-time optical feedback, results in better performance and efficiency, though it’s far more complex and costly.
Addressing challenges when testing optical components involves using built-in test waveguides, automated optical probing systems, and standardized test procedures during and after packaging. Integrating optical and electrical components into a single package not only makes the manufacturing process more complicated but also increases the associated risks and costs due to the different assembly phases. It’s possible to cut through the complexity and improve yields by using standardized processes for CPO assembly.
The future of CPO and photonic package design
As a result of the growing interest in CPO and photonic packaging, there have been advances in photonic package design. CPO enables faster data transmission and improved power-efficiency when compared to the conventional copper-based interconnects approach. It has many advantages, including high-speed communication and lower power consumption, but there are also concerns related to signal integrity, thermal management, optical alignment, and costs.
Advances in photonic package design can overcome these obstacles and help electronic design engineers create new architectures that would not be viable with traditional semiconductor packaging. As the semiconductor industry continues to rapidly evolve, with more complex devices requiring high-performance, compact and power-efficient chips, CPO with advanced photonic package design will become increasingly important.
Dr Larry Zu is CEO of Sarcina Technology.
Special Section: Chiplets Design
- What the special section on chiplets design has to offer
- Chiplet innovation isn’t waiting for perfect standards
- Scoping out the chiplet-based design flow
- Demystifying 3D ICs: A practical framework for heterogeneous integration
- Chiplets: 8 best practices for engineering multi-die designs
The post Overcoming interconnect obstacles with co-packaged optics (CPO) appeared first on EDN.
The Tapo Hub: TP-Link joins the low-bandwidth, long-range RF club

Leveraging low-power wireless connectivity isn’t proprietary to a single smart-home technology and product supplier, even if each company’s implementation of the concept may be.
Back in 2019, when I first conceptually explored, then tore down, and finally implemented personally a Blink outdoor security system (still operational to this very day):
- Blink: Security cameras with a power- and bandwidth-stingy uplink
- Teardown: Security camera network module
- Teardown: Blink XT security camera
- Blink: Security camera system installation and impressions
The aspect of the architecture that intrigued me the most was the cameras’ battery-powered nature. How on earth were they spec’d to run for up to two years (versus nearly five in real life) solely on two lithium AA cells while still regularly remaining user-accessible over Wi-Fi?
The answer, as those of you who’ve already read my writeups (and remember them) know, was a two-fold response:
- The entire system wasn’t battery-powered, and
- The communications infrastructure wasn’t solely Wi-Fi
In-between the cameras (back then, I was apparently using quarters for size comparison purposes, not pennies):

and the Internet is a Sync Module:

Requoting my original piece in the series:
A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.
The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).
That the battery-powered network nodes (cameras in this case) are battery-based is convenient from a location-flexibility standpoint, not necessitating running wired-power feeds to them, just as the fact that they’re wireless precludes needing to run Cat5 spans to them. And in some cases, it also enables ongoing implementation functionality (at least to a degree) even if premises power goes down.
Discerning degree of dryness
Fast forward to the present. My wife and I recently bought a couple of ionizing humidifiers for the house, one of them “smart” (believe it or not; stay tuned for coverage to come):

The (upstairs) thermostats for our (downstairs) furnaces, one for each horizontal half of the house, supposedly also report residence humidity, but I’ve never believed the data they feed me; they perpetually say that it’s “<15%”. I could have just bought a cheap hygrometer (standalone humidity sensor) for $5 or so; this one’s even solar-rechargeable:

But when I came across one, the T315, part of TP-Link’s Tapo smart home product suite, I knew I had to have it:

It was less than $25 at Amazon. It leveraged Kindle-reminiscent display tech. And I already had several other Tapo devices active in the home. How hard could it be to add one more?
Ingenuity redux
Not hard, it turned out, but not quite as straightforward as I’d initially envisioned. The Tapo T315 is battery-powered, just like those Blink XT cameras. And equally similarly (can you already guess where I’m going here?), just as with TP-Link’s other smart sensors—buttons (doorbells, etc.), door and window contacts, presence, motion, water leak (hold that last thought), etc.—this time, in-between it and my router, there’s therefore a required (drum roll) smart hub!
Since my data payload size was modest in this case, I went with the entry-level Tapo H100, which Amazon also sells for sub-$25:

And I quote (sound familiar?):
The Tapo Hub is the heart of your Tapo smart home, connecting devices like smart sensors, switches and buttons, using an ultra-low power wireless protocol. This technology helps battery-powered devices last up to 10 times longer.
The company also sells more advanced (but still economical) hubs that further comprehend battery-powered Tapo security cameras (including, I’m assuming, transitioning them to Wi-Fi for active broadcast streaming, and also supporting local recording storage); the mid-range microSD card-based H200 and high-end H500, the latter shipping with 16 GBytes of eMMC flash memory and (believe it or not) further expandable via an optional 2.5” SATA HDD or SSD.
Here’s the packaging for the Tapo H100 smart hub, which I needed to activate first:






And here’s what was inside, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, along with a sliver of literature which I didn’t bother photographing:



Nitty-gritty details:

Right-side configuration and reset switch:



After plugging it in to a power strip-housed AC outlet, setup was multi-step but straightforward:






















Success!

Now for the Tapo T315 hygrometer. Packaging first, again:






Setup, including connection to the now-active hub several rooms over, was once again easy:









And there we are! Sub-15% humidity…pfft…






Feeling pretty good about myself, I decided to push my luck once more. When the plumber replaced our geriatric (but thankfully not yet leaking) water heater downstairs in the furnace room a few years ago, he threw in a standalone leak detection sensor (a valuable albeit often overlooked addition to any residence) to reside on the floor next to it:






Note, however, this bit in the operating instructions:
Replacing the battery: Replace the battery if the alarm has operated for an extended period of time, or if the battery expiration date is approaching. You may want to mark the battery expiration date on a piece of tape and attach it to the alarm when you install the battery.
Let’s be real. I know myself well enough to realize that once I set it, I’m going to forget it. I was admittedly surprised to learn, after replacing it (more accurately, moving it; it now sits below the whole-house water filter enclosure in a different room) that unlike my carbon monoxide detectors at their end-of-life dates, it didn’t at least chirp when its battery was getting low. That said, we’d only hear the sound if we were there at the time, and assuming it was loud enough to capture our attention. And further to that point, more generally, if we were away when a leak started, we’d be blissfully ignorant of what was going on…at least at first, until we returned home, that is.
Enter the $19.99 (on Amazon as I write this) TP-Link Tapo T300 Smart Water Leak Sensor:

Once again, box shots first:






Followed by what’s inside (minus, again, the also-provided piece of paperwork):






Yank the blue plastic strip to activate the factory-installed and user-replaceable two-battery connection:

Thereby auto-transitioning the sensor to setup mode:
Go through the brain-dead simple setup steps:














And voilà:


My mixed Kasa-plus-Tapo smart home topology is functionally rock-solid so far, including the hub-based portion. Buh-bye, Belkin Wemo…and maybe, someday, Blink, too. To be clear, Blink and TP-Link’s disparate ecosystems, coupled with the latter’s comparatively greater product type diversity, would be the sole long-term replacement motivation (specifically, mothballing my Blink cameras and replacing them with TP-Link equivalents).
My Blink gear also continues to work just fine, including no evidence whatsoever of any functionally degrading interference between its and TP-Link’s respective ultra-low power wireless links. That all said, I’ll undoubtedly further expand my TP-Link-sourced stuff in the future; stay tuned for more hands-on coverage. Speaking of which, I’ve also got a redundant Tapo H100 smart hub and T300 smart water leak sensor, both sitting on the shelf, queued up for teardown, along with a display-less sibling of the T315 hygrometer, the Tapo T310 Smart Temperature and Humidity Sensor ($17.99 at Amazon):

I hope you’re looking forward to those analyses as well. Until then, let me know what you think in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Blink: Security cameras with a power- and bandwidth-stingy uplink
- Teardown: Security camera network module
- Teardown: Blink XT security camera
- Blink: Security camera system installation and impressions
The post The Tapo Hub: TP-Link joins the low-bandwidth, long-range RF club appeared first on EDN.
The 6G clock ticking: Why silicon architecture for 2030 must start in 2026

The 6G transition is no longer a distant theoretical exercise; it’s a commercial inevitability driven by fundamental requirements for cellular standards to keep moving forward. 5G penetration has already surpassed 75% and is on a trajectory to reach 95% within a few years. We are witnessing an appreciation for continued call quality and data throughput improvements despite an explosion in mobile traffic.
However, the wireless ecosystem projects that even this capacity will soon overload due to accelerating AI content, the integration of satellite communications (SATCOM) into the cellular fold, and the rise of physical AI. 6G is the industry’s response to keep pace with that exponential growth in data communication demand.
The 2030 countdown: Why 2026 is the crucial starting line
To understand the urgency, one must look at the decadal cycle of cellular evolution. History shows it takes about five years to finalize a standard and fold its requirements into a functional ecosystem. While 6G is anticipated to take off commercially by 2030, the work-back schedule reveals a tight timeline for product builders. By 2029, hardware must be ready for compliance testing, meaning component technologies must be finalized by 2028.
Consequently, underlying embedded systems must be built in 2027, necessitating that architectural definitions start as early as 2026. As an example of what is going on in the industry, Qualcomm’s CEO recently hinted at the Snapdragon Summit that 6G-capable devices could appear as early as 2028 for trials, making the 2028 Olympics a perfect arena for tech demos.
Unlocking the “Golden Band”: FR3 and the business of spectrum
Beyond architectural shifts, 6G introduces the Frequency Range 3 (FR3) spectrum, spanning 7.125 GHz to 24.25 GHz. Often called the “Golden Band for 6G,” FR3 offers the perfect balance between the wide coverage of lower bands and the massive capacity of mmWave.
This spectrum is expected to be a major business driver, enabling the 10x higher data-rate targets (up to 200 Gbps) and supporting "massive MIMO evolution" to handle the projected 4x traffic growth by 2030 (exceeding 5.4 zettabytes, as indicated by the GSMA Intelligence report).
Sustainable networks
Sustainability is a core pillar of 6G, with network operators seeking to reduce OpEx, as 25% of it is driven by power demand. 6G moves from an “always-on” to a “smart-on” philosophy, aiming for a 30-50% increase in power efficiency. Key techniques include:
- Enhanced deep sleep modes: Enabling base stations to achieve near-zero power consumption when no active users are present, and reduction in periodic signaling (current 5G standard mandates high periodic signaling that in practice keeps a lot of the RF and power amplifier components active at all times).
- AI-driven beamforming: Using AI to direct signals precisely to users, reducing energy waste from broad, inefficient broadcasting.
- AI-driven resource management: Using AI at the higher protocol layers for effective radio resources management.
The AI-native revolution: Moving intelligence to the air interface
One of the most significant shifts in 6G is the move toward an AI-native air interface. Unlike 5G’s rigid mathematical models, 6G uses deep learning to dynamically adapt signal processing blocks. This enables “adaptive waveforms” that adjust modulation in real-time to environmental conditions.
It also facilitates integrated sensing and communication (ISAC), where RF reflections provide precise spatial awareness, allowing the network to proactively adjust beamforming based on user movement.
The coordination challenge: Managing two-sided AI
This transition introduces a complex challenge in how the transmitter (base station) and receiver (device) coordinate their intelligence. Unlike traditional algorithms, AI components must be synchronized through AI lifecycle management (LCM). The industry is weighing one-sided models (device-only optimization) against two-sided architectures (essential for tasks like CSI compression).
In two-sided designs, the device acts as a neural encoder and the base station as a decoder; these must be coordinated pairs to some extent. The level of coordination is still under study, as there are a few candidate schemes. Examples include fully matched neural-network pairs or, alternatively, networks that are independent at the NN architecture level but trained on the same dataset.
This raises critical questions at the protocol level: should the network use model ID-based selection (activating pre-loaded models), model transfer (pushing new neural weights over the air), or weights-only transfer?
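To make the two-sided split concrete, here's a loose NumPy illustration in which a linear, PCA-style encoder/decoder stands in for the neural pair. It is not a 3GPP scheme; it simply shows why both ends must be derived from shared training data before the compressed feedback means anything to the receiver:

```python
# Loose illustration of the two-sided split: the "device" runs an encoder that
# compresses a CSI vector, the "base station" runs the matching decoder. A
# linear (PCA-style) pair stands in for the coordinated neural networks; the
# dataset and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "CSI" samples: 64-dimensional channel vectors with low-rank structure.
latent = rng.standard_normal((1000, 8))
mixing = rng.standard_normal((8, 64))
csi_dataset = latent @ mixing

# "Training": both sides agree on a shared basis (here, via SVD of the dataset).
_, _, vt = np.linalg.svd(csi_dataset, full_matrices=False)
encoder = vt[:8].T          # 64 -> 8 compression matrix held by the device
decoder = vt[:8]            # 8 -> 64 reconstruction matrix held by the base station

csi = csi_dataset[0]
feedback = csi @ encoder            # what the device actually reports (8 values)
reconstructed = feedback @ decoder  # what the base station recovers

nmse = np.sum((csi - reconstructed) ** 2) / np.sum(csi ** 2)
print(f"feedback size: {feedback.size} of {csi.size} values, NMSE = {nmse:.2e}")
```

Swap in an encoder trained on a different dataset and the reconstruction degrades, which is the coordination problem that model-ID selection or over-the-air weight transfer is meant to solve.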
Programmable intelligence: Why DSPs are the preferred path
Because 3GPP specifications remain fluid, the need for flexibility through programmability has never been higher. Developing 6G on hard-wired logic is risky, as spec changes could render silicon obsolete. This is why digital signal processors (DSPs) are the preferred architecture. Modern DSPs are uniquely suited for the AI-native physical layer; they possess the massive number of MACs required for matrix operations and are highly efficient at the vector processing necessary for neural networks.
Leading technology vendors also offer dedicated AI instruction-set (ISA) extensions that accelerate NN activation functions. A fully programmable modem powered by an AI-native DSP is therefore a “safe bet,” allowing developers to adapt as the 6G specification settles while maintaining the performance needed to lead the market.
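For a rough sense of why MAC throughput dominates, the short Python sketch below counts the multiply-accumulate operations in a single fully connected layer, the kind of matrix-vector workload a modem DSP’s vector/MAC units must sustain. The layer sizes and the per-cycle MAC figure are arbitrary assumptions, for illustration only.

```python
# Illustrative only: MAC count for one dense layer y = W @ x + b. Each output
# element needs IN_DIM multiply-accumulates, so usable MACs/cycle directly
# bounds how fast such layers can run on a DSP.
import numpy as np

IN_DIM, OUT_DIM = 256, 128                      # assumed layer sizes
W = np.random.randn(OUT_DIM, IN_DIM).astype(np.float32)
b = np.zeros(OUT_DIM, dtype=np.float32)
x = np.random.randn(IN_DIM).astype(np.float32)

y = W @ x + b                                   # the matrix-vector product a DSP vectorizes
macs = OUT_DIM * IN_DIM                         # one MAC per weight

print(f"dense layer {IN_DIM} -> {OUT_DIM}: {macs} MACs per inference")
print(f"a hypothetical 512-MAC/cycle DSP needs ~{macs // 512} cycles for this layer")
```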
Elad Baram is director of product marketing for the Mobile Broadband Business Unit at Ceva.
Related Content
- Get ready for 6G
- 5G & 6G: Adoption, technologies and use cases
- 5G-Advanced to 6G: What’s next for wireless networks
- Making waves: Engineering a spectrum revolution for 6G
- The aspects of 6G that will matter to wireless design engineers
The post The 6G clock ticking: Why silicon architecture for 2030 must start in 2026 appeared first on EDN.
1MHz 555 VFC

For decades, I’ve had a fascination with voltage-to-frequency converters and the 555 analog timer chip, and therefore a double obsession with VFCs based on the 555. In fact, my first Design Ideas (DI) submission (in 1974) was for a 555 VFC. It was not only published but also selected as the best DI of the year. That was it, I was thenceforth hooked forever.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The simple 555 VFC design to be presented here, so far as I know and as unlikely as it sounds for so *ahem* “mature” a part as the 555, is novel. It runs with good linearity and accuracy at 1 MHz, with even faster operation possible. That’s 100x faster than that 1974 555 frequency converter.
I hope you’ll find its details interesting. Here’s how it works. The story begins with Figure 1.
Figure 1 Starred components are precision, including the +5 V supply, but something’s missing.
There’s nothing novel about the input current source comprising A1, Q1, and surrounding parts. It supplies 0 to 1 mA to the U1 current-to-frequency converter in response to its input voltage, as scaled and offset by R1 and R2. The values shown set a 0 to +5 V input span. R1 = 1.8M and R2 = 200k would make it -5 V to +5 V.
A capacitor added in parallel with R2 will provide extra noise rejection. But the inherent noise immunity of the VFC analog-to-digital conversion is good, so you probably won’t need it.
Moving further into the circuit is where things start to get weird, because the usual two resistors associated with 555 oscillators are missing. Also missing is the usual astable 555 1/3 V+ peak-to-peak voltage swing. This topology generates a 2/3 V+ peak-to-peak linear sawtooth waveform that resets, not to V+/3, but to zero. Unfortunately, while the sawtooth is nicely linear, the frequency versus Q1 current (Iq1) relationship is not, due to U1’s internal switching delay Td. Figure 2 shows how bad it is:
Frequency of oscillation (FOO) = 1/(VthC2/Iq1 + Td) = 1/(1.0 nC/Iq1 + Td)

Figure 2 Nonlinear red curve versus ideal black shows ~20% linearity error from LMC555 internal delays.
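The size of that error is easy to check numerically. The short Python sketch below simply re-evaluates the uncompensated FOO expression above, with VthC2 = 1.0 nC and the Td = 212 ns delay value quoted later in this article; it is arithmetic on the formula, not a simulation of the LMC555 itself.

```python
# Evaluate the uncompensated frequency equation FOO = 1/(Vth*C2/Iq1 + Td)
# using the article's values: Vth*C2 = 1.0 nC, Td = 212 ns.
Q = 1.0e-9      # Vth * C2 in coulombs (2/3 of 5 V across 300 pF)
TD = 212e-9     # LMC555 internal switching delay in seconds

def foo_uncompensated(iq1_amps):
    """Uncompensated oscillation frequency for a given control current."""
    return 1.0 / (Q / iq1_amps + TD)

for i_ma in (0.1, 0.5, 1.0):
    f = foo_uncompensated(i_ma * 1e-3)
    ideal = i_ma * 1e-3 / Q          # ideal linear response: 1 MHz at 1 mA
    err_pct = 100.0 * (f - ideal) / ideal
    print(f"Iq1 = {i_ma:>4.1f} mA: f = {f/1e3:7.1f} kHz  (error {err_pct:+.1f} %)")
```

At full scale the formula predicts roughly 825 kHz instead of 1 MHz, an error approaching 20%, consistent with the red curve in Figure 2.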
Luckily, as derived in another recent DI, “Improve 555 frequency linearity”…
…it’s an easy fix. It consists of a single resistor, R4, connected between the Dch (discharge) and Thr (threshold) pins. R4 is used to linearize the current-versus-frequency function by biasing the Thr pin upward by IcR4. That cuts short the duration of the positive-going timing ramp and thereby the sawtooth period by the same amount that the delays lengthen it: IcR4/(Ic/C2) = R4C2 = Td.
Thus, if R4 is chosen so that R4C2 = Td, as shown in Figure 3, nonlinearity compensation will be (at least theoretically) complete over the full range of control current. The frequency of oscillation (FOO) for this circuit is:
FOO = 1/(VthC2/Iq1 + Td – R4C2) = 1/(1.0 nC/Iq1 + 212 ns – 212 ns) = 1/(1.0 nC/Iq1) = Iq1 × 1 MHz/mA = 1 MHz × (+5 V – Vin)/+5 V

Figure 3 R4C2 = Td = 212 ns: nonlinearity compensation for 555 internal delays.
Now FOO will linearly track Iq1 and therefore Vin as shown in Figure 4.

Figure 4 Nonlinearity disappears if R4 = Td/C2 = 212 ns/300 pF = 706 ohms.
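The same arithmetic confirms the fix. The sketch below repeats the calculation with the R4C2 term included, choosing R4 = Td/C2 = 212 ns / 300 pF (the ~706-ohm value above) so the delay term cancels for every control current. Again, this only reworks the article’s equations; it is not a model of the actual part.

```python
# Re-evaluate the compensated equation FOO = 1/(Vth*C2/Iq1 + Td - R4*C2),
# with R4 chosen so that R4*C2 = Td as described in the article.
Q = 1.0e-9        # Vth * C2 in coulombs
TD = 212e-9       # internal switching delay in seconds
C2 = 300e-12      # timing capacitor in farads
R4 = TD / C2      # ~706.7 ohms, so R4*C2 cancels Td exactly

def foo_compensated(iq1_amps):
    """Compensated oscillation frequency: the delay term drops out."""
    return 1.0 / (Q / iq1_amps + TD - R4 * C2)

print(f"R4 = {R4:.1f} ohms")
for i_ma in (0.1, 0.5, 1.0):
    f = foo_compensated(i_ma * 1e-3)
    ideal = i_ma * 1e-3 / Q
    print(f"Iq1 = {i_ma:>4.1f} mA: f = {f/1e3:7.1f} kHz  (ideal {ideal/1e3:7.1f} kHz)")
```

With the cancellation in place, the computed frequency tracks Iq1, and therefore Vin, linearly, matching Figure 4.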
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included the best Design Idea of the year in 1974 and 2001.
Related Content
- Improve 555 frequency linearity.
- Tune 555 frequency over 4 decades
- 555 VCO revisited
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
- Gated 555 astable hits the ground running
The post 1MHz 555 VFC appeared first on EDN.