Feed aggregator
Latest issue of Semiconductor Today now available
BugBuster – Open-source, open-hardware all-in-one debug & programming tool built on ESP32-S3
Hey everyone, I’ve been working on BugBuster, an open-source/open-hardware debug and programming instrument designed to replace a pile of bench equipment with a single USB-C connection. The goal: give you a device that can program, debug, and manage power and peripherals remotely, so multiple users can share access to physical hardware over the network.
Repo: https://github.com/lollokara/bugbuster
What it is
At its core it’s a software-configurable I/O tool built around the Analog Devices AD74416H and an ESP32-S3. All 12 smart I/O pins are dynamically programmable; you assign their function in software at runtime.
I/O specs:
∙ Logic I/O: 1.8 V to 5 V compatible
∙ Analog input: -12 V to +12 V, 24-bit ADC
∙ Analog output: 0-12 V or 0-25 mA (source and sink)
∙ 4 channels can be connected to the high-voltage ADC/DAC simultaneously
∙ The ESP32-S3 exposes a second USB CDC port: map a serial bridge to any of the 12 I/O pins directly from the desktop app
Measurement modes per channel: voltage input/output, current input/output (4-20 mA loop), RTD (2/3/4-wire), digital I/O, waveform generation (sine, square, triangle, sawtooth up to 100 Hz), and real-time scope streaming. A 32-switch MUX matrix (4× ADGS2414D) lets you route signals flexibly between channels.
All onboard supplies are fully programmable:
∙ USB-C PD negotiation via HUSB238 (5-20 V input, up to 20 V @ 3 A = 60 W)
∙ Two adjustable voltage domains (3-15 V each, DS4424 IDAC on LTM8063 feedback)
∙ One programmable logic voltage domain
∙ Each output port is e-fuse protected (TPS1641x); current limits and enables are set in software
∙ All calibrated with NVS-persisted curves
This means you can power your DUT, set its logic level, and adjust supply voltages programmatically, all remotely.
OpenOCD HAT (coming)
An expansion HAT based on the RP2040 and Renesas HVPAK will add:
∙ OpenOCD-based JTAG/SWD programming and debugging of targets
∙ Additional high-voltage functions from the HVPAK
∙ More I/O expansion
I’m ordering PCBs next week. Everything is open hardware and open software; on the software side, the structure is:
∙ Firmware: ESP-IDF + PlatformIO, FreeRTOS dual-core (ADC polling, DAC, fault monitor, waveform gen, and WiFi all run concurrently)
∙ Desktop app: Tauri v2 backend (Rust) + Leptos 0.7 frontend (WASM), 17 tabs covering every hardware function
∙ Protocol: custom binary BBP over USB CDC, with COBS framing, CRC-16, and < 1 ms round-trip
∙ Hardware: Altium Designer; schematics and layout in the repo
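The framing layer described above (COBS plus CRC-16 over USB CDC) can be sketched in a few lines. This is a generic illustration, not the actual BBP implementation from the repo: the frame layout, the CRC variant (CRC-16/CCITT-FALSE here), and the function names are all assumptions.

```python
def cobs_encode(data: bytes) -> bytes:
    """Consistent Overhead Byte Stuffing: remove all 0x00 bytes so that
    0x00 can be used as an unambiguous frame delimiter on the wire."""
    out = bytearray([0])  # placeholder for the first code byte
    code_idx, code = 0, 1
    for b in data:
        if b == 0:
            out[code_idx] = code          # finalize current block
            code_idx = len(out)
            out.append(0)                 # new placeholder
            code = 1
        else:
            out.append(b)
            code += 1
            if code == 255:               # max block length reached
                out[code_idx] = code
                code_idx = len(out)
                out.append(0)
                code = 1
    out[code_idx] = code
    return bytes(out)

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF) -- assumed variant."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def build_frame(payload: bytes) -> bytes:
    """Append CRC-16, COBS-encode, terminate with the 0x00 delimiter."""
    crc = crc16_ccitt(payload)
    return cobs_encode(payload + crc.to_bytes(2, "big")) + b"\x00"

# Hypothetical command bytes, just to show the framing round-trip shape.
print(build_frame(b"\x10\x00\x01").hex(" "))
```

Because COBS guarantees no interior zeros, the receiver can resynchronize on any 0x00 byte after a dropped character, which is what makes this framing attractive for a shared USB CDC link.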
AOI showcases 25dBm ultra-high-power ELSFP for next-gen AI infrastructure
APEC 2026 showcases advances in power electronics

The annual Applied Power Electronics Conference & Exposition (APEC 2026) showcases hundreds of companies that exhibit their latest component and technology advances for system power designers across a wide range of industries. Many of these devices deliver on growing requirements for higher efficiency and higher power density, along with simplifying design to reduce complexity and accelerate time to market.
Power device manufacturers claim major technology advances, including topologies and packaging, for applications ranging from AI data centers and humanoid robotics to fast-charging mobile devices. Still a big area of development is wide-bandgap (WBG) semiconductors, including gallium nitride (GaN) and silicon carbide (SiC) power devices, addressing the need for simpler designs and more flexibility.
Here is a selection of power devices featured at APEC 2026 that target improvements in efficiency and power density, along with simplifying design and saving board space. These are used in a wide range of applications, including AI data centers, appliances, automotive, e-mobility, industrial automation, and robotics.
Breakthroughs and advances
Offering an alternative to resonant power designs, Power Integrations (PI) announced a topology that it calls a breakthrough for flyback power supply design by extending the power range of flyback converters to 440 W. The TOPSwitchGaN flyback IC family combines the company’s PowiGaN technology with its TOPSwitch IC architecture, reducing complexity and improving manufacturability. It can also eliminate heat sinks in many cases, according to PI, and shorten design time and lower total system cost.
TOPSwitchGaN ICs feature 92% efficiency across the load range (from 10% to 100% load) and exceed European Energy-related Products (ErP) regulations with less than 50 mW of power consumption in standby and off modes, all without the need for synchronous rectification, PI said. They are suited for high-end appliances, e-bike chargers, and industrial applications.
PowiGaN switches deliver a much lower on-state resistance (RDS(on)) than silicon, which reduces conduction losses, dramatically increasing the power capability of flyback converters, PI said. Thanks to the integration of the 800-V PowiGaN switches, the devices can operate at switching frequencies of up to 150 kHz to minimize transformer size. Other specs include no-load consumption at below 50 mW at 230 VAC, including line sense, and up to 210 mW of output power for 300-mW input at 230 VAC to run housekeeping functions when units are in standby mode.
For ultra-slim designs, TOPSwitchGaN ICs are available in low-profile eSOP-12 surface-mount packaging that enables 135 W (85–265 VAC) without a heat sink for applications such as appliances. The devices are also offered in an eSIP-7 package, whose vertical orientation minimizes the printed-circuit-board (PCB) footprint while providing a thermal impedance equivalent to a TO-220-packaged part. With a metal heat sink mounted, this package achieves the extended power range for applications including power tools, e-bikes, and garage openers.
Reference designs include the DER-1079 (a 60-W, wide-range isolated flyback power supply unit [PSU] for appliances), the DER-1019 (a 356-W high-line [89 V/4 A] isolated flyback industrial PSU), and the RDR-1018 e-bike charger kit (a 168-W wide-range isolated flyback design).
Power Integrations’ TOPSwitchGaN flyback ICs (Source: Power Integrations)
pSemi, a Murata company, also claimed groundbreaking power products, targeting high-energy-density applications. At APEC 2026, pSemi unveiled the PE26100 multilevel buck converter for fast-charging mobile devices and the PE25304 advanced integrated charge pump switching-capacitor power module to enable high-efficiency power conversion in humanoid robotic, dexterous-hand power applications.
The PE26100 announcement expands the application focus of the company’s high‑performance multilevel buck converter, which is now optimized for main, direct battery charging in next‑generation smartphones, tablets, and other compact mobile devices. It delivers fast‑charging capability, high output current of up to 6 A, and high thermal performance in an ultra‑thin form factor for space‑constrained consumer electronics.
pSemi said the architecture and performance characteristics make it uniquely suited for today’s transition toward high‑power USB Power Delivery (USB‑PD) and programmable power supply (PPS) fast‑charging ecosystems. Supporting 4.5-V to 18.5-V input, the device enables four‑level buck mode for higher USB‑PD voltage inputs and three‑level buck mode for mid‑ to low input voltages. For USB PPS applications, the PE26100 can also operate as a fixed‑ratio, capacitor‑divider charge pump, offering divider ratios of 2:1 and 3:1 depending on programmed input voltage.
The PE25304 is an advanced integrated charge pump switching‑capacitor power module for high efficiency and performance in space‑constrained, high‑power applications. Designed to divide input voltage by four, the PE25304 is purpose‑built for 48-V input architectures, with a wide operating range from 20 V to 60 V, making it suited for dexterous-hand robotics and mechatronic systems. It can also be used in drones, medical devices, embedded AI modules, and industrial automation systems.
The module is housed in an ultra-low-profile package (2 mm) and can deliver up to 72 W of output power. It also features a 97% conversion efficiency, reducing power loss and thermal buildup.
Texas Instruments (TI) unveiled several isolated power modules for applications from data centers to electric vehicles that require improvements in power density, efficiency, and safety. In particular, the UCC34141-Q1 and UCC33420 isolated power modules leverage TI’s IsoShield technology, a proprietary multichip packaging solution that delivers up to 3× higher power density than discrete solutions in isolated power designs and shrinks solution size by as much as 70%, reducing area, cost, and weight.
Traditionally, power designers use power modules to save board space and simplify design. Advancements in packaging technology such as the IsoShield enable higher performance and efficiency gains. The IsoShield copackages a high-performance planar transformer and an isolated power stage, offering functional, basic, and reinforced isolation capabilities.
It enables a distributed power architecture, helping manufacturers meet functional safety requirements by avoiding single-point failures, TI said. In addition to shrinking the solution size, it delivers up to 2 W of power for automotive, industrial, and data center applications that require reinforced isolation. For example, the increased power density helps deliver lighter and more efficient EVs that extend range and improve performance.
TI also announced other advancements in data centers, automotive, humanoid robots, sustainable energy, and USB Type-C applications, including an 800-V to 6-V DC/DC power distribution board. Pre-production and production quantities of the isolated power modules, along with evaluation modules, reference designs, and simulation models, are available now on TI.com.
TI’s UCC34141-Q1 and UCC33420 isolated power modules (Source: Texas Instruments Inc.)
MaxLinear Inc. unveiled its modular intelligent power management solution for next-generation broadband system-on-chip (SoC) designs. The platform includes the MxL7080 power management controller, MxL76500 smart regulating stage (SRS) modules, and high-efficiency MxL76125 22-V/15-A synchronous buck regulator. It delivers a thermally optimized power architecture for high‑bandwidth, multi-service access platforms, including cable, fiber, and fixed wireless access gateways; Ethernet routers; and customer premise equipment.
The platform addresses the need for scalable, multi-rail power management architectures capable of supporting higher power density, tighter voltage tolerances, and improved thermal performance as SoC designs get more complex.
The MxL7080 power management controller, paired with four MxL76500 SRS modules, provides a reference‑based, multiphase power architecture for high‑performance SoCs. This architecture provides improved thermal distribution to reduce localized hotspots, a simplified layout and routing flexibility, and precise multi‑rail sequencing with dynamic voltage scaling support.
The MxL76125 buck regulator, housed in a 4 × 5-mm QFN package, enhances point‑of‑load (PoL) flexibility for complex broadband and access platforms. It offers a wide 5-V to 22-V input voltage range supporting 5-V, 12-V, and 20-V system rails and high efficiency up to 96%, with light‑load PFM mode to reduce idle power. Other features include a fast transient response using COT‑based control with ceramic output capacitors and integrated protection including OCP, OVP, OTP, UVLO, and short‑circuit protection.
The complete (MxL7080 + MxL76500 + MxL76125) power solution is optimized for multi-access gateway platforms. These devices are available now in RoHS-compliant, green/halogen-free, industry-standard packages. Evaluation boards and samples are available at the MxL7080, MxL76500, and MxL76125 product pages.
MaxLinear’s intelligent power management solution (Source: MaxLinear Inc.)
SiC and GaN power solutions
Microchip Technology Inc. has launched its BZPACK mSiC power modules, offering high flexibility with a range of topologies, which include half-bridge, full-bridge, three-phase, and PIM/CIB configurations. This flexibility allows power designers to optimize performance, cost, and system architecture.
Targeting demanding power-conversion environments, the BZPACK mSiC power modules pass high-voltage, high-humidity, high-temperature reverse-bias (HV‑H3TRB) testing beyond the industry standard of 1,000 hours, making them suited for industrial and renewable-energy applications. The modules provide a case material with a Comparative Tracking Index (CTI) of 600 V, stable RDS(on) across temperature ranges, and substrate options in aluminum oxide or aluminum nitride.
The BZPACK power modules are also designed to reduce system complexity and enable faster assembly by offering a baseplate-less design with press-fit, solderless terminals and an optional pre-applied thermal interface material.
The power modules leverage Microchip’s advanced mSiC technology and performance of its MB and MC mSiC MOSFET families for industrial and automotive applications, with AEC-Q101-qualified options available. These devices support common gate-source voltages (VGS ≥ 15 V) and are available in industry-standard packages.
The MC family integrates a gate resistor, which offers benefits in improved switching control, low switching energy, and improved stability in multi-die module configurations. Package options include TO-247-4 Notch and die form (waffle pack).
Microchip offers a range of SiC diodes, MOSFETs, and gate drivers. The BZPACK mSiC power modules are available in production quantities.
Microchip’s BZPACK mSiC modules (Source: Microchip Technology Inc.)
SemiQ Inc. launched its QSiC Dual3 family of 1,200-V half-bridge MOSFET modules for motor drives in data center cooling systems, grid converters in energy storage systems, and industrial drives. These are designed to replace IGBT modules with minimal redesign, and all MOSFET die are screened using wafer-level gate-oxide burn-in tests exceeding 1,450 V.
Enabling power converters with high conversion efficiency and power density, the series of six devices includes an optional parallel Schottky barrier diode (SBD) to further reduce switching losses in high-temperature environments. Two of the family’s six devices have an RDS(on) of 1 mΩ and a power density of 240 W/in.3 in a 62 × 152-mm package. The modules also feature a low junction-to-case thermal resistance and enable a simplified system design with smaller, lighter heat sinks.
The devices include the GCMX1P0B120S4B1, GCMX1P4B120S4B1, GCMX2P0B120S4B1, GCMS1P0B120S4B1 (SBD), GCMS1P4B120S4B1 (SBD), and GCMS2P0B120S4B1 (SBD). Datasheets for the QSiC Dual3 modules can be downloaded here.
SemiQ’s QSiC Dual3 modules (Source: SemiQ Inc.)
In the GaN space, Efficient Power Conversion (EPC) introduced the EPC91121 motor drive inverter evaluation board, built around its Gen 7 EPC2366 40-V eGaN power transistor. The board is designed for fast prototyping and evaluation, integrating the key functions required for a motor drive inverter, including gate drivers, housekeeping power supplies, voltage and temperature monitoring, and current sensing.
The 40-V EPC2366 Gen 7 eGaN FET offers an ultra-low RDS(on) of 0.84 mΩ, enabling extremely efficient power conversion and fast switching performance. The three-phase inverter solution can deliver up to 70 A peak (50 A RMS) of output current from input voltages ranging between 18 V and 30 V, making it suited for battery-powered systems operating around a 24-V supply.
The platform supports PWM switching frequencies up to 150 kHz, which is significantly higher than typical silicon-based motor drives, according to EPC. This reduces magnetic component size, minimizes switching losses, and improves overall system responsiveness, the company said.
The board, measuring 79 × 80 mm, provides high-bandwidth current sensing on all three phases, supporting measurements up to ±125 A, while phase and DC-bus voltage sensing provide the feedback required for precise monitoring and advanced motor control techniques such as field-oriented control (FOC) and space-vector PWM. Other features include shaft encoder and Hall-sensor interfaces and multiple test points.
Applications include drones, robotics, industrial automation, handheld power tools, and other compact electromechanical systems in which high efficiency and power density are critical.
The EPC91121 reference design board and devices are available now from DigiKey and Mouser. Design support files, including schematic, bill of materials, and Gerber files, are available on the EPC91121 product page.
EPC’s EPC91121 BLDC motor drive evaluation board (Source: Efficient Power Conversion)
Renesas Electronics Corp. unveiled its high-voltage TP65B110HRU at APEC 2026, claiming the first bidirectional switch using depletion-mode (d-mode) GaN technology, capable of blocking both positive and negative currents in a single device with integrated DC blocking. Target applications include single-stage solar microinverters, AI data centers, and on-board EV chargers.
The device simplifies power converter designs and replaces conventional back-to-back FET switches with a single low-loss, fast-switching, easy-to-drive device, Renesas said. “By integrating bidirectional blocking functionality on a single GaN product, power conversion can be achieved in a single stage using fewer switching devices.”
This is an alternative to today’s high-power-conversion designs that use unidirectional silicon or SiC switches, which block current in only one direction when in the off state. Many of these single-stage designs use conventional unidirectional switches back to back, Renesas said, resulting in a fourfold increase in switch count and reduced efficiency.
Renesas’s 650-V SuperGaN devices are based on a proprietary, normally off technology. The TP65B110HRU combines a high-voltage bidirectional d-mode GaN chip co-packaged with two low-voltage silicon MOSFETs with high threshold voltage (3 V), high gate margin (±20 V), and built-in body diodes for efficient reverse conduction. It offers high-dV/dt capability of >100 V/ns, with minimum ringing and short delays during on/off transitions.
Comparing the Renesas bidirectional GaN switch with enhancement-mode bidirectional GaN devices, the Renesas switch is compatible with standard gate drivers that require no negative gate bias. The result is a simpler, lower-cost gate-loop design and fast, stable switching in both soft- and hard-switching operations without a performance penalty, the company said.
The TP65B110HRU bidirectional GaN switch, housed in a TOLT top-side-cooled package, is available now, along with the RTDACHB0000RS-MS-1 evaluation kit. Also available are two reference solutions (500-W Solar Microinverter and Three-Phase Vienna Rectifier System) that leverage the TP65B110HRU and other Renesas-compatible devices.
Renesas’s TP65B110HRU bidirectional GaN switch (Source: Renesas Electronics Corp.)
Renesas also announced a GaN charging solution for industrial and IoT electronics applications. The GaN-based Half-Wave LLC (HWLLC) platform supports 500-W or higher operation across IoT, industrial, and infrastructure systems. The HWLLC converter topology scales a compact power architecture from 100-W-class designs to 500 W, targeting high-speed chargers for power tools, e-bikes, and other appliances.
The topology addresses the size, heat, and efficiency penalties of legacy topologies. It also helps designers move beyond 100-W USB-C charging devices and adopt 240-W USB EPR charging to shrink proprietary brick chargers in smartphones, laptops, and many gaming systems, Renesas said. The fast-charging technology was recently incorporated into Belkin’s GaN-based Z-Charger that features Renesas’s zero-standby-power (ZSP) chip with advanced SuperGaN d-mode GaN technology.
Building on its proprietary ZSP technology, the solution encompasses four new controller ICs, including the RRW11011 interleaved power-factor correction (PFC) and HWLLC combo controller, the RRW30120 USB-PD protocol and closed-loop controller, the RRW40120 half-bridge GaN gate driver, and the RRW43110 intelligent synchronous rectifier controller.
The RRW11011 PFC with phase-shift control cancels ripple, reduces component size and cost, and balances current. It also allows designers to lower operating temperature while delivering the wide output range (5 V to 48 V) required by USB Extended Power Range (EPR) and other variable-load charging systems. The RRW30120 USB-PD protocol and closed-loop controller achieve a maximum USB power delivery of 240 W. Together in a 240-W USB EPR power adapter design, the solution claims the highest power density in the industry (3 W/cc) and 96.5% peak efficiency.
The four devices enabling the HWLLC solution are available in addition to the EBC10293 240-W USB-PD EPR evaluation board. Reference solutions include the 240-W AC/DC Adapter and 300-W Lighting Power Platform.
Renesas’s Half-Wave LLC GaN charging solution (Source: Renesas Electronics Corp.)
AI data centers
Infineon Technologies AG released several power solutions aimed at AI data centers, including voltage regulation devices, digital power controller ICs, and CoolGaN-based high-voltage intermediate bus converter (IBC) reference designs.
Infineon expanded its voltage regulation portfolio with the XDPE1E digital multiphase PWM buck controllers and TDA49720/12/06 PMBus PoL voltage regulators to deliver higher compute performance per rack in AI data centers as next-generation platforms drive new requirements for power architectures.
The XDPE1E3G6A and XDPE1E496A, digital three- and four-loop multiphase PWM buck controllers, respectively, target multi-processor AI platforms and advanced VR inductor topologies. They offer highly configurable phase allocation and fully programmable phase firing order and support multiple protocols, including PMBus, AVSBus, SVID, and SVI3. Digital features, including active transient response, fast DVID, automatic phase shedding, and PFM, help address dynamic AI loads. Infineon also offers built-in tools such as Digital Scope, Black Box recording, and protection features.
To address the increasing number of non-core rails in AI systems, which require efficient regulation with accurate monitoring and control, Infineon developed the TDA49720/12/06 family of fully integrated PoL DC/DC buck regulators with PMBus-compliant digital telemetry. This family, with 6-A, 12-A, and 20-A options in 3 × 3-mm and 3 × 3.5-mm packages, helps maximize power density and simplify layout on accelerator cards and server boards.
The PMBus telemetry enables accurate reporting of key parameters, including output voltage, load current, input voltage, and die temperature. The devices also feature a proprietary valley-current-mode constant-on-time control scheme that enables fast transient response, cycle-by-cycle current limiting, and support for all-MLCC output capacitance designs. The devices operate from 2.7-V to 16-V input and across a wide junction temperature range of −40°C to 150°C.
Infineon’s XDPE1E496A digital multiphase PWM buck controller (Source: Infineon Technologies AG)
Infineon also expanded its XDP digital power controller IC family with the XDPP1188-200C, targeting higher power levels in AI servers. The device supports intermediate bus conversions from 48 V to 12 V or lower, as well as future higher-voltage DC systems, including the conversion of ±400-V or 800-VDC bus voltage to 48 V, 24 V, or 12 V.
The XDPP1188-200C complements Infineon’s CoolGaN-based high-voltage IBC reference designs (also introduced at APEC) and supports custom high-/medium-voltage IBC designs up to 800 VDC in AI data centers. The XDPP1188-200C allows optimization for customer-specific requirements. In 48-V systems, the controller works seamlessly with medium-voltage IBC modules, delivering an optimized power supply chain from the intermediate bus to processor voltage regulation.
Key features include an advanced feed-forward control mechanism to improve response time and stability under dynamic input transient conditions, and a nonlinear fast transient response to handle the rapid power demand fluctuations in AI servers. The device also integrates advanced power management techniques at light-load conditions and supports bidirectional configuration, enabling flexible power management.
The XDPP1188-200C digital power controller is sampling now. Volume production is expected in the first quarter of 2026.
Infineon’s XDPP1188-200C digital power controller (Source: Infineon Technologies AG)
Infineon also introduced two high-voltage IBC reference designs to help customers make the shift to AI server power architectures powered by ±400 VDC and 800 VDC.
Leveraging Infineon’s 650-V CoolGaN switches, the reference designs address two architectures: The 800-VDC to 50-V design is an intermediate stage for downstream 48-V IBC modules, while the 800-VDC to 12-V design enables direct conversion for compact server boards. The XDPP1188-200C digital controller is available for custom implementations, as noted earlier, with output voltages of 48 V, 24 V, or 12 V.
The 800-VDC or ±400-V to 50-V high-voltage IBC reference design demonstrates more than 98% efficiency at full load. Leveraging Infineon’s high- and medium-voltage CoolGaN switches, EiceDRIVER gate drivers, and a PSOC microcontroller (MCU), it consists of two 3-kW 400-V to 50-V converter building blocks, which are configured in an input-series-output-parallel (ISOP) arrangement. It scales to 6-kW TDP and supports up to 10.8 kW for 400 µs, using a planar PCB integrated transformer with multiple synchronous rectifier stages and soft switching across all load conditions to reduce electromagnetic interference. It claims an exceptional 2.5-kW/in.3 power density in a 60 × 60 × 11-mm form factor.
The second reference design is an ultra-thin, high-voltage IBC demo board with an 8-mm height, which converts an 800-VDC bus voltage directly to a 12-V intermediate rail. The design delivers 6-kW TDP and supports up to 10.8 kW for 400 µs. It features a power density above 2,300 W/in.3, up to 98.2% peak efficiency, and 97.1% efficiency at full load. It operates as an ISOP half-bridge LLC converter, leveraging Infineon’s 650-V CoolGaN and 40-V OptiMOS 7 switches, with EiceDRIVER gate drivers and a PSOC MCU.
Infineon’s high-voltage IBC demo board (Source: Infineon Technologies AG)
A host of other semiconductor solution providers highlighted their latest and greatest at APEC 2026. Toshiba America Electronic Components Inc., for example, showcased several new products and technologies, ranging from its UMOS 11 MOSFETs and top-side-cooled TOGT package to SiC modules and MCU and motor control solutions.
On display were Toshiba’s expanded family of UMOS 11 MOSFETs in industry-standard packages. These devices feature improved switching characteristics and reduced RDS(on) per area compared with the previous UMOS 10 generation. The company also highlighted its WBG semiconductor portfolio, including high-power SiC power modules for grid-level and industrial systems; 750-V and 1,200-V SiC die and modules for automotive drivetrain inverter applications; and GaN devices.
Toshiba also featured its top-side-cooled TOGT packaging that targets high-power-density applications. It enables heat dissipation through the top of the package to reduce thermal stress on the PCB.
Other solutions presented at the show include MCU and motor control solutions (MCU, MCD, and SmartMCD devices) for automotive body electronics, electronic control units (ECUs), and industrial control applications. System reference designs highlighted include high-efficiency power supply platforms such as 3-kW server PSUs for data center applications, automotive ECU power architectures, and motor control reference designs for pump and power tool systems.
Toshiba’s UMOS 11 MOSFETs (Source: Toshiba America Electronic Components Inc.)
The post APEC 2026 showcases advances in power electronics appeared first on EDN.
A fully floating BJT-based LED current driver

The circuit in Figure 1 combines a VBE-referenced current source with a current mirror to implement a simple two-terminal, fully floating LED current sink or source. This approach is well-suited for applications in which tight current accuracy is not required, such as driving LED strings where a 5–10% current tolerance is acceptable.
Figure 1 A simple, fully floating LED current driver based on a VBE-referenced current source and a BJT current mirror. The circuit operates as either a current sink or source and supports output currents up to 100 mA. Note: R2=R3. All resistors are ¼ W and 5%.
The LED driver can drive an arbitrary number of series-connected LEDs, provided the available supply voltage is at least 2.3 V. The topology supports both high-side and low-side operation, as shown in Figure 2. Output current ranges from a few milliamps up to 100 mA, with no requirement for heat sinks.

Figure 2 High-side and low-side operating configurations enabled by the fully floating nature of the LED driver.
The current source formed by BJTs Q1 and Q2 is set by resistor R1. A current mirror implemented with BJTs Q3 and Q4, using equal emitter resistors (R2 = R3), forces nearly equal currents in branches I1 and I2, as long as the voltage drop across the emitter resistors is at least 0.5 V. This requirement helps compensate for VBE mismatch between the transistors. The total LED current is therefore doubled, while power dissipation is evenly shared among the devices.
Experimental data (Table 1) confirm the expected behavior: output current scales with R1, and the minimum supply voltage increases from 2.3 V at 9.3 mA to 2.8 V at 97 mA, consistent with the headroom required by the VBE-referenced source and mirror.
R1       R2 = R3   Iout      Vsupply(min)
150 Ω    100 Ω     9.3 mA    2.3 V
82 Ω     56 Ω      18.2 mA   2.4 V
33 Ω     22 Ω      44 mA     2.5 V
15 Ω     10 Ω      97 mA     2.8 V
Table 1 Experimental data showing R1, R2/R3, and corresponding Iout and Vsupplymin.
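As a sanity check, the measured currents in Table 1 track the simple design relation Iout ≈ 2·VBE/R1 (the source current set by R1, doubled by the mirror), and the drop across each emitter resistor sits near the 0.5-V target mentioned above. The VBE value below is an assumed typical 0.7 V, not a figure from the article.

```python
# Sanity-check Table 1 against the VBE-referenced design equations.
VBE = 0.7  # assumed typical base-emitter voltage, in volts

rows = [  # (R1 in ohms, R2 = R3 in ohms, measured Iout in amps)
    (150, 100, 9.3e-3),
    (82,  56, 18.2e-3),
    (33,  22, 44e-3),
    (15,  10, 97e-3),
]

for r1, r2, iout_meas in rows:
    iout_pred = 2 * VBE / r1          # source current doubled by the mirror
    v_emitter = (iout_meas / 2) * r2  # drop across each emitter resistor
    print(f"R1={r1:>3} Ω: predicted {iout_pred*1e3:5.1f} mA, "
          f"measured {iout_meas*1e3:5.1f} mA, "
          f"emitter drop {v_emitter*1000:.0f} mV")
```

Every row lands within about 10% of the prediction, and every emitter-resistor drop falls close to 0.5 V, consistent with the stated design rule for absorbing VBE mismatch.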
With a minimum operating voltage of approximately 2.8 V, the circuit dissipates about 280 mW at the maximum output current of 100 mA. Higher supply voltages reduce efficiency due to increased power dissipation in the driver.
Because the LED current is VBE-dependent, it exhibits temperature sensitivity, with a temperature coefficient of approximately -0.3 %/°C. Using a resistor with a negative temperature coefficient for R1 can partially compensate for this effect.
—Luca Bruno has a master’s degree in electronic engineering from the Politecnico di Milano. He has written 16 EDN Design Ideas.
Related Content
- LED strings driven by current source/mirror
- Current mirror drives multiple LEDs from a low supply voltage
- A current mirror reduces Early effect
- A two-way mirror — current mirror that is
The post A fully floating BJT-based LED current driver appeared first on EDN.
The truth about AI inference costs: Why cost-per-token isn’t what it seems

The AI industry has converged on a deceptively simple metric: cost per token. It’s easy to understand, easy to compare, and easy to market. Every new system promises to drive it lower. Charts show steady declines, sometimes dramatic ones, reinforcing the impression that AI inference is rapidly becoming cheaper and more efficient.
But simplicity, in this case, is misleading.
A token is not a fundamental unit of cost in isolation. It is the visible output of a deeply complex system that spans model architecture, hardware design, system scaling, memory behavior, power consumption, and operational efficiency. Reducing that complexity to a single number creates a dangerous illusion: that improvements in cost per token necessarily reflect improvements in the underlying system.
They often do not.
To understand what is really happening, we need to step back and look at the full system—specifically, the total cost of ownership (TCO) of an AI inference deployment.
From benchmark numbers to real systems
Most comparisons in the industry start from benchmark results. Inference benchmarks such as MLPerf provide a useful baseline because they fix key variables—model, latency constraints, and workload characteristics—allowing different systems to be evaluated under the same conditions.
Take a large-scale model such as Llama 3.1 405B. On a modern GPU system like Nvidia’s GB200 NVL72, MLPerf reports an aggregate throughput that translates to roughly 138 tokens per second per accelerator. An alternative inference-focused architecture might deliver a lower figure—say, 111 tokens per second per accelerator.
At first glance, the conclusion seems obvious: the GPU is faster.
But this is precisely where the problem begins. That number describes the performance of a single accelerator under specific benchmark conditions. It says very little about how the system behaves when deployed at scale.
And in real-world data centers, scale is everything.
The illusion of linear scaling
In theory, performance should scale linearly with the number of accelerators. Double the hardware, double the throughput. In practice, this never happens. Communication overhead, synchronization, memory contention, and architectural inefficiencies all conspire to reduce effective performance as systems grow.
This effect is captured by what is often called scaling efficiency. It’s one of the most important and most overlooked parameters in AI infrastructure.
A system that achieves 97% scaling efficiency will behave differently from one that achieves 85%, even if their per-chip performance appears comparable. Over dozens or hundreds of accelerators, that difference compounds rapidly.
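To make that compounding concrete, here is a minimal Python sketch. The per-accelerator throughputs (138 and 111 tokens/s) come from the MLPerf comparison above; modeling scaling loss as a fixed efficiency per doubling of chips, and the 85% versus 97% figures, are illustrative assumptions:

```python
import math

# Sketch: how per-doubling scaling efficiency compounds at cluster scale.
# A cluster of n chips goes through log2(n) doublings; if each doubling
# retains only a fraction `eff` of ideal throughput, the losses multiply.

def cluster_throughput(per_chip_tps, n_chips, eff_per_doubling):
    """Effective tokens/s for the whole cluster under the doubling model."""
    doublings = math.log2(n_chips)
    return n_chips * per_chip_tps * (eff_per_doubling ** doublings)

gpu = cluster_throughput(138, 64, 0.85)       # faster chip, weaker scaling
dataflow = cluster_throughput(111, 64, 0.97)  # slower chip, stronger scaling

print(round(gpu))       # ~3300 tokens/s
print(round(dataflow))  # ~5900 tokens/s
```

Under these assumptions the nominally slower accelerator delivers well over half again as much cluster-level throughput, which is the point: per-chip benchmark numbers say little about behavior at scale.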
This is where inference-specific architectures begin to separate themselves.
Unlike training, inference does not require backpropagation. The execution flow is more predictable, the data movement patterns are more structured, and the opportunity for optimization is significantly greater. Architectures that are purpose-built for inference can exploit this determinism to sustain high utilization across large systems.
One architecture is a case in point. By moving away from the traditional GPU execution model and adopting a deeply pipelined, dataflow-oriented design, it minimizes the coordination overhead that typically erodes scaling efficiency. The result is not just higher peak utilization but, more importantly, consistently high utilization at scale.
When the system flips the narrative
Once performance is evaluated at the level that actually matters—servers, racks, and data centers—the comparison often changes.
Throughput per server depends not only on per-accelerator performance, but also on how many accelerators are packed into a system and how efficiently they work together. Throughput per rack adds another layer, incorporating system density and infrastructure constraints. When power is introduced into the equation, the relevant metric becomes throughput per kilowatt.
It is at this level that architectural differences become impossible to ignore.
GPU-based systems are optimized for flexibility. They can handle a wide range of workloads, but that generality introduces inefficiencies when running highly structured inference tasks. Data must move between memory hierarchies, threads must be synchronized, and execution units often sit idle waiting for dependencies to resolve.
The architecture mentioned above takes a different approach. By eliminating the traditional memory hierarchy bottlenecks and replacing them with a large, flat register file combined with a dataflow execution model, it effectively removes the “memory wall” that limits sustained performance in GPU systems. Data is kept close to compute, and execution proceeds in a continuous pipeline rather than in discrete, synchronized steps.
The consequence is subtle but powerful: even if peak per-chip performance appears lower, the effective throughput at the system level can be significantly higher. More importantly, that performance is achieved with far greater energy efficiency.
Power: The constraint that doesn’t go away
Energy consumption is not just a cost factor; it’s the constraint that ultimately defines the scalability of AI infrastructure.
Electricity prices, power usage effectiveness (PUE), and utilization rates are not theoretical constructs. They are operational realities that directly impact the economics of every deployment. A system that consumes less energy per token has an intrinsic advantage that compounds over time.
This is where inference-native architectures again demonstrate their value.
Because the architecture’s design minimizes unnecessary data movement and maximizes pipeline utilization, it delivers more tokens per unit of energy. The metric that matters is not peak FLOPS, but tokens per kilowatt—and on that axis, architectural efficiency becomes the dominant factor.
In large-scale deployments, this translates directly into lower operating costs and improved total cost of ownership.
The hidden influence of workload assumptions
Benchmarking does not eliminate bias—it simply moves it.
Parameters such as context length, output token size, and concurrency have a profound impact on system behavior. A model running at 128K context imposes different demands than one operating at 8K. Latency, memory pressure, and throughput all shift accordingly.
Architectures that rely on heavy memory movement are particularly sensitive to these changes. As context length grows, the cost of moving data becomes increasingly dominant.
By contrast, architectures that localize data and streamline execution are more resilient to these shifts. This is another area where the architecture’s register-centric, dataflow design provides an advantage: it reduces dependence on external memory bandwidth and maintains more consistent performance across varying workloads.
From metrics to economics
When performance, power, and infrastructure are combined, the discussion moves from engineering to economics.
Total cost of ownership captures the full picture: capital expenditure, operating costs, energy consumption, and system utilization over time. It reflects not just how fast a system can run, but how efficiently it can deliver value in a real deployment.
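A back-of-envelope TCO model makes the dependencies visible. Every number below is an illustrative assumption, not a vendor figure; the point is only that capex amortization, power draw, PUE, electricity price, throughput, and utilization all feed the same cost-per-token result:

```python
# Sketch: cost per million tokens from system-level TCO inputs.
# All values are hypothetical, chosen only to show the structure.

def cost_per_million_tokens(capex_usd, years, power_kw, pue,
                            elec_usd_per_kwh, tokens_per_sec, utilization):
    hours = years * 8760                          # amortization window
    capex_hourly = capex_usd / hours              # $/h of hardware
    energy_hourly = power_kw * pue * elec_usd_per_kwh  # $/h of power
    tokens_hourly = tokens_per_sec * 3600 * utilization
    return (capex_hourly + energy_hourly) / tokens_hourly * 1e6

# Same hardware price and throughput; only power, PUE, and utilization differ.
a = cost_per_million_tokens(3_000_000, 5, 120, 1.3, 0.08, 8000, 0.60)
b = cost_per_million_tokens(3_000_000, 5, 70, 1.2, 0.08, 8000, 0.85)
print(round(a, 2), round(b, 2))  # ~4.69 vs ~3.07 $/M tokens
```

Two systems with identical sticker price and identical benchmark throughput end up more than 50% apart on cost per token, purely from power and utilization, which is exactly the information a headline cost-per-token figure hides.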
This is where many cost-per-token claims fall apart.
A lower cost per token can be achieved in multiple ways—by improving efficiency, by adjusting assumptions, or by accepting lower margins. Without a system-level view, it’s impossible to distinguish between these scenarios.
What matters is not the headline number, but the underlying drivers.
The risk of optimizing the wrong thing
The industry’s focus on cost per token has created a subtle distortion. Instead of optimizing systems, we risk optimizing metrics. This is not unique to AI. Every technology cycle has its preferred metrics, and every metric can be gamed if taken out of context.
A truly efficient system is one that aligns performance, energy consumption, and scalability. It delivers consistent throughput, minimizes waste, and operates effectively under real-world constraints. This is precisely the direction that inference-specific architectures are taking.
The aforementioned architectural approach illustrates this shift. Rather than attempting to adapt a general-purpose architecture to an increasingly specialized workload, it starts from the workload itself and builds upward. The result is a system that is not only efficient in theory, but also in practice—at scale, under load, and within the constraints of real data centers.
Toward a more honest conversation
None of this diminishes the achievements of GPU-based systems. They have been instrumental in the rise of modern AI and remain incredibly powerful platforms. But the workloads are changing. Large language model inference is not the same as training, and it’s not the same as graphics. As the industry shifts toward deployment at scale, the limitations of general-purpose architectures become more apparent.
At the same time, new architectures, as described above, are emerging that are designed specifically for these workloads. They may not always win on peak performance metrics, but they are optimized for the realities of inference: predictable execution, high utilization, and energy efficiency.
If we want to compare these systems fairly, we need to move beyond simplified metrics and toward system-level evaluation.
The bottom line
Cost per token is not wrong—but it is incomplete.
The real question is not how cheaply a token can be produced in isolation, but how efficiently a system can deliver tokens over time, at scale, within the constraints of power, infrastructure, and workload demands.
When viewed through that lens, the path forward becomes clearer.
The next generation of AI infrastructure will not be defined by the highest peak performance or the most aggressive benchmark result. It will be defined by architectures that align performance with efficiency, and efficiency with economics.
And in that context, the industry may find that the most important innovation is not faster hardware—but better architecture.
Lauro Rizzatti is a business development executive at VSORA, a pioneering technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Related Content
- Chiplets Are The New Baseline for AI Inference Chips
- Custom AI Inference Has Platform Vendor Living on the Edge
- The next AI frontier: AI inference for less than $0.002 per query
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
- Purpose-built AI inference architecture: Reengineering compute design
The post The truth about AI inference costs: Why cost-per-token isn’t what it seems appeared first on EDN.
AOI receives new order for 800G data-center transceivers from major hyperscale customer
ROHM has added New Lineup of 17 High-Performance Op Amps Enhancing Design Flexibility
ROHM has added the new CMOS Operational Amplifier (op amp) series “TLRx728” and “BD728x” to its lineup. These are suitable for a wide range of applications, including automotive, industrial, and consumer systems. A broad lineup also makes product selection easier.
In recent years, demand for high-accuracy op amps has been rapidly increasing as automotive and industrial systems become more sophisticated, demanding higher speed, better precision, and higher efficiency. In applications requiring amplification of sensor outputs, minimising signal error and delay is essential. To meet these requirements, a well-balanced set of key characteristics is needed, including input offset voltage, noise, and slew rate.
These new products are high-performance op amps that offer a low input offset voltage, low noise, and a high slew rate. The TLRx728 features an input offset voltage of 150 μV (typ.), while the BD728x offers 1.6 mV (typ.). Both series have a noise voltage density of 12 nV/√Hz at 1 kHz and a slew rate of 10 V/μs. They are therefore suitable for a wide range of precision applications, including sensor signal processing, current detection circuits, motor driver control, and power supply monitoring systems. Both series are designed to balance versatility and high performance rather than being limited to specific applications.
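Two quick figures of merit can be derived from those numbers. The 10 V/μs slew rate and 12 nV/√Hz noise density are from the release above; the 5-V peak output swing and 100-kHz noise bandwidth used below are illustrative choices, not ROHM specifications:

```python
import math

# Sketch: full-power bandwidth and integrated input noise from the
# headline op-amp specs. Swing and bandwidth are example assumptions.

SR = 10e6   # slew rate, V/s (10 V/us)
EN = 12e-9  # input noise density, V/sqrt(Hz)

def full_power_bandwidth(slew_rate, v_peak):
    """Largest sine frequency reproducible at v_peak without slew limiting."""
    return slew_rate / (2 * math.pi * v_peak)

def integrated_noise(density, bandwidth_hz):
    """RMS input noise over a flat (white-noise) bandwidth."""
    return density * math.sqrt(bandwidth_hz)

print(round(full_power_bandwidth(SR, 5.0) / 1e3))   # ~318 kHz at 5 V peak
print(round(integrated_noise(EN, 100e3) * 1e6, 2))  # ~3.79 uV RMS over 100 kHz
```

This is the kind of sanity check that matters when choosing between these parts for sensor front ends: the slew rate sets the usable large-signal bandwidth, while the noise density times the root of the signal bandwidth sets the resolution floor.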
Application Examples
Automotive equipment, industrial equipment, and consumer electronics.
Example use case: Sensor signal processing, current detection circuits, motor driver control, and power supply monitoring systems.
The post ROHM has added New Lineup of 17 High-Performance Op Amps Enhancing Design Flexibility appeared first on ELE Times.
EEVblog 1743 - Mechanical Vibration Detection with your Oscilloscope Probe
A trip to Shevchenko sites with the Trade Union Committee
95 Kyiv Polytechnic employees took part in a cultural and educational trip to Shevchenko-related sites in the Cherkasy region.
KPI students among the table tennis leaders at the XXI Universiade
🏓 Our students performed successfully in the table tennis competitions at the XXI Kyiv Universiade: two university teams at once placed among the best and added important points to KPI's overall standings.
Govt Infuses ₹258 Crore Into 128 Startups to Drive DeepTech and IP Creation
In a major move to solidify India’s standing as a global high-tech hardware hub, the Union Government has announced a strategic investment of ₹257.77 crore (approximately $31 million) into 128 technology startups.
The announcement, shared by Minister of State for Electronics and IT, Jitin Prasada, in a written reply to the Rajya Sabha on Friday, March 27, 2026, highlights a shift toward “risk capital” for sectors critical to national security and economic self-reliance.
The “Fund of Funds” Mechanism
The investment was executed through the Electronics Development Fund (EDF), which operates under a “Fund of Funds” model. Instead of direct equity, the EDF acts as an anchor investor in eight professionally managed “Daughter Funds” (early-stage venture and angel funds).
These Daughter Funds have leveraged the government’s initial contribution to mobilise a total of ₹1,335.77 crore in follow-on investments for startups specialising in:
- Semiconductor Design & Nano-electronics
- Cybersecurity & AI/ML
- Robotics & IoT
- Medical Electronics (HealthTech)
As of late February 2026, the ripple effect of this capital injection has already yielded significant socio-economic returns:
- Job Creation: Over 22,700 high-skilled jobs have been generated within the supported startups.
- Intellectual Property: The companies have successfully filed or acquired more than 300 IPs, reinforcing India’s domestic design capabilities.
- Profitable Exits: The government has already realised ₹173.88 crore from 37 successful exits, proving that DeepTech ventures are becoming increasingly viable for investors.
While the government aims for pan-India growth, the current investment data shows a strong concentration in existing tech corridors. Bangalore remains the undisputed leader, housing 88 of the 128 funded startups.
A Strategic Pivot Toward Self-Reliance
This news comes on the heels of the Ministry of Electronics and IT (MeitY) approving 29 additional projects under the Electronics Component Manufacturing Scheme, involving a cumulative investment of ₹7,104 crore.
By focusing on the “Daughter Fund” model, the government ensures that capital is managed by industry experts while maintaining a minority stake—a move designed to encourage private venture capital to take more “brave” bets on hardware and indigenous R&D rather than just consumer-facing software apps.
“The goal is to build a self-sustaining electronics ecosystem,” stated the Ministry. “We aren’t just looking for the next ‘unicorn’; we are looking for the next breakthrough in Indian-owned IP.”
By: Shreya Bansal, Sub-Editor
The post Govt Infuses ₹258 Crore Into 128 Startups to Drive DeepTech and IP Creation appeared first on ELE Times.
Igor Sikorsky Kyiv Polytechnic Institute retains strong positions in the QS World University Rankings by Subject 2026
The results of one of the world's most influential university rankings have been published, and Kyiv Polytechnic has once again confirmed its status as the leader of technical education in Ukraine.
The voice of "Akademik Vernadsky" is heard thanks to a KPI graduate
The 31st Ukrainian Antarctic Expedition (UAE) has recently begun its work at the Akademik Vernadsky station. This team has replaced colleagues from the 30th UAE and will work there for a year. The position of system administrator is held by KPI graduate Oleksandr Matsibura; this will be his second wintering at the station.
Nuvoton Launches Upgraded Driving Smart Device, NuMicro M3331 Series MCU
As the global transition toward industrial automation and smart living accelerates, the security and processing efficiency of microcontrollers (MCUs) have become decisive factors for enterprises’ success in the business-to-business market. Nuvoton Technology has announced the launch of its new NuMicro M3331 series 32-bit microcontroller. Powered by the Arm Cortex-M33 core, the M3331 series delivers exceptional performance at operating frequencies of up to 180 MHz and integrates TrustZone technology, providing a robust hardware foundation for industrial control, smart factories, smart buildings, and renewable energy.
The M3331 series is more than just a hardware platform; it is an integrated solution designed to address the challenges customers face when processing complex control algorithms and protecting intellectual property (IP). Featuring the Cortex-M33 core with built-in DSP instruction set and a single-precision floating point unit (FPU), it runs up to 180 MHz. The M3331 series is built with comprehensive security mechanisms. Through hardware-level Secure Boot, it ensures that the system executes only certified and authorised firmware from startup, establishing an immutable root of trust. To protect core IP assets, it features eXecute-Only-Memory (XOM) to safeguard core algorithms and eliminate the risk of code leakage. Additionally, the TrustZone technology partitions a secure execution environment to effectively defend against malicious attacks.
Designed for reliability, the M3331 series supports a wide operating temperature range from -40°C to +105°C and exhibits superior interference resistance (ESD HBM 4 kV / EFT 4.4 kV), significantly reducing the risk of downtime caused by environmental factors. To further enhance system reliability, the 512 KB Flash memory supports Error Correction Code (ECC) for detecting and repairing bit-flip errors. Furthermore, within the 320 KB SRAM, a hardware parity check is provided for the 64 KB core area, achieving true industrial-grade system resilience.
To meet the diverse communication demands of the IoT era, the M3331 series introduces an I3C interface and two CAN FD controllers, greatly increasing data throughput between sensors and control nodes. For high-speed peripherals and mass storage, it includes a built-in USB 2.0 High-Speed OTG controller (with on-chip PHY) and an SDH (Secure Digital Host Controller) interface, delivering exceptional performance for both gaming products requiring low-latency transmission and smart consumer devices needing high-bandwidth storage.
Moreover, the M3331 series features a 12-bit ADC with a sampling rate of up to 4.2 Msps, accurately capturing subtle changes in analogue signals. With up to 48 PWM outputs, it provides precise control solutions for professional photography lighting and stage lighting. Specifically, the series is equipped with ELLSI (Enhanced LED Light Strip Interface) and up to 10 LLSI (LED Light Strip Interface) interfaces, supporting next-generation gaming ARGB LED control protocols. This offloads the CPU and reduces development difficulty, enabling brilliant and fluid dynamic LED effects. The M3331 series offers a variety of package options, from the compact QFN 33 (4×4 mm) to the high-pin-count LQFP 128 (14×14 mm), helping customers optimise their PCB layouts.
To accelerate time-to-market, Nuvoton provides NuMaker-M3333KI and NuMaker-M3334KI evaluation boards, along with full support for mainstream RTOS (FreeRTOS, Zephyr, RT-Thread) and GUI libraries (emWin, LVGL). This ecosystem empowers customers to build stable system solutions rapidly.
The M3331 series consists of two subseries: the M3333 series (without USB 2.0 support) and the M3334 series (with USB 2.0 support).
The post Nuvoton Launches Upgraded Driving Smart Device, NuMicro M3331 Series MCU appeared first on ELE Times.
MoU signed to discuss integrating Toshiba Electronic Devices & Storage’s semiconductor business, ROHM’s semiconductor business, and Mitsubishi Electric’s power device business
Not pretty, but hopefully functional
| I have a brass annealer project and thought it would be easy to make with protoboard. It was not, at least not for me. The welder tip was too large and there are bad joints everywhere. Well, if it works 🤷 [link] [comments] |
TP-Link’s Kasa EP25: Energy monitoring for a hoped-for utility bill nose-dive

How easy is it to analyze and optimize how much power the device connected to a smart plug is drawing? The answer depends in part on which hardware and firmware version you’re running.
Next up in my ongoing TP-Link smart home device ecosystem series of hands-on evaluations and teardowns:
- Tapo or Kasa: Which TP-Link ecosystem best suits ya?
- TP-Link’s Kasa HS103: A smart plug with solid network connectivity
- TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again
is the EP25 smart plug, which builds on the EP10 foundation with two feature set additions: Apple HomeKit (and Siri, for that matter) support, along with energy monitoring capabilities.
I bought a two-pack (with an associated “P2” product name suffix) from Amazon’s Resale (formerly Warehouse) sub-site for $13.29 plus tax during a 30%-off promotion last November. They also come in an “EP25P4” four-pack version. I’ll start with some stock photos:






Although I’ve identified the EP25 as the enhanced sibling of the EP10, particularly referencing the naming-format commonality, those of you who’ve already analyzed the above graphic with device dimensions (not to mention the side switch location) might understandably be confused. Doesn’t it look more like the earlier, beefier, HS103? Indeed, it does. Here it is below the EP10:

And now underneath the HS103:

Perhaps the larger chassis was necessary to fit the additional feature-implementing circuitry? There’s one way to find out for sure: take it apart. So let’s start, as usual, with some box shots, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:


This isn’t what the box backside originally looked like, actually:

When it arrived, there was a barcode-inclusive sticker stuck to it, as is typical with products that cycle back through the Amazon Resale sub-site after initial sale-then-customer return:

But stuck to it was something I’d not experienced before: another sticker, with a smaller black rectangle near its center:

I had a sneaking suspicion that I’d find an RFID or other tracking tag on the other side. I was right:
Continuing around the outer package sides:



Judging from the already-severed clear tape on the bottom of the box, in contrast to the still-intact tape holding the top flap in place, I assumed the original owner got inside through the bottom-end pathway:

Yup. I don’t know what surprises me more (and I’ve also seen it plenty of times before): how brutishly some folks mangle the various packaging piece(s) to get to the device(s) inside, or that they still have the impudence to return the goods for refund afterwards. Now to cut the top’s transparent tape and try out the alternative entry path:

At least the original owner was thoughtful enough to put the sliver of quick-install literature back in the box prior to returning. Although, on second thought, he or she probably never even got to it before sending everything back. There was also this, reflective of its Apple protocol-friendliness:

You also may have already noticed in the earlier bottom-view open-box shot that one of the devices inside was still encased by a protective translucent sleeve, while that of the other device was missing. I went with the latter as my teardown victim, operating under the theory that its still-plastic-covered sibling was unused and therefore most likely to still be functional for future hands-on evaluation coverage purposes. Here’s our patient:





This last shot of the underside of the device:

Specifically, this closeup of the specs, including the all-important FCC ID (2AXJ4KP125M):
is as good a time as any to explain the background to my “The answer depends in part on which hardware and firmware version you’re running” comment in this post’s subtitle. Note the following lines of prose on the product support pages for the EP25P2 and EP25P4:
Vx.0=Vx.6/Vx.8 (eg:V1.0=V1.6/V1.8)
Vx.x0=Vx.x6/Vx.x8 (eg:V1.20=V1.26/V1.28)
Vx.30=Vx.32 (eg:V3.30=V3.32)
I’d mentioned in the prior teardown in this series that TP-Link tends to cycle through numerous hardware revisions throughout a product’s life, with each hardware iteration accompanied by multiple firmware versions, and the cadence combination resulting in inconsistent functionality (said another way: bugs). The EP25 is no exception to this general rule. That said, “inconsistent functionality” seemingly is particularly notable in this product case (grammatical tweaks by yours truly):
On Amazon, I bought a 2-unit box set of the EP25P2 (“Hardware 2.6” in the Kasa app), and a 4-unit box of the EP25P4 (“Hardware 1.0” in the Kasa app). They market them as the exact same product, but the EP25P2 has much better energy and power consumption data and graphs, and a cost tool. The other just has a crude power read out. It seems like something they should’ve been clear about, and like something they could fix in the app software. I’m annoyed they did this and will return the EP25P4.
FWIW, looking back both at the device bottom closeup and the earlier bottom box shot, I’m guessing “US/2.6” references hardware v2.6. Curiously, the four-pack (EP25P4) support page lists three hardware versions (V1.60, V1.80 and V2.60), albeit not the V1.0 h/w mentioned in the earlier Reddit post…and the two-pack (EP25P2) page mentions only V2.60.
Time to delve inside. The case-disassembly methodology was unsurprisingly identical to that for the earlier HS103, so in the interest of brevity I’ll spare you another iteration of the full image suite of steps. See the earlier teardown for ‘em; here’s today’s teardown subset. One upside this second time around: no blood loss by yours truly!







As before, I ‘spect this is the assembly subset that you’re all most interested in:
once again based on (among other things) a Hongfa HF32FV-16 relay (the tan rectangular “box” at far right). Multiple products, along with multiple hardware versions for each, may evolve in a general sense, but some things stay the same…
Detailing the “smarts”
And specifically, here’s the “action” end:
From this side, the embedded antenna is visible; the PCB is otherwise bare:
You can see the antenna from the other side, too, plus a more broadly interesting presentation:
The PCB “lay of the land” is reminiscent of that inside February’s HS103, including the respective switch and LED locations:
This time, however, the prior design’s Realtek RTL8710 has been upgraded to the dual-core RTL8720 (PDF), whose beefier processing “chops” are presumably helpful for implementing the added energy monitoring and HomeKit protocol capabilities, as well as with expanded internal RAM and (optional integrated) flash memory. In this particular design, however, the flash memory is external, taking the form of an Eon Silicon Solution EN25Q32B 32 Mbit SPI serial device. It’s in the upper right corner of the PCB, next to the LED and occupying one of the IC sites you might have already noticed was unpopulated in the HS103 implementation. The other previously unpopulated IC site, below the EN25Q32B, now houses a Shanghai Belling BL0937 (PDF) single-phase energy monitoring IC. Eureka!
Tying up loose ends
As with its TP-Link (but not more amenable Amazon) smart plug predecessors, I was unable to wedge the EP25’s PCB away from the rear half of its enclosure, so there’ll be no circuit board backside photos for you…from me, at least. Alternatively, you can always check out the ones published by the FCC. If you do, you may walk away amazed (as I was) by the total area dominance by multiple large globs of solder.
In closing, I thought I’d share a somewhat related video I found while doing my research. It’s a review of the HS110, the energy monitoring variant of TP-Link’s original HS100 smart plug that I tore down nine years back:
As those Virginia Slims commercials used to say, “You’ve come a long way.” And with that, I’ll turn it over to you for your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Tapo or Kasa: Which TP-Link ecosystem best suits ya?
- TP-Link’s Kasa HS103: A smart plug with solid network connectivity
- TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again
The post TP-Link’s Kasa EP25: Energy monitoring for a hoped-for utility bill nose-dive appeared first on EDN.
Radar transceiver scales for automated driving

NXP’s TEF8388 RF CMOS automotive radar transceiver supports Level 2+ and Level 3 ADAS, with a roadmap toward higher levels of automation. Operating in the 76- to 81-GHz FMCW radar band, it provides 8 transmitters and 8 receivers (8T8R), scalable to 32T32R configurations for both entry-level and high-performance systems. Paired with NXP radar processors, it forms an imaging radar platform that addresses diverse performance, cost, and regulatory requirements across global markets.

The TEF8388 delivers strong RF performance—14 dBm Pout and 12 dB NF—while keeping power consumption comparable to less integrated 3T4R devices. An on-chip M7 core provides flexible chirp programming, calibration, and functional safety management.
Occupying a 16×16-mm footprint, the TEF8388 uses an optimized pin layout and strategic launcher placement to enhance channel isolation and signal quality. It meets AEC-Q100 and ISO 26262 SEooC ASIL B requirements and operates over a junction temperature range of –40 °C to +150 °C.
Development support for lead customers is available now. Mass-market support will follow later in 2026.
The post Radar transceiver scales for automated driving appeared first on EDN.
HWLLC topology pushes fast charging to 500 W

A half-wave LLC (HWLLC) platform from Renesas includes four controller ICs rated for up to 500 W for high-speed chargers. The HWLLC AC/DC converter topology scales from 100 W to 500 W, enabling chargers for power tools, e-bikes, and other appliances without the size, heat, and efficiency penalties of legacy topologies.

Combined in a 240-W USB EPR power adapter design, the HWLLC approach achieves a power density of 3 W/cm³ and 96.5% peak efficiency—described as the industry’s highest power density. The 500-W envelope broadens application range, while USB-C EPR capability enables a move beyond 100-W charging.
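The density claim is easy to sanity-check. The 240-W rating and 3 W/cm³ figure are from the announcement above; the conversion to adapter volume is simple arithmetic:

```python
# Sketch: implied adapter volume from the quoted power density.
# 240 W and 3 W/cm3 are from the article; the math is the point.

def adapter_volume_cm3(power_w, density_w_per_cm3):
    """Enclosure volume implied by a given output power and power density."""
    return power_w / density_w_per_cm3

print(adapter_volume_cm3(240, 3.0))  # 80.0 cm3 for the 240-W design
```

An 80-cm³ envelope for a 240-W USB EPR adapter is what makes the "industry's highest power density" framing plausible; at a legacy 1 W/cm³ the same adapter would need roughly three times the volume.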
At the heart of the lineup is the RRW11011, an AC/DC primary-side digital controller with interleaved PFC and HWLLC operation. It delivers a wide 5-V to 48-V output for USB 3.1/3.2 EPR and other variable-load charging systems. The boost PFC stage minimizes ripple, total harmonic distortion, and EMI, while digital two-stage control enhances efficiency and reduces audible noise.
The platform also includes the RRW30120 USB PD 3.2 EPR controller with secondary-side regulation, the RRW40120 600-V half-bridge gate driver optimized for SuperGaN FETs and MOSFETs, and the RRW43110 synchronous rectifier controller.
The RRW11011, RRW30120, RRW40120, and RRW43110 are now in production, and samples are available for evaluation.
The post HWLLC topology pushes fast charging to 500 W appeared first on EDN.