Feed aggregator

Engineering the Future of High-Voltage Battery Management: Rohit Bhan on BMIC Innovation

ELE Times - 2 hours 53 min ago

ELE Times conducts an exclusive interview with Rohit Bhan, Senior Staff Electrical Engineer at Renesas Electronics America, discussing how advanced sensing, 120 V power conversion, ±5 mV precision ADCs, and ASIL D fault-handling capabilities are driving safer, more efficient, and scalable battery systems across industrial, mobility, and energy-storage applications.

Rohit Bhan has spent two decades advancing mixed-signal and system-level semiconductor design, with a specialization in AMS/DMS verification and battery-management architectures. Over the past year, he has expanded this foundation through significant contributions to high-voltage BMIC development, helping to push Renesas’ next generation of power-management solutions into new levels of accuracy, safety, and integration.

Rohit is highly regarded within Renesas and industry-wide for his ability to bridge detailed analog modeling, digital verification, and real-world application requirements. His recent work includes developing ±5 mV high-accuracy ADCs for precise cell monitoring, implementing an on-chip buck converter that reduces board complexity, and architecting 18-bit current-sensing solutions that enable more advanced state-of-charge and state-of-health analytics. He has also integrated microcontroller-driven safety logic into verification environments—supporting ASIL D-level fault detection and autonomous response—while contributing to Renesas’ first BMIC design.

Rohit’s expertise spans behavioral modeling, reusable verification environments, multi-cell chip operation, and stackable architectures for even higher cell counts. His end-to-end perspective—ranging from system definition and testbench development to customer engagement and product innovation—has made him a key contributor to Renesas’ battery-management roadmap. As the industry moves toward higher voltages, smarter analytics, and tighter functional-safety requirements, his work is helping shape the next wave of intelligent, reliable, and scalable BMIC platforms.

Here are excerpts from the interaction:

ELE Times: Rohit, you recently helped deliver a multi-cell BMIC architecture capable of operating at high voltage. What were the most significant engineering hurdles in moving to a new process technology for the first time, and what does that enable for future high-voltage applications?

ROHIT BHAN: From a design perspective, key challenges included managing high-stress device margins (such as parasitic bipolar effects and field-plate optimization), defining robust protection strategies for elevated operating conditions, integrating higher-energy power domains, maintaining analog accuracy across very large common-mode ranges, and working through evolving process design kit maturity. From a verification standpoint, this required extensive coverage of extreme transient conditions (including electrical overstress, surge, and load-dump-like events), which drove expanded corner matrices, mixed-signal simulation complexity, and tight correlation between silicon measurements and models to close the accuracy loop and ensure specified performance.

Looking forward, these advances enable future high-energy applications with increased monitoring and protection headroom, simpler system-level implementations, and improved measurement integrity. A mature high-stress-capable process combined with robust analog and IP libraries provides a scalable foundation for derivative products (such as variants with different channel densities or feature sets) and for modular or isolated architectures that support higher aggregate operating ranges—while preserving a common verification, validation, and qualification framework.

ELE Times: Among your 2025 accomplishments, your team achieved ±5 mV accuracy in cell-voltage measurement. Why is this level of precision so critical for cell balancing, battery longevity, and safety—especially in EV, industrial, and energy-storage use cases?

RB: If our measurement error is ±20 mV, the BMIC can “think” a cell is high when it isn’t or miss a genuinely high cell; the result is oscillatory balancing and residual imbalance that never collapses. Tightening to ±5 mV allows thresholds and hysteresis to be set small enough that balancing actions converge to a narrow spread instead of dithering. Over hundreds of cycles, a persistently mismatched cell becomes the pack limiter (early full/empty flags, rising impedance). Keeping the maximum cell delta small via ±5 mV metrology lowers the risk of one cell aging faster and dragging usable capacity and power down. In addition, early detection of abnormal dV/dt under load or rest hinges on accurate voltage plateaus and inflection points—errors here mask the onset of dangerous behavior.
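The threshold argument can be made concrete with a back-of-the-envelope bound: a balancing decision compares two measured cells, so the smallest spread the BMIC can act on without being fooled by metrology alone is roughly twice the per-channel error plus a margin. A minimal sketch (the 2 mV margin is an assumed illustrative value, not a Renesas figure):

```python
def min_balance_window_mV(meas_err_mV, margin_mV=2.0):
    """Smallest cell-to-cell spread a balancer can act on without being
    fooled by measurement error alone: two worst-case channel errors
    (one per cell in the comparison) plus an assumed design margin."""
    return 2.0 * meas_err_mV + margin_mV

print(min_balance_window_mV(20.0))  # 42.0
print(min_balance_window_mV(5.0))   # 12.0
```

With ±20 mV error the balancer cannot safely act below ~42 mV of spread, so cells dither around a wide band; at ±5 mV it can converge the pack to within ~12 mV.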

ELE Times: An on-chip buck converter is a major milestone in integration. How did you approach embedding such a high-voltage converter into the BMIC, and what advantages does this bring to OEMs in terms of board simplification, thermal performance, and cost?

RB: There are multiple steps involved in making this decision. It starts with finding the right process and devices, partitioning the power tree into clean voltage domains, and engineering isolation, spacing, and ESD for HV switching nodes. Finally, close the control loop details (gate drive, peak‑current trims, offsets) and verify at the system level, and correlate early in the execution phase.

For OEMs, this translates into simpler boards with fewer external components, easier routing, and a smaller overall footprint, while eliminating the need for external high-stress pre-regulators feeding the battery monitor, since the pack-level domain is managed on die. By internalizing the high-energy conversion and using cleaner harnessing and creepage strategies, elevated-potential nodes are no longer distributed across the board, significantly simplifying creepage and clearance planning at the power-management boundary. The result is fewer late-stage compliance surprises and integrated high-energy domains that are aligned with process-level reliability reviews, reducing the risk of re-layout driven by spacing or derating constraints. 

ELE Times: You also worked on an 18-bit ADC for current sensing. How does this resolution improve state-of-charge and state-of-health algorithms, and what new analytics or predictive-maintenance features become possible as a result?

RB: Regarding the native 18‑bit resolution and long integration window: the coulomb‑counter (CC) ADC integrates for ~250 ms per cycle, with selectable input ranges of ±50/±100/±200 mV across the sense shunt; results land in CCR[L/M/H] and raise a completion IRQ. This is the basis for low‑noise charge-throughput measurement and synchronized analytics. Error and linearity can be budgeted: the EC table shows 18‑bit CC resolution, INL ~27 LSB, and range‑dependent µV‑level error (e.g., ±25 µV in the ±50 mV range), plus a programmable dead‑zone threshold for direction detection—so the math can be made deterministic. For cross‑domain sync, a firmware/RTL option lets the CC “integration complete” event trigger the voltage ADC sequencer, tightly aligning V and I snapshots for impedance/OCV‑coupled analytics.

Two main functionalities that depend on this accuracy are State of Charge (SOC) and State of Health (SOH). First, for SOC accuracy, here is where the extra bits show up:

  1. Lower quantization and drift in coulomb counting: with 18‑bit integration over 250 ms, the charge quantization step is orders of magnitude smaller than typical load perturbations. Combined with the range‑dependent ±25–100 µV error bands, this reduces cycle‑to‑cycle SOC drift and tightens coulombic-efficiency computation—especially at low currents (standby, tail‑charge), where coarse ADCs mis‑estimate.
  2. Cleaner “merge” of model‑based and measurement‑based SOC: the synchronized CC‑→‑voltage trigger lets you fuse dQ/dV features with the integrated current over the same window, improving EKF/UKF observability when OCV slopes flatten near the top of charge. Practically: fewer recalibration waypoints and tighter SOC confidence bounds across temperature.
  3. Robust direction detection at very small currents: the dead‑zone and direction bits (e.g., cc_dir) are asserted based on CC codes exceeding a programmable threshold; you can reliably switch charge/discharge logic around near‑zero crossings without chattering. That matters for taper‑charge and micro‑leak checks.
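The coulomb-counting arithmetic in point 1 can be sketched numerically. The ±50 mV range, 18-bit resolution, and ~250 ms window come from the interview; the 100 µΩ shunt is a hypothetical value chosen only to make the numbers concrete:

```python
# Range, resolution, and window are from the interview; the shunt value is
# a hypothetical choice made only to produce concrete numbers.
FULLSCALE_mV = 50.0      # selected CC input range (+/-50 mV)
N_BITS = 18              # CC resolution
T_INT_s = 0.250          # ~250 ms integration window
R_SHUNT_ohm = 100e-6     # hypothetical 100 micro-ohm sense shunt

LSB_mV = 2.0 * FULLSCALE_mV / (1 << N_BITS)   # bipolar range spans 100 mV

def cc_code_to_charge_C(code):
    """Signed CC code -> average sense voltage -> average current ->
    charge (coulombs) accumulated over one integration window."""
    v_sense_V = code * LSB_mV / 1000.0
    i_avg_A = v_sense_V / R_SHUNT_ohm
    return i_avg_A * T_INT_s
```

Under these assumptions one LSB is ~0.38 µV across the shunt, i.e., ~3.8 mA of average current or about 0.95 mC per window, which is the sense in which the quantization step sits far below typical load perturbations.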

For SOH + predictive maintenance, this resolution enables capacity‑fade trending with confidence, specifically:

  • Cycle‑level coulombic efficiency becomes statistically meaningful, not noise‑dominated—letting you detect early deviations from the fleet baseline.
  • Impedance‑based health scoring (per cell and stack): enabling impedance mode in CC (aligned with voltage sampling) gives snapshots each conversion period; tracking ΔR growth vs. temperature and SOC identifies aging cells and connector/cable degradation proactively.
  • Micro‑leakage & parasitic load detection: with µV‑level CC error windows and long integration, you can flag slow, persistent current draw (sleep paths, corrosion) that would be invisible to 12–14‑bit chains—preventing “vanishing capacity” events in ESS and industrial packs.
  • Adaptive balancing + charge policy: fusing accurate dQ with cell ΔV allows balancing decisions based on energy imbalance, not just voltage spread. That reduces balancing energy, speeds convergence, and lowers thermal stress on weak cells.
  • Early anomaly signatures: the combination of high‑resolution CC and triggered voltage sequences yields load‑signature libraries (step response, ripple statistics) that expose incipient IR jumps or contact resistance growth—feeding an anomaly detector before safety limits trip.
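The first bullet, coulombic efficiency that is statistically meaningful rather than noise-dominated, amounts to a per-cycle ratio plus a tight deviation band. A hedged sketch (the baseline and noise-band values are illustrative, not fleet data):

```python
def coulombic_efficiency(q_discharge_C, q_charge_C):
    """Per-cycle CE from integrated CC charge in each direction."""
    return q_discharge_C / q_charge_C

def fade_flags(ce_history, baseline=0.999, noise_band=0.0005):
    """Indices of cycles whose CE drops below the fleet baseline by more
    than the measurement-noise band. With a low-noise 18-bit CC the band
    can be set tight, so real fade separates from noise early."""
    return [i for i, ce in enumerate(ce_history) if ce < baseline - noise_band]
```

For example, `fade_flags([0.9991, 0.9989, 0.9980])` flags only the third cycle; with a coarse ADC the band would have to be wide enough that such a deviation stays hidden.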

ELE Times: Even with high-accuracy ADCs, on-chip buck converters, and advanced fault-response logic, the chip is designed to minimize quiescent current without compromising monitoring capability. What design strategies or architectural decisions enabled such low power consumption?

RB: We achieved very low standby power through four key strategies. First, we defined true power states that completely shut down high-consumption circuitry, such as switching regulators, charge pumps, high-speed clocks, and data converters. Second, wake-up behavior is fully event-driven rather than periodically active. Third, the always-on control logic is designed for ultra-low leakage operation. Finally, voltage references and regulators are aggressively gated, so precision analog blocks are only enabled when they are actively needed. Deeper low-power modes further reduce consumption by selectively disabling additional domains, enabling progressively lower leakage states for long-term storage or shipping scenarios.
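The staged power states described above can be pictured as a table of gated blocks per state, with deeper states disabling progressively more domains. This is a conceptual sketch only; the state and block names are invented for illustration and do not correspond to the chip's actual registers:

```python
from enum import Enum, auto

class PowerState(Enum):
    ACTIVE = auto()
    STANDBY = auto()
    SHIP = auto()

# Blocks gated off in each state; deeper states disable progressively more
# domains. Names are illustrative only, not the chip's actual blocks.
GATED = {
    PowerState.ACTIVE:  frozenset(),
    PowerState.STANDBY: frozenset({"buck", "charge_pump", "hs_clock", "adc", "vref"}),
    PowerState.SHIP:    frozenset({"buck", "charge_pump", "hs_clock", "adc", "vref", "comms"}),
}

def block_enabled(state: PowerState, block: str) -> bool:
    """A block draws power only if it is not gated in the current state."""
    return block not in GATED[state]
```

The monotonic nesting (everything gated in STANDBY is also gated in SHIP) mirrors the "progressively lower leakage states" point: each deeper mode strictly extends the previous one's gating.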

ELE Times: You’ve emphasized the role of embedded microcontrollers in both chip functionality and verification. Can you explain how MCU-driven fault handling—covering short circuits, overcurrent, open-wire detection, and more—elevates functional safety toward ASIL D compliance?

RB: In our current chip, safety is layered so hazards are stopped in hardware while an embedded MCU and state machines deliver the diagnostics and control that raise integrity toward ASIL D. Fast analog protection shuts off high‑side FETs on short‑circuit/overcurrent and keeps low‑frequency comparators active even in low‑power modes, while event‑driven wake and staged regulator control ensure deterministic, traceable transitions to safe states.

The MCU/FSM layer logs faults, latches status, applies masks, and cross‑checks control vs. feedback, with counters providing bounded detection latency and reliable classification—including near‑zero current direction via a programmable dead‑zone. Communication paths use optional CRC to guard commands/telemetry, and a dedicated runaway mechanism forces NORMAL→SHIP if software misbehaves, guaranteeing a known safe state. Together, these mechanisms deliver immediate hazard removal, high diagnostic coverage of single‑point/latent faults, auditable evidence, and controlled recovery—providing the system‑level building blocks needed to argue ISO 26262 compliance up to ASIL D.
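The CRC guarding of commands and telemetry mentioned above can be illustrated with a generic 16-bit CRC. The interview does not specify the polynomial or seed used on-chip; CRC-16-CCITT (poly 0x1021, init 0xFFFF) is used here purely as a common example:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16-CCITT (poly 0x1021, init 0xFFFF). The actual
    polynomial/seed used on the BMIC is not stated in the interview;
    this variant is chosen only as a widely known example."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def frame_ok(payload: bytes, rx_crc: int) -> bool:
    """Accept a command/telemetry frame only if its CRC matches."""
    return crc16_ccitt(payload) == rx_crc
```

A corrupted frame fails the check and can be dropped or retried, which is the "guard commands/telemetry" role: the MCU never acts on a payload whose integrity cannot be demonstrated.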

ELE Times: Stackable BMICs are becoming a major focus for high-cell-count systems. What challenges arise when daisy-chaining devices for applications like e-bikes, industrial storage, or large EV packs, and how is your team addressing communication, synchronization, and safety requirements?

RB: Stacking BMICs for high‑cell‑count packs introduces tough problems—EMI and large common‑mode swings on long harnesses, chain length/topology limits, tighter protocol timing at higher baud rates, coherent cross‑device sampling, and ASIL D‑level diagnostics plus safe‑state behavior under hot‑plug and sleep/wake. We address these with hardened links (transformer for tens of meters, capacitive for short hops), controlled slew and comparator front‑ends, ring/loop redundancy, and ASIL D‑capable comm bridges that add autonomous wake; end‑to‑end integrity uses 16/32‑bit CRC, timeouts, overflow guards, and memory CRC. For synchronization, we enforce true simultaneous sampling, global triggers, and evaluate PTP‑style timing, using translator ICs to coordinate mixed chains.

ELE Times: You have deep experience building behavioral models using wreal and Verilog-AMS. How does robust modeling influence system definition, mixed-mode verification, and ultimately silicon success for high-voltage BMICs?

RB: Robust wreal/Verilog‑AMS modeling is a force multiplier across mixed-signal designs. It clarifies system definition (pin‑accurate behavioral blocks with explicit supplies, bias ranges, and built‑in checks), accelerates mixed‑mode verification (SV/UVM testbenches that reuse the same stimuli in DMS and AMS, with proxy/bridge handshakes for analog ramp/settling), and de‑risks silicon by catching integration and safety issues early (SOA/EMC assumptions, open‑wire/CRC paths, power‑state transitions) while keeping sims fast enough for coverage.

Concretely, pin‑accurate DMS/RNM models and standardized generators enforce the right interfaces and bias/input status (“supplyOK”, “biasOK”), reducing schematic/model drift. SV testbenches drive identical sequences into RNM and AMS configs for one‑bench reuse, so timing‑critical behaviors are verified deterministically. RNM delivers order‑of‑magnitude speed‑ups (e.g., ~60× seen in internal comparisons) to reach coverage across modes. Model‑vs‑schematic flows quantify correlation (minutes vs. hours) and expose regressions when analog blocks change. Embedding these practices in our methodology and testbenches translates into earlier bug discovery, tighter spec alignment, and first‑time‑right outcomes.

ELE Times: Your work spans diverse categories—from power tools and drones to renewable-energy systems and electric mobility. How do application-specific requirements shape decisions around cell balancing, current sensing, and protection features?

RB: Across segments, application realities drive our choices: power tools and drones favor compact BOMs and fast transients, so 50 mA internal balancing with brief dwell and measurement settling, tight short‑circuit latency, and coulomb‑counter averaging for SoC works well; e‑bikes/LEV typically stay at 50 mA but require separate charge vs. discharge thresholds (regen vs. propulsion), longer DOC windows, and microsecond‑class SCD cutoffs to satisfy controller safety timing. Industrial/renewables often need scheduled balancing and external FET paths beyond 50 mA, plus deep diagnostics (averaging, CRC, open‑wire) across daisy‑chained stacks, while EV/high‑voltage packs push toward ASIL D architectures with pack monitors, redundant current channels, contactor drivers, and ring communications. Current sensing is chosen to match the environment—low‑side for cost‑sensitive packs, HV differential with isolation checks in EV/ESS—while an 18‑bit ΔΣ coulomb counter and near‑zero dead‑zone logic preserve direction fidelity. Protection consistently blends fast analog comparators for immediate energy removal with MCU‑logged recovery and robust comms (CRC, watchdogs), so each market gets the right balance of performance, safety, and serviceability.

ELE Times: As battery management and gauges (BMG) evolve toward higher voltages, embedded intelligence, and greater integration, what do you see as the next major leap in BMIC design? Where are the biggest opportunities for innovation over the next five years?

RB: This is an exciting topic. Based on our roadmaps and the work we have been doing, the next major leap in BMIC design is a shift from “cell‑monitor ICs” to a smart, safety‑qualified pack platform—a Battery Junction Box–centric architecture with edge intelligence, open high‑speed wired communications, and deep diagnostics that run in drive and park. Here’s where I believe the biggest opportunities lie over the next five years:

  • Pack‑centric integration: the Smart Battery Junction Box
  • Communications: from proprietary chains to open, ring‑capable PHY
  • Metrology: precision sensing + edge analytics
  • Functional safety that persists in sleep/park
  • Power: HV buck integration becomes table stakes
  • Balancing: thermal‑aware schedulers and scalable currents
  • Cybersecurity & configuration integrity for packs
  • Verification‑driven design: models that shorten the loop

The post Engineering the Future of High-Voltage Battery Management: Rohit Bhan on BMIC Innovation appeared first on ELE Times.

Anritsu Launches New RF Hardware Option, Supporting 6G FR3 

ELE Times - 3 hours 28 min ago

Anritsu Corporation released a new RF hardware option for its Radio Communication Test Station MT8000A to support the key FR3 (Frequency Range 3) frequency band for next‑gen 6G mobile systems. With this release, the MT8000A platform now supports evolving communications technologies, covering R&D through to final commercial deployment of 4G/5G and future 6G/FR3 devices.

Anritsu will showcase the new solution in its booth at MWC Barcelona 2026 (Mobile World Congress), the world’s largest mobile communications exhibition, held in Barcelona, Spain, from March 2 to 5, 2026.

Since 6G is expected to deliver ultra-high speed, ultra-low latency, and safety and reliability far surpassing 5G, international standardisation efforts are accelerating worldwide toward commercial 6G release.

The key high‑capacity data transmission and wide-coverage features of 6G require using the FR3 frequency band (7.125 to 24.25 GHz), and the Lower FR3 band range up to 16 GHz, which extends from the FR1 (7.125 GHz) band, is already on the agenda for the 2027 World Radiocommunication Conference (WRC-27) discussions.

By leveraging its long expertise in wireless measurement, Anritsu’s MT8000A test platform leads the industry with this highly scalable new RF hardware option supporting the Lower FR3 band, and covering both current and next‑generation technologies. Future 6G functions will be supported by seamless software upgrades, helping speed development and release of new 6G devices.

Development Background

The FR3 frequency band is increasingly important in achieving practical 6G devices, meaning current 4G/5G test instruments (supporting FR1 and FR2) require hardware upgrades.

Additionally, dedicated FR3 RF tests are required because FR3 and conventional FR1/FR2 bands have different RF-related connectivity and communication quality features.

Furthermore, FR3 test instruments will be essential for both 6G protocol tests to validate network connectivity, and for functional tests to comprehensively evaluate service/application performance.

These factors are driving demand for a highly expandable, multifunctional, and high‑performance test platform like the MT8000A, covering both existing 4G/5G devices and next‑generation multimode 4G/5G/6G devices.

Product Overview and Features

Radio Communication Test Station MT8000A

The current MT8000A test platform supports a wide range of 3GPP-based applications, including RF, protocol, and functional tests for developing 4G/5G devices.

By adding this new industry-beating RF hardware option supporting 6G/Lower FR3 bands, Anritsu’s MT8000A platform assures long‑term, cost-effective use for developing future 6G/FR3 devices.

Anritsu’s continuing support for future 6G/FR3 test functions using MT8000A software upgrades will advance the evolution of next‑generation communications and help achieve a useful, safe, and stable network‑connected society.

The post Anritsu Launches New RF Hardware Option, Supporting 6G FR3  appeared first on ELE Times.

Mariia Zuieva: "KPI fosters the development of professional competence and character"

News - 3 hours 52 min ago
KP Information, Wed, 02/11/2026 - 09:00

The state academic scholarships named after the Heroes of the Heavenly Hundred honor the feat of the youngest Heroes of the Heavenly Hundred. Each scholarship bears the name of one of the heroes and is awarded to students of the specialty in which that person studied. "The Heroes of the Heavenly Hundred scholarship is at once support for talented young people and a tribute to our recent history. We thank the universities for a transparent selection process and guarantee proper funding for the program. I believe these students will carry the values of freedom and dignity into their professions and communities," noted Ukraine's Minister of Education and Science Oksen Lisovyi.

Violumas launches new 255nm, 265nm and 275nm LEDs in mid-power, high-power, and high-density packages

Semiconductor today - Tue, 02/10/2026 - 23:35
Violumas Inc of Fremont, CA, USA, a provider of high-power UV LED solutions and inventor of 3-PAD LED technology, has released its next-generation 255nm, 265nm and 275nm LEDs in both SMD and COB configurations...

555 VCO revisited

EDN Network - Tue, 02/10/2026 - 16:40

It is well known that a 555 timer in astable mode can be frequency modulated by applying a control voltage (CV) to pin 5. The schematic on the left of Figure 1 shows this classic 555 VCO. 

Figure 1 Classic VCO (left) and new 555 VCO variant (right), where Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

Modulating pin 5 has some severe drawbacks: The control voltage (CV) must be significantly > 0 V and < V+, otherwise the oscillation stops.

Wow the engineering world with your unique design: Design Ideas Submission Guide

In contrast to a typical VCO, which outputs 0 Hz or Fmin @ CV=0 and reaches Fmax @ CVmax, the CV behavior of the classic 555 VCO is inverted and nonlinear. This is due to the modulation of the upper and lower Schmitt trigger thresholds, and pulse width changes with frequency. The useful tuning range Fmax/Fmin is limited to about 3.
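The inverted, nonlinear behavior follows directly from the threshold modulation: pin 5 sets the upper Schmitt threshold to CV and the lower to CV/2, so the charge time stretches as CV approaches V+. A quick sketch of the classic circuit's frequency versus CV (component values are illustrative, not from the article):

```python
import math

def f_classic_555(cv, vs=12.0, ra=10e3, rb=10e3, c=10e-9):
    """Astable frequency with pin 5 held at 'cv': the upper threshold
    becomes cv and the lower cv/2, so the charge time stretches as
    cv approaches vs. Component values are illustrative only."""
    t_hi = (ra + rb) * c * math.log((vs - cv / 2) / (vs - cv))  # C charging
    t_lo = rb * c * math.log(2.0)                               # C discharging
    return 1.0 / (t_hi + t_lo)
```

Evaluating this shows frequency falling as CV rises, e.g., f(8 V) < f(4 V) at V+ = 12 V (the inverted behavior), while CV = 2·V+/3 = 8 V recovers the familiar astable formula f ≈ 1.44/((RA + 2RB)·C).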

Stephen Woodward’s “Can a free-running LMC555 VCO discharge its timing cap to zero?” shows some clever improvements: linear-in-pitch CV behavior and an extended 3-octave range, but the circuit still suffers from other “pin 5” drawbacks.

The schematic on the right of Figure 1 shows a new variant of the 555 VCO. Pin 5 is not modulated, which leads to a constant 50% pulse width, independent of frequency.

A rising CV results in a higher frequency. CV=0 is allowed and generates Fmin.

The useful tuning range is >10 and can reach ≥100, with some caveats noted below.

Although it uses only 2 resistors and 1 capacitor, like the classic 555 astable configuration, it is a bit harder to understand. The basic technique of adding a fraction of the square-wave output voltage to the triangle voltage across C, which raises the frequency, is described in my recent Design Idea (DI), “Wide-range tunable RC Schmitt trigger oscillator.”

There, I use a potentiometer to add a fraction of the output to the capacitor voltage.

In the new 555 VCO variant, the potentiometer voltage is replaced by an external CV, which is chopped by the 555 discharge output (pin 7).

When CV is 0, the voltage on the right side of C3 is also 0, and the VCO outputs Fmin. With rising CV, a square-wave voltage between 0 V (pin 7 discharging) and CV (pin 7 open) appears on the right side of C3. Similar to my above-mentioned DI, this square-wave voltage must always be smaller than the hysteresis voltage (555: Vh = V+/3), otherwise Fmax goes towards infinity. That is why you must watch your CVmax if you want to reach high Fmax/Fmin ratios.
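The CVmax constraint can be captured as a one-line bound: the chopped square wave injected through C3 must stay below the 555 hysteresis Vh = V+/3. A sketch (the 0.9 design margin and unity attenuation are assumptions, not values from the circuit):

```python
def cv_max(v_supply, attenuation=1.0, margin=0.9):
    """Upper bound on CV: the square wave injected through C3 must stay
    below the 555 hysteresis Vh = V+/3, or Fmax runs away. 'attenuation'
    stands for whatever divider sits between CV and the capacitor node,
    and the 0.9 margin is an assumed design choice, not from the article."""
    vh = v_supply / 3.0
    return margin * vh / attenuation
```

At V+ = 12 V this suggests keeping CV below roughly 3.6 V; the 0 V to 3.9 V sweep in Figure 2 pushes right up toward the runaway region.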

Figure 2 shows a QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

Figure 2 QSPICE simulation of frequency with respect to CV from 0 V to 3.9 V in 100 mV steps.

A prototype with component values from Figure 1 and V+ = 12 V was breadboarded, and a rough frequency-versus-CV curve was measured and marked with a red dot in the QSPICE simulation in Figure 2.

Figure 3 shows a scope screenshot for Fmin. 

Figure 3 A scope screenshot for Fmin, CH1 (yellow) output voltage, CH2 (magenta) CV=0.

In conclusion, the new 555 VCO circuit overcomes some drawbacks of the classic version, like limited CV range, inverted CV/Hz behavior, and changing pulse width, without using more components. Unfortunately, it still shows nonlinear CV/Hz behavior. Maybe using a closed loop, with an opamp and a simple charge pump, can tame it by raising the chip count to 2.

Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.

Related Content

The post 555 VCO revisited appeared first on EDN.

Infineon’s 2026 edition of GaN Insights eBook highlights adoption in power electronics

Semiconductor today - Tue, 02/10/2026 - 15:48
Infineon Technologies AG of Munich, Germany has published the 2026 edition of its annual GaN Insights, focusing on gallium nitride (GaN) technology, its applications and future prospects, as the increasing adoption of GaN power solutions is driving a significant transformation in the power electronics industry...

Simplifying inductive wireless charging

EDN Network - Tue, 02/10/2026 - 15:00

What do e-bikes and laptops have in common? Both can be wirelessly charged by induction.

E-bikes and laptops both use lithium-ion batteries for power, chosen for their light weight, high energy density, and long lifespan. Both systems can be wirelessly recharged via the wireless power transfer (WPT) method that uses electromagnetic induction to transfer energy to the battery without cables.

For e-bikes, there is a wireless charging pad or inductive tile that e-bikes park on to transfer power. For induction charging, one coil is integrated into the static pad or tile (transmitter coil) and the other (the receiver coil) is situated on the bike, often in the kickstand. The charging pad’s coil, fed by AC, creates a magnetic field, which in turn produces current in the bike’s coil. This AC is then converted to DC to charge the bike’s battery.

The principle is the same for laptops, as well as a broad range of consumer and industrial devices, including small robots, drones, power tools, robotic vacuum cleaners, wireless routers, and lawnmowers.

Microchip provides a 300-W electromagnetic inductive wireless electric power transmission reference design that can be incorporated into any type of low-power consumer or industrial system for wireless charging (see block diagram in Figure 1). It consists of a Microchip WP300TX01 power transmitter (PTx) and Microchip WP300RX01 power receiver (PRx). The design operates with efficiency of over 90% at 300-W power and a Z-distance (the distance between pairing coils) of 5−10 mm.
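The 90% efficiency figure translates directly into a thermal budget for the pad and receiver, via the trivial but useful relation P_loss = P_out·(1/η − 1):

```python
def dissipated_power_W(p_out_W, efficiency):
    """Input power minus output power: the heat the transmitter/receiver
    pair must shed at a given load and efficiency."""
    return p_out_W / efficiency - p_out_W
```

At 300 W out and 90% efficiency, roughly 33 W ends up as heat across the pair, which is why the design includes PCB/coil temperature monitoring.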

Figure 1: Block diagram of the 300-W inductive power transfer reference design (Source: Microchip Technology Inc.)

The transmitter (Figure 2) is nominally powered from a 24-V rail and the receiver regulates the output voltage to nominal 24 V.

Figure 2: Block diagram of the power transmitter (Source: Microchip Technology Inc.)

The design’s operating DC input voltage range is 11 V to 37 V, with input overvoltage and undervoltage protection, as well as overcurrent and thermal protection via a PCB/coil temperature-monitoring functionality. Maximum receiver output current is 8.5 A, and the receiver output voltage is adjustable from 12 V to 36 V.
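The published limits lend themselves to a simple range check, e.g., in host-side configuration tooling. A sketch (the dictionary layout and function are mine; the numbers come from the paragraph above, and real protection on the parts is implemented in hardware):

```python
# Published limits from the reference-design summary; the layout is mine,
# for host-side sanity checking only (the WP300TX01/WP300RX01 enforce
# protection in hardware).
LIMITS = {
    "vin_min_V": 11.0, "vin_max_V": 37.0,
    "vout_min_V": 12.0, "vout_max_V": 36.0,
    "iout_max_A": 8.5,
}

def operating_point_ok(vin_V, vout_V, iout_A):
    """True if a requested operating point falls inside the published range."""
    return (LIMITS["vin_min_V"] <= vin_V <= LIMITS["vin_max_V"]
            and LIMITS["vout_min_V"] <= vout_V <= LIMITS["vout_max_V"]
            and 0.0 <= iout_A <= LIMITS["iout_max_A"])
```

For example, 24 V in, 24 V out at 5 A passes, while a 10 V input or a 9 A load request would be rejected before ever reaching the hardware.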

The design implements a proprietary Microchip protocol, developed over years of research and development and covered by patents granted in the U.S., that ensures reliable power transfer with high efficiency. The system also implements foreign object detection (FOD), a safety measure that avoids hazardous situations should a metallic object find its way into the vicinity of the charging field. Once the FOD detects a metallic object near the charging zone, where the magnetic field is generated, it stops the power transfer.

The reference design incorporates this functionality on the main coil, ceasing power from the transmitter until the object is removed. FOD is performed by stopping four PWM drive signals, with four being the maximum to avoid stopping the charging entirely.

This reference design also detects some NFC/RFID cards and tags.

Transmitter and receiver

The WP300TX01 is a fixed-function device designed for wireless power transfer, as is the WP300RX01 chip, designed for receiving wireless power. The two are paired together for a maximum power transfer of 300 W.

The user can configure the input’s under- and overvoltage, as well as the input’s overcurrent and overpower. There are three outputs for general-purpose LEDs and multiple OLED screens, as well as five inputs for interface switches. The design enables OLED display pages to allow viewing and monitoring of live system parameters, and as with the input parameters, the OLED panel’s settings can be configured by the user.

The WP300RX01 device operates from 4.8 V to 5.2 V, in an ambient temperature between −40°C and 85°C. Like with the transmitter controller, this device provides overvoltage, undervoltage, overcurrent, overpower, and overtemperature protection, with added qualification of AEC-Q100 REVG Grade 3 (−40°C to 85°C), which refers to a device’s ability to function reliably within this ambient temperature range.

The reference design simplifies and accelerates WPT system design and eliminates the need to go through the certification process, as it has already been accredited with the CE certification, which signifies that a product meets all the necessary requirements of applicable EU directives and regulations.

Types of wireless charging

There are different types of wireless charging, including resonant, inductive, electric field coupling, and RF. Inductive charging for smartphones and other lower-power electronic devices is guided by the Qi open standard, introduced by the Wireless Power Consortium in 2010, to create a universal, interoperable charging concept for electronic devices.

The Qi open standard promotes interoperability, thus avoiding multiple chargers and cables, as well as market fragmentation into different proprietary solutions. Many manufacturers have adopted this standard in their products, including tech giants like Apple and Samsung.

Introduced in 2023, Qi 2.0 raises charging for mobile devices to 15 W, certified for interoperability and safety. Qi 2.0 devices feature magnetic attachment technology, which aligns devices and chargers perfectly for improved energy efficiency, faster and safer charging, and ease of use. Qi 2.x includes the Magnetic Power Profile (MPP) with an added operating frequency of 360 kHz. With MPP, a magnetic ring ensures the receiver’s coil aligns perfectly with the charger’s coil, thus improving power transfer and reducing heat.

Qi 2.2, released in June 2025, enables 25-W charging, building on the convenience and energy efficiency of Qi while shortening wireless charging time.

Simultaneous charging of two 15-W Qi receivers

In addition to its 300-W electromagnetic inductive wireless electric power transmission reference design reviewed earlier in this article, Microchip also offers the Qi2 dual-pad wireless power transmitter reference design. This dual-pad, multi-coil wireless power transmitter reference design enables simultaneous charging of two 15-W Qi receivers (see Figure 3).

At the heart of the design is a Microchip dsPIC33 digital-signal controller (DSC) that simultaneously controls both charging pads. The dual-pad design is compatible with the Qi 1.3 and Qi 2.x standards, as well as MPP and Extended Power Profile.

The hardware is reconfigurable and supports most transmitter topologies. In addition to MPP, it supports the Baseline Power Profile for receivers up to 5 W.

Figure 3: Block diagram of the Qi 2.0 dual-pad wireless power transmitter reference design (Source: Microchip Technology Inc.)

The MPP charging pad initiates charging at a 12-kHz inverter switching frequency but shifts to 360 kHz when connected to an MPP PRx. The dsPIC33CK DSC executes two charger instances. Supporting the different protocols requires real-time decisions based on the charging pad and receiver type.

The software-based design provides a high level of flexibility to optimize key features of the wireless power system, such as efficiency, charging area, Z-distance, and FOD. To support applications with a wide input voltage range, each PTx includes a front-end four-switch buck-boost (4SWBB) converter for power regulation. The 4SWBB connects to a full-bridge inverter for driving the resonant tank. On the MPP charger, additional resonant capacitor switch networks enable a higher resonant frequency. The MP-A13 charger implements similar coil-select circuitry that energizes the coil with the strongest signal, enabling a wider placement area.
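The role of those resonant capacitor switch networks follows from the series-tank relationship f0 = 1 / (2π√(LC)): for a fixed coil inductance, switching in a smaller capacitance raises the resonant frequency toward the 360-kHz MPP operating point. The sketch below illustrates the math only; the coil inductance and derived capacitor values are assumptions, not Microchip's actual tank components.

```python
import math

# Series resonant tank: f0 = 1 / (2*pi*sqrt(L*C)).
# L_COIL is an assumed transmitter coil value, purely illustrative.

def resonant_freq_hz(l_henry, c_farad):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L_COIL = 10e-6  # 10 uH coil (assumption)

# Solve C for two target frequencies: a ~100 kHz band and 360 kHz MPP
c_low = 1.0 / ((2.0 * math.pi * 100e3) ** 2 * L_COIL)
c_high = 1.0 / ((2.0 * math.pi * 360e3) ** 2 * L_COIL)

print(f"{resonant_freq_hz(L_COIL, c_low) / 1e3:.0f} kHz with C = {c_low * 1e9:.0f} nF")
print(f"{resonant_freq_hz(L_COIL, c_high) / 1e3:.0f} kHz with C = {c_high * 1e9:.0f} nF")
```

With these assumed values, moving from roughly 253 nF to roughly 20 nF shifts the tank from 100 kHz to 360 kHz, which is the effect the capacitor switch network achieves in hardware.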

This reference design is automotive-grade and includes CryptoAuthentication, hardware-based (on-chip) secure storage for cryptographic keys, to protect communication and data handling. In addition, the design includes a Trust Anchor TA100/TA010 secure storage subsystem. The dsPIC33CK device architecture also allows the integration of additional software stacks, such as an automotive CAN stack or an NFC stack for tag detection.

It’s worth noting that the variable-input voltage, fixed-frequency power control topology implemented in the transmitter is ideal for systems that must meet stringent electromagnetic-interference and electromagnetic-compatibility requirements.

In addition to all these features, including FOD through calibrated power loss, the dual-charging reference design also provides measured quality-factor/resonant-frequency and ping open-air object detection; multiple fast-charge implementations, including for Apple and Samsung devices; and several receiver modulation types, such as AC capacitive and AC/DC resistive. For added safety, the design includes thermal power foldback, thermal shutdown, and overpower protection.
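The calibrated power-loss FOD mentioned above rests on a simple principle: the transmitter compares the power it delivers against the power the receiver reports, subtracts known (calibrated) coil and electronics losses, and trips if the remainder is too large, since a foreign metal object absorbs power that nothing else accounts for. The sketch below shows the principle only; the calibration offset and threshold are illustrative assumptions, not Microchip's values.

```python
# Minimal power-loss FOD sketch. Values are hypothetical illustrations.

def foreign_object_detected(p_tx_w, p_rx_reported_w,
                            calibrated_loss_w=0.4, threshold_w=0.35):
    """True if unexplained power loss exceeds the calibrated threshold."""
    # Power unaccounted for after subtracting known system losses
    unexplained_w = (p_tx_w - p_rx_reported_w) - calibrated_loss_w
    return unexplained_w > threshold_w

print(foreign_object_detected(5.6, 5.0))  # ~0.2 W unexplained: no trip
print(foreign_object_detected(6.5, 5.0))  # ~1.1 W unexplained: trip
```

Real implementations calibrate the expected loss across the power range and combine this with the Q-factor and ping-based checks the design also provides.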

A UART-USB communication interface enables reporting and debugging of data packets, and LEDs indicate system status and coil selection. A reset switch and temperature-sensor inputs provide additional functionality.

With the continuously evolving standards for Qi and unique new applications requiring higher-wattage wireless charging, there is plenty of opportunity for innovation and growth in the wireless charging space. Microchip experts can provide you with the right guidance for seamlessly bringing your wireless charging solution to market.

The post Simplifying inductive wireless charging appeared first on EDN.

Rector’s report for 2025

News - Tue, 02/10/2026 - 15:00

Report of the Rector of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute,” Anatolii Anatoliiovych Melnychenko, for 2025 on the fulfillment of Contract No. I-44 of 18 July 2024

Navitas unveils 10kW DC–DC platform delivering 98.5% efficiency for 800VDC next-gen AI data centers

Semiconductor today - Tue, 02/10/2026 - 11:08
Navitas Semiconductor Corp of Torrance, CA, USA — which provides GaNFast gallium nitride (GaN) and GeneSiC silicon carbide (SiC) power semiconductors — has unveiled a 10kW DC–DC power platform delivering up to 98.5% peak efficiency and 1MHz switching frequency, enabling what is claimed to be unprecedented power density to support the rapid, large-scale expansion of next-generation AI data centers...

Anritsu Achieves Skylo Certification to Accelerate Global Expansion for NTNs

ELE Times - Tue, 02/10/2026 - 09:01

ANRITSU CORPORATION announced the expansion of its collaboration with Skylo Technologies with the successful certification of Anritsu’s RF and protocol test cases for Skylo’s non-terrestrial network (NTN) specifications. This milestone completes a comprehensive suite of Skylo-approved RF and protocol test cases, enabling narrowband IoT devices to operate seamlessly over Skylo’s NTN in alignment with 3GPP Release 17.

The momentum behind satellite-to-ground connectivity continues to accelerate as mobile operators and enterprises seek to extend reliable coverage across remote regions, industrial sites, and maritime environments. Under these circumstances, Skylo’s NTN network brings efficient power, low cost, and highly resilient NB-IoT capabilities to industries such as agriculture, logistics, maritime, and mining, enabling remote sensing, asset tracking, and safety-critical applications where a terrestrial network is out of reach.

Using Anritsu’s ME7873NR and ME7834NR platforms, now certified under the Skylo Carrier Acceptance Test program, device manufacturers will be able to validate NB-IoT NTN chipsets, modules, and devices for Skylo’s network with a fully automated and repeatable test environment. These solutions integrate 3GPP 4G and 5G protocols with NTN-specific parameters, ensuring accurate simulation of live network scenarios while reducing test time and accelerating device readiness.

Anritsu’s test solutions provide end-to-end validation for terrestrial and non-terrestrial networks within a single environment, enabling realistic emulation of satellite channel conditions and orbital dynamics for comprehensive verification of device performance. This level of testing rigour ensures interoperability, reliability, and high performance for dual-mode NTN devices destined for deployment across global markets.

Andrew Nuttall, Chief Technology Officer and Co-founder at Skylo Technologies, said: “We’re excited to join forces with Anritsu to accelerate innovation in non-terrestrial networks. This collaboration strengthens our shared commitment to delivering reliable, high-performance connectivity solutions for a rapidly evolving global market. Together, we’re enabling the next generation of devices and services that will redefine what’s possible in satellite-enabled connectivity.”

Daizaburo Yokoo, General Manager of Anritsu’s Mobile Solutions Division, said: “Partnering with Skylo represents an exciting step forward in advancing non-terrestrial network technology. This collaboration underscores our shared commitment to drive interoperability and set new standards for the future of global communications.”

Skylo operates on 3GPP Release 17 specifications and has developed additional “Standards Plus” extensions to enhance performance and interoperability across satellite networks. These Skylo-specified enhancements ensure that devices certified through the Skylo CAT program deliver robust connectivity and a seamless user experience across its expanding NTN footprint.

In partnership with Skylo, Anritsu remains committed to advancing 5G device development, enabling seamless global connectivity for data, voice, and messaging.

The post Anritsu Achieves Skylo Certification to Accelerate Global Expansion for NTNs appeared first on ELE Times.

Arrow Electronics Initiates Support for Next-Gen Vehicle E/E Architecture

ELE Times - Tue, 02/10/2026 - 08:31

Arrow Electronics has launched a strategic initiative and research hub to support next-generation vehicle electrical and electronic (E/E) architecture.

The available resources provide automotive manufacturers and tier-1 suppliers with the engineering expertise and supply chain stability required to navigate the industry’s shift toward software-defined vehicles.

As consumer and commercial vehicles evolve into complex, intelligent platforms, the traditional method of adding a separate computer for every new electronic feature is no longer sustainable. E/E architecture represents a complete overhaul of the “nervous system” within modern vehicles.

This fundamental shift moves away from hundreds of individual components toward a more centralised system where powerful computing hubs manage multiple functions. This transition can streamline and harmonise systems and operations while reducing the internal wiring of a car by up to 20 per cent, leading to vehicles that are lighter, more energy-efficient and easier to update via software throughout the vehicle’s lifecycle.

Aggregating Hardware, Software and Supply Chain Expertise

Arrow is a central solution aggregator for E/E architecture, bridging the gap between individual components and complete, integrated systems. Arrow’s portfolio of design engineering services includes a dedicated team of automotive experts who provide cross-technology support in both semiconductor and IP&E (interconnect, passive and electromechanical components) sectors.

This technical depth is matched by a vast global inventory and robust supply chain services that help ensure confidence through multisourced, traceable component strategies and proactive obsolescence planning so that automakers have the right components in hand when they need them.

In addition to hardware, Arrow has significantly expanded its transportation software footprint in recent years to include expertise in AUTOSAR, functional safety standards and automotive cybersecurity.

Strengthening the Automotive Ecosystem

“E/E architecture is the cornerstone of the modern automotive revolution, enabling the transition from hardware-centric machines to intelligent, software-defined mobility,” said Murdoch Fitzgerald, chief growth officer of global services for Arrow’s global components business. “By combining our global engineering reach with a broad range of components and specialised software expertise, we are well positioned to help our customers navigate this complexity, reducing their time-to-market and helping ensure their platforms are built to adapt as the industry evolves.”

Arrow’s E/E architecture initiative builds on the company’s 2024 acquisitions of specialist software firms iQMine and Avelabs, leading engineering services providers for the automotive and transportation industry. These additions have bolstered Arrow’s software development centres and its Automotive Centre of Excellence.

To support engineers and procurement leaders through E/E architecture redesign, Arrow has launched a new dedicated research hub. This online resource provides comprehensive technical insights, whitepapers and design tools specifically for E/E architecture development.

The post Arrow Electronics Initiates Support for Next-Gen Vehicle E/E Architecture appeared first on ELE Times.

Software-Defined Everything: The Foundation of the AI-powered Digital Enterprise

ELE Times - Tue, 02/10/2026 - 08:08

Courtesy: Siemens

Industry today is not facing a single technological change but a structural transformation. Markets are evolving faster than production systems, product life cycles are shortening, while industrial assets are designed to last for decades. At the same time, complexity along the entire value chain is increasing – technologically, organizationally, and regulatory. In this reality, adaptability becomes the decisive capability to secure and sustainably develop industrial value creation.

Within this context, classical automation reaches its structural limits. Automation based on fixed sequences, static logics, and extensive manual engineering can no longer keep up with the pace of modern industry. Efficiency gains within this paradigm are insufficient when products, processes, and frameworks are constantly changing – and they do not provide a sustainable foundation for the widespread use of artificial intelligence.

What is needed now is the next evolutionary step: the automation of automation itself. Instead of specifying every process in detail, industrial systems must be empowered to solve tasks autonomously – based on objectives, context, and continuous learning. Software-Defined Everything (SDx) becomes the necessary organising principle: it decouples functionality from specific hardware, creates a continuous, lifecycle-spanning data foundation, and enables systems to self-configure, adapt, and optimise.

In production, this approach manifests as Software-Defined Automation (SDA). SDA is the consistent application of Software-Defined Everything to the production automation layer. Control logic, functionality, and intelligence are decoupled from physical hardware, software-defined, and continuously developed. Hardware remains the stable, high-performance foundation, while software provides flexibility, adaptability, and learning capability to production systems.

This creates the structural basis for the AI-powered Digital Enterprise: an industrial organisation in which software, digital twins, and industrial AI work in closed-loop cycles, systems learn continuously, and decisions are not only prepared but also operationally executed. From this capability, the path to the Industrial Metaverse opens up – as the next stage of development, where planning, simulation, collaboration, and operational control converge in a shared digital space, supporting real industrial value creation in real time.

Stable foundation, flexible control: Software-Defined Automation in production

For many years, industrial functionality was inseparably tied to hardware. New requirements meant new components, modifications, or downtime. This model was stable – but no longer fast enough.

Software-Defined Everything breaks this logic. Functions, intelligence, and control are decoupled from specific hardware and moved into software. In production, this takes the form of Software-Defined Automation (SDA): the automation layer itself becomes software-defined, controlled, and continuously improved, while hardware continues to serve as a stable, high-performance foundation.

This fundamentally changes industrial systems:

  • Functions can be adapted via software instead of physical modifications
  • Systems evolve continuously throughout their lifecycle
  • Adaptability becomes a structural characteristic

Industry becomes not only more digital but also definable, controllable, and optimizable through software.

Practical example: Software-Defined Automation in action

How this transformation is already becoming reality can be seen in the automotive industry. Companies, together with Siemens, are implementing Software-Defined Automation as an integral part of Software-Defined Everything. By introducing a virtual, TÜV-certified PLC, production control logic is no longer tied to physical control hardware but runs as software – centrally managed, flexibly scalable, and continuously updated.

This implements a core principle of SDA: the automation layer itself is software-defined. New functions can be rolled out via software, production systems can be quickly adapted to new vehicle variants, and updates and tests can be prepared and validated virtually. IT and OT environments converge into a unified, software-based operation.

The result is production that is not only more efficient but also learning- and AI-capable – a key prerequisite for the AI-powered Digital Enterprise.

Software-Defined as a bridge between goal and reality

The real value of Software-Defined Everything lies not in individual applications but in connecting the digital target picture with actual operations. SDx – and in production specifically SDA – enables the digital representation of target and actual states of industrial systems and products.

Real operational data from running plants is combined with target states from simulations, digital twins, and engineering models. Unlike isolated analytics or digital twin solutions, this creates a continuous, consistent data foundation across the entire lifecycle – from design through implementation to optimisation. Most importantly, it creates a bidirectional connection: digital insights directly influence operations.

Digital insights are no longer abstract. They become actionable.

Why Software-Defined Everything is the prerequisite for Industrial AI

Artificial intelligence only delivers value in industry if it can do more than analyse – it must act. On a software-defined data foundation, target and operational states can be continuously compared and contrasted. AI methods detect deviations, identify correlations across products, machines, and plants, and derive concrete optimisation recommendations.

The decisive step follows: Software-Defined Everything – and in production, Software-Defined Automation – closes the loop. AI-driven insights are directly translated into operational adjustments. Machines, processes, and products respond autonomously, without manual reconfiguration.

This creates learning systems that continuously improve – not as an exception, but as the standard.
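The compare-and-correct loop described above can be caricatured in a few lines of code. Here a simple proportional adjustment stands in for the AI step; the variable names, units, and gain are illustrative assumptions, not part of any Siemens implementation.

```python
# Toy closed-loop sketch: compare target vs. measured state, feed a
# correction back into operation. All names and values are illustrative.

def closed_loop_step(target, measured, gain=0.5):
    """One iteration: deviation -> proportional corrective adjustment."""
    deviation = target - measured
    return measured + gain * deviation

state = 80.0      # e.g. measured throughput (units/h), assumed
target = 100.0    # target state from the digital twin, assumed
for _ in range(5):
    state = closed_loop_step(target, state)
print(round(state, 2))  # state converges toward the target
```

The point is structural rather than algorithmic: insights about the deviation are translated directly into an operational adjustment, without a manual reconfiguration step in between.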

The AI-powered Digital Enterprise: Learning as an operating system

When Software-Defined Everything, Software-Defined Automation, digital twins, and industrial AI interact, a new form of industrial organisation emerges. Products become platforms, production systems dynamically adapt to new variants and requirements, and knowledge is generated in ongoing operations and systematically made usable.

The AI-powered Digital Enterprise is therefore not a static target but a continuous learning process embedded within the systems themselves.

Industrial Metaverse: The consequence of a Software-Defined reality

From this development, the Industrial Metaverse becomes tangible – not as a visualisation, but as a new operational and management layer. When digital twins accurately reflect the real state, when AI prepares or autonomously makes decisions, and when software directly translates these decisions into real-world actions, the virtual space becomes the central environment for planning, collaboration, and optimisation.

Software-Defined Everything as a structural capability

Software-Defined Everything – with Software-Defined Automation as the core for production – is not a short-term trend or an isolated technology choice. It is the structural prerequisite to make industrial systems learning-capable, adaptable, and future-proof, and to unlock the full potential of AI for the industry of the future.

The post Software-Defined Everything: The Foundation of the AI-powered Digital Enterprise appeared first on ELE Times.

3 semicon-enabled innovations impacting our experience of the world

ELE Times - Tue, 02/10/2026 - 07:13

Courtesy: Texas Instruments

The chips that power today’s smartphones contain over 15 billion transistors; the semiconductors powering data centres can have hundreds of billions of transistors. Semiconductors drive and power breakthroughs across hundreds of critical and emerging industries, such as robotics, personal electronics and artificial intelligence. As semiconductors continue to enable the world to function and make life more convenient and safer, their role will only increase.

The importance of chips – and the electronics they’re enabling – has been made possible by years of semiconductor progress. Let’s review how semiconductor technologies are enabling three innovations in electronics that impact how we experience the world.

Innovation No. 1: Systems that operate safely around humans

“You might think humanoids are 3 to 5 years away. But really, humanoids are the present,” said Giovanni Campanella, general manager of factory automation, motor drives and robotics at TI, at a Computex speech.

Humanoids’ emergence is anything but simple. Robots that perform chores in homes, complete tasks in a factory, or even clean dishes in a restaurant kitchen must adapt in dynamic environments, where things change every second.

In order to build adaptable robots that can operate around humans in diverse settings, such as domestic or business environments, design engineers must leverage semiconductor technologies. Each of these technologies must work together to perform the actions of one safe and functional humanoid. Actuators in robots enable their movements. With sensing, the robot can perceive its surrounding environment, and a central computer acts as its brain, analysing and making decisions from that sensing data. Communication with the compute units and actuators happens in real time, so the humanoid can complete a task, such as handing an object to someone.
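The sense-compute-actuate cycle described above can be sketched in miniature. The function below plays the role of the central "brain" for a single sensor and a single actuator: it reads an obstacle distance and commands a speed. The distances, speeds, and safety margin are hypothetical; a real humanoid runs many such loops over many sensors and actuators in real time.

```python
# Simplified sense -> compute -> actuate step. All values are illustrative.

def control_step(obstacle_distance_m, cruise_speed=1.0, safe_distance_m=0.5):
    """Command a speed that shrinks as an obstacle (or person) nears."""
    if obstacle_distance_m <= safe_distance_m:
        return 0.0  # stop: something is inside the safety margin
    # Scale speed with available clearance, capped at cruise speed
    return min(cruise_speed,
               cruise_speed * (obstacle_distance_m - safe_distance_m))

print(control_step(3.0))  # clear path: full cruise speed
print(control_step(0.3))  # person nearby: commanded stop
```

The safety property lives in the structure of the loop: the actuator command is recomputed from fresh sensing data every cycle, so the robot's behavior adapts as the environment changes around it.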

Innovation No. 2: Smaller, more affordable, smarter devices

Smartphones and laptops keep getting thinner and lighter. Medical patches provide continuous monitoring without external equipment. Devices are on a trajectory to fit into an individual’s life, increasing convenience and accessibility.

How are designers able to continually progress toward the trend of “smaller” and more convenient when last year’s newest smartphone was already the smallest ever?

Significant advances in component design are enabling this progress. An example of this was our launch of the world’s smallest MCU, reflecting breakthroughs in packaging, integration and power efficiency that allow more functionality to fit into dramatically smaller spaces.

“With the addition of the world’s smallest MCU, our MSPM0 MCU portfolio provides unlimited possibilities to enable smarter, more connected experiences in our day-to-day lives,” said Vinay Agarwal, vice president and general manager of MSP Microcontrollers at TI.

Due to semiconductors, headphones that were once clunky can now fit into a pocket and provide a premium audio experience. Smart rings instantly track health metrics like activity and heart rate without interrupting everyday activities. With devices like the world’s smallest MCU, the prevalence of smaller, more affordable electronics that seamlessly blend into an already-existing routine is expanding.

Innovation No. 3: AI everywhere

By 2033, the global AI market is expected to account for $4.8 trillion – 25 times higher than the $189 billion valuation in 2023. AI is already enabling smartphones to process images in real time, cars to monitor drivers and their surroundings, and medical devices to deliver precise insights, and with its projected growth, the possibilities of where else AI can appear seem endless.

But with the influx of power needed to process the massive amounts of data that AI requires – and the inevitable demand to process even more data – there must be supporting infrastructure.

This is why moving energy from the grid to the gate is crucial –  by optimising every stage of the power chain, from the electrical grid to the logic gates inside computer processors, TI helps support widespread AI adoption while improving efficiency, reliability, and sustainability.
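A quick back-of-envelope calculation shows why every stage "from grid to gate" matters: overall efficiency is the product of the stage efficiencies, so small per-stage losses compound across the chain. The stage names and efficiency figures below are illustrative assumptions, not TI data.

```python
# Cascaded power-chain efficiency: overall = product of stage efficiencies.
# Stage values are illustrative assumptions.
from functools import reduce

stages = {
    "grid rectifier": 0.97,
    "intermediate DC-DC": 0.98,
    "board-level converter": 0.96,
    "point-of-load regulator": 0.93,
}

overall = reduce(lambda acc, eta: acc * eta, stages.values(), 1.0)
print(f"overall efficiency: {overall:.3f}")  # ~0.849 for these stages
print(f"power lost per 1 kW drawn from the grid: {(1 - overall) * 1000:.0f} W")
```

Even with every individual stage above 90% efficient, roughly 15% of the input power is lost end to end in this example, which is why improving any single stage of the chain pays off across the whole system.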

At the same time, the need for more power to process the computations that AI requires has reshaped system designs. Software-defined architectures have enabled products to adapt and deploy new AI capabilities without new hardware. Software is increasingly becoming an important driver of flexibility, differentiation, and energy efficiency in applications such as vehicles, robotic systems and appliances.

Even at the edge, we’re working with designers now to implement AI onto devices such as solar panels to detect potentially dangerous arc faults. But that’s only one way we’re supporting the increase of AI.

“We’ll continue developing those use cases that make sense,” said Henrik Mannesson, general manager of energy infrastructure at TI. “But we also recognise the need to build universal tools that enable customers to further innovate with edge AI.”

Conclusion

From robots that can safely work alongside humans to ultra-compact devices that seamlessly integrate into daily life, and AI systems that scale responsibly from the edge to the cloud, semiconductor innovation is redefining how technology touches the world around us. These advances are not happening in isolation; they are the result of sustained progress in sensing, computing, power management, and software-driven design working in unison. As demand grows for smarter, safer, and more energy-efficient systems, semiconductors will remain the invisible backbone enabling engineers to turn ambitious ideas into practical, real-world solutions. In shaping what’s next, the smallest components will continue to have the biggest impact.

The post 3 semicon-enabled innovations impacting our experience of the world appeared first on ELE Times.

TP-Link’s Kasa HS103: A smart plug with solid network connectivity

EDN Network - Mon, 02/09/2026 - 23:07

With Amazon’s smart plug teardown “in the books”, our engineer turns his attention to some TP-Link counterparts, this first one the best behaved of the bunch per hands-on testing results.

Two months back, I introduced you to several members of TP-Link’s Kasa and Tapo smart home product lines as successors to Belkin’s then-soon and now (at least as you read these words, a few weeks after I wrote them) defunct Wemo smart plug devices. I mentioned at the time that I’d had particularly good luck, from both initial setup and ongoing connectivity standpoints, with the Kasa HS103:

An example of which, I mentioned at the time, I’d shortly be tearing down both for standalone inspection purposes and subsequent comparison to the smaller but seemingly also functionally flakier Tapo EP10:

Today, I’ll be actualizing my HS103 teardown aspiration, with the EP10 analysis to follow in short order, hopefully sometime next month. What’s inside this inexpensive device, and is it any easier to disassemble than was Amazon’s Smart Plug, which I dissected last month?

Plain is appealing

Let’s find out. As usual, I’ll begin with some outer box shots of the four-pack containing today’s patient. You may call the packaging “boring”. I call it refreshingly simple. As well as recyclable.

Sorry, I couldn’t resist including that last one 😀.

Now for the device inside the box, beginning with a conceptual block diagram. Interestingly, although I’d mentioned back in December that TP-Link now specs the HS103 to handle a current draw of up to 15A, the four-pack (HS103P4) graphic on Amazon’s website still lists 12A max:

Its three-pack (HS103P3) graphic counterpart eliminates the current spec entirely, replacing it with the shadowy outline of an AC outlet set, which I suppose is one way to fix the issue!

And now for some real-life shots, as usual (and as with subsequent images) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

See that seam? I ‘spect that’ll be a key piece for solving the puzzle of the pathway to the insides:

And, last but not least, all those specs that the engineers out there reading this know and love, including the FCC certification ID (2AXJ4KP115):

Cracking (open) the case

Now to get inside. Although I earlier highlighted the topside seam, I decided to focus my spudger attention on the right side to start, specifically the already visible gap between the main chassis and the rubberized gasket ring:

Quickly results-realizing that I was indirectly just pushing the side plate (containing the multi-function switch) out of its normal place, I redirected my attention to it more directly:

Success, at least as a first step!

Now for that gasket…

At this point, however, we only have a visual tease at the insides:

Time for another Amazon-supplied conceptual diagram:

And now for the real thing. This junction overlap gave me a clue of how to start:

It wouldn’t be a proper teardown without at least a bit of collateral damage, yes?

Onward, I endure it all for you, dear readers:

Voilà:

Boring half first:

PCB constituent pieces

Now for the half we all really care about:

As with its Amazon smart plug predecessor, the analog and power portions are “vanilla”:

The off-white relay at far right on the main PCB, for example, is the HF32FV-16 from Hongfa. Perhaps the most interesting aspect of the analog-and-power subsystem, at least to me, is the sizeable fuse below the pass-through ground connection, which I hadn’t noticed in the Amazon-equivalent design (although perhaps I just overlooked it?). The digital mini-PCB abutting the relay, on the other hand, is where all the connectivity and control magic take place…

In the upper left corner is the multicolor LED whose glow (amber or/or blue, and either steady or blinking, depending on the operating mode of the moment) shines through the aforementioned translucent gasket when the switch is powered up (and not switched off):

Those two unpopulated eight-lead IC sites below it are…a titillating tease of what might be in a more advanced product variant? In the bottom left corner is the embedded 2.4 GHz Wi-Fi 1T1R antenna. And to its right is the “brains” of the operation at the other end of the antenna connection, Realtek’s RTL8710, which supports a complete TCP/IP stack and integrates a 166 MHz Arm Cortex M3 processor core, 512 Kbytes of RAM and 1 Mbyte of flash memory.

Stubborn solder

Speaking of power pass-throughs…what about the other side of the main PCB? The obvious first step is to remove the screw whose head you might have already noticed in the earlier shot:

But that wasn’t enough to get the PCB to budge out of the chassis, at least meaningfully:

Recall that in the Amazon smart plug design, not only the back panel’s ground pin but also its neutral blade pass through intact to the front panel slots, albeit with the latter also split off at the source to power the PCB via a separate wire. The line blade is the only one that only goes directly to the PCB, where it’s presumably switched prior to routing to the front panel load slot.

In this design, that same switching scheme may very well be the case. But this time the back panel neutral connection also routes solely to the PCB. Note the two beefy solder points on the main PCB, one directly above the screw location and the other to the right of its solder sibling. I was unable to get either of them (much less both) successfully unsoldered from above or snipped from below. And all I could discern on the underside of the PCB from peering through the gap were a few scattered additional passive components, anyway.

So, sorry, folks, I threw in the towel and gave up. I’m assuming that those two particular solder points, which must be robust both electrically and mechanically, leveraged a higher-temperature solid or silver solder that my iron just wasn’t up for. Or maybe I just wasn’t sufficiently patient to wait long enough for the solder to melt (hey, it’s happened before). Regardless, and as usual, I welcome your thoughts on what I was able to show you, or anything else related to this product and my teardown of it, for that matter, in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post TP-Link’s Kasa HS103: A smart plug with solid network connectivity appeared first on EDN.

Lumentum’s quarterly revenue grows 65% year-on-year to $665.5m

Semiconductor today - Mon, 02/09/2026 - 21:13
For its fiscal second-quarter 2026 (ended 27 December 2025), Lumentum Holdings Inc of San Jose, CA, USA (which designs and makes photonics products for optical networks and lasers for industrial and consumer markets) has reported record revenue (for the second consecutive quarter) of $665.5m (towards the top of the $630–670m guidance range). This is up 24.7% from $533.8m last quarter and up 65.5% on $402.2m a year ago, driven by cloud and AI business, yielding high double-digit gains in both core and new product lines...
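The quoted growth percentages check out against the raw revenue figures; a quick sanity-check sketch (values taken from the summary above):

```python
# Verify Lumentum's quoted sequential and year-on-year revenue growth.
q2_fy26 = 665.5   # $m, quarter ended 27 Dec 2025
q1_fy26 = 533.8   # $m, prior quarter
q2_fy25 = 402.2   # $m, year-ago quarter

qoq = (q2_fy26 / q1_fy26 - 1) * 100   # quarter-on-quarter growth, %
yoy = (q2_fy26 / q2_fy25 - 1) * 100   # year-on-year growth, %

print(f"{qoq:.1f}% q/q, {yoy:.1f}% y/y")  # → 24.7% q/q, 65.5% y/y
```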

✨ Students and faculty are invited to take part in the HeatTech Hackathon

News - Mon, 02/09/2026 - 20:40

The Heat Power Engineering Cluster of Ukraine, together with the Igor Sikorsky Kyiv Polytechnic Institute, invites students and faculty to the HeatTech Hackathon.

Infineon’s silicon carbide power MOSFETs selected for Toyota’s new bZ4X model

Semiconductor today - Mon, 02/09/2026 - 18:10
Infineon Technologies AG of Munich, Germany says that CoolSiC silicon carbide power MOSFETs have been adopted in the new bZ4X model of the world’s largest automaker, Toyota of Tokyo, Japan. Integrated into the on-board charger (OBC) and DC/DC converter, the SiC MOSFETs leverage the material’s advantages of low losses, high thermal conductivity and high-voltage capability to help extend driving range and reduce charging time...

Look at these monsters! 29,000 microfarad

Reddit:Electronics - Mon, 02/09/2026 - 18:01

Came across this capacitor bank inside a giant battery charger and figured I'd share, LOL. It has three 29,000 µF, 200 V DC capacitors and one 13,000 µF, 200 V DC capacitor. Gives me the heebie-jeebies just looking at it... It has a built-in capacitor discharge button, but still...

submitted by /u/Due-Fan-2536
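The heebie-jeebies are well earned: the stored energy follows from E = ½·C·V². A minimal sketch, assuming the four capacitors are wired as a single parallel bank (the post doesn't say):

```python
# Stored energy in the capacitor bank described above, assuming a
# parallel connection (capacitances add) charged to the full 200 V rating.
caps_uF = [29_000, 29_000, 29_000, 13_000]  # three 29,000 µF + one 13,000 µF
V = 200.0                                   # volts

C_total = sum(c * 1e-6 for c in caps_uF)    # total capacitance in farads: 0.1 F
energy_J = 0.5 * C_total * V**2             # E = ½·C·V²

print(f"{C_total:.3f} F, {energy_J:.0f} J")  # → 0.100 F, 2000 J
```

Roughly 2 kJ at full charge: the same order of energy as a small firecracker, and more than enough to vaporize a dropped screwdriver tip, which is exactly why that built-in discharge button exists.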
