EDN Network
Inductive sensors broaden motion-control options

Three magnet-free inductive position sensors from Renesas provide a cost-effective alternative to magnetic and optical encoders. With different coil architectures, the ICs address a wide range of applications in robotics, medical devices, smart buildings, home appliances, and motor control.

The dual-coil RAA2P3226 uses a Vernier architecture to deliver up to 19-bit resolution and 0.01° absolute accuracy, providing true power-on position feedback for precision robotic joints. The single-coil RAA2P3200 prioritizes high-speed, low-latency operation for motor commutation in e-bikes and cobots, with built-in protection for robust industrial use. Also using single-coil sensing, the RAA2P4200 offers a compact, cost-efficient option for low-speed applications such as service robots, power tools, and medical devices.
All three sensors share a common inductive sensing core that enables accurate, contactless position measurement in harsh industrial environments. Each device supports rotary on-axis, off-axis, arc, and linear configurations, and includes automatic gain control to compensate for air-gap variations. A 16-point linearization feature enhances accuracy.
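As a rough illustration of what a multi-point linearization step does (a generic sketch only; the breakpoint values below are hypothetical, not Renesas calibration data), a 16-entry correction table can be applied to a raw angle reading like this:

```python
# Generic sketch of 16-point linearization: correct a raw angle reading against a
# table of calibration breakpoints. Breakpoint values here are hypothetical.
import numpy as np

raw_points = np.linspace(0.0, 360.0, 16)                        # raw angles reported by the sensor
ref_points = raw_points + 0.5 * np.sin(np.radians(raw_points))  # reference angles measured during calibration

def linearize(raw_angle_deg: float) -> float:
    """Interpolate linearly between the nearest calibration points."""
    return float(np.interp(raw_angle_deg, raw_points, ref_points))

print(linearize(45.0))  # corrected estimate for a 45-degree raw reading
```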
The sensors are now in volume production, supported by a web-based design tool that automates coil layout, simulation, and tuning.
The post Inductive sensors broaden motion-control options appeared first on EDN.
AOS devices power 800-VDC AI racks

GaN and SiC power semiconductors from AOS support NVIDIA’s 800-VDC power architecture for next-gen AI infrastructure, enabling data centers to deploy megawatt-scale racks for rapidly growing workloads. Moving from conventional 54-V distribution to 800 VDC reduces conversion steps, boosting efficiency, cutting copper use, and improving reliability.
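To put a rough number on the copper argument, here is a back-of-the-envelope comparison of bus current at the two distribution voltages; the 1-MW rack power is an illustrative round number based on the megawatt-scale racks mentioned above, not an NVIDIA or AOS specification.

```python
# Bus current for the same rack power at 54 V vs. 800 VDC. The 1-MW figure is an
# illustrative assumption; conduction loss scales with I^2*R, so lower current means
# a given loss budget can be met with far less copper.
rack_power_w = 1_000_000

currents = {v: rack_power_w / v for v in (54, 800)}
for bus_v, i_a in currents.items():
    print(f"{bus_v:>3} V bus -> {i_a:,.0f} A")

print(f"current ratio: {currents[54] / currents[800]:.1f}x")  # ~14.8x less current at 800 V
```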

The company’s wide-bandgap semiconductors are well-suited for the power conversion stages in AI factory 800‑VDC architectures. Key device roles include:
- High-Voltage Conversion: SiC devices (Gen3 AOM020V120X3, topside-cooled AOGT020V120X2Q) handle high voltages with low losses, supporting power sidecars or single-step conversion from 13.8 kV AC to 800 VDC. This simplifies the power chain and improves efficiency.
- High-Density DC/DC Conversion: 650-V GaN FETs (AOGT035V65GA1) and 100-V GaN FETs (AOFG018V10GA1) convert 800 VDC to GPU voltages at high frequency. Smaller, lighter converters free rack space for compute resources and enhance cooling.
- Packaging Flexibility: 80-V and 100-V stacked-die MOSFETs (AOPL68801) and 100-V GaN FETs share a common footprint, letting designers balance cost and efficiency in secondary LLC stages and 54-V to 12-V bus converters. Stacked-die packages boost secondary-side power density.
AOS power technologies help realize the advantages of 800‑VDC architectures, with up to 5% higher efficiency and 45% less copper. They also reduce maintenance and cooling costs.
The post AOS devices power 800-VDC AI racks appeared first on EDN.
Optical Tx tests ensure robust in-vehicle networks

Keysight’s AE6980T Optical Automotive Ethernet Transmitter Test Software qualifies optical transmitters in next-gen nGBASE-AU PHYs for IEEE 802.3cz compliance. The standard defines optical automotive Ethernet (2.5–50 Gbps) over multimode fiber, providing low-latency, EMI-resistant links with high bandwidth and lighter cabling. Keysight’s platform helps enable faster, more reliable in-vehicle networks for software-defined and autonomous vehicles.

Paired with Keysight’s DCA-M sampling oscilloscope and FlexDCA software, the AE6980T offers Transmitter Distortion Figure of Merit (TDFOM) and TDFOM-assisted measurements, essential for evaluating optical signal quality. Device debugging is simplified through detailed margin and eye-quality evaluations. The compliance application also automates complex test setups and generates HTML reports showing how devices pass or fail against defined limits.
AE6980T software provides full compliance with IEEE 802.3cz-2023 (Amendment 7) and Open Alliance TC7 test house specifications. It currently supports 10-Gbps data rates, with 25 Gbps planned for the future.
For more information about Keysight in-vehicle network test solutions and their automotive use cases, visit Streamline In-Vehicle Networking.
The post Optical Tx tests ensure robust in-vehicle networks appeared first on EDN.
Gate drivers tackle 220-V GaN designs

Two half-bridge GaN gate drivers from ST integrate a bootstrap diode and linear regulators to generate high- and low-side 6-V gate signals. The STDRIVEG210 and STDRIVEG211 target systems powered from industrial or telecom bus voltages, 72-V battery systems, and 110-V AC line-powered equipment.

The high-side driver of each device withstands rail voltages up to 220 V and is easily supplied through the embedded bootstrap diode. Separate gate-drive paths can sink 2.4 A and source 1.0 A, ensuring fast switching transitions and straightforward dV/dt tuning. Both devices provide a short propagation delay with 10-ns matching for low dead-time operation.
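For a sense of what those drive currents mean in practice, here is a rough gate-charge timing estimate; the 6-nC total gate charge is a hypothetical figure for a mid-sized 650-V GaN FET, not a value from an ST datasheet.

```python
# Rough switching-edge estimate from gate charge and the quoted drive currents.
# The 6-nC gate charge is a hypothetical assumption; real values come from the FET datasheet.
q_gate_nc = 6.0
i_source_a = 1.0   # turn-on (source) current from the article
i_sink_a = 2.4     # turn-off (sink) current from the article

print(f"turn-on  ~{q_gate_nc / i_source_a:.1f} ns")  # nC / A = ns
print(f"turn-off ~{q_gate_nc / i_sink_a:.1f} ns")
```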
ST’s gate drivers support a broad range of power-conversion applications, including power supplies, chargers, solar systems, lighting, and USB-C sources. The STDRIVEG210 works with both resonant and hard-switching topologies, offering a 300-ns startup time that minimizes wake-up delays in burst-mode operation. The STDRIVEG211 adds overcurrent detection and smart shutdown functions for motor drives in tools, e-bikes, pumps, servos, and class-D audio systems.
Now in production, the STDRIVEG210 and STDRIVEG211 come in 5×4-mm, 18-pin QFN packages. Prices start at $1.22 each in quantities of 1000 units. Evaluation boards are also available.
The post Gate drivers tackle 220-V GaN designs appeared first on EDN.
Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive

A month and a few days ago, Apple dedicated an in-person event (albeit with the usual pre-recorded presentations) to launching its latest mainstream and Pro A19 SoCs and the various iPhone 17s containing them, along with associated smart watch and earbuds upgrades. And at the end of my subsequent coverage of Amazon and Google’s in-person events, I alluded to additional Apple announcements that, judging from both leaks (some even straight from the FCC) and historical precedents, might still be on the way.
Well, earlier today (as I write these words on October 15), at least some of those additional announcements just arrived, in the form of the new baseline M5 SoC and the various upgraded systems containing it. But this time, again following historical precedent, they were delivered only in press release form. Any conclusions you might draw as to the relative importance within Apple of smartphones versus other aspects of the overall product line are…well…

Looking at the historical trends of M-series SoC announcements, you’ll see that the initial >1.5-year latency between the baseline M1 (November 2020) and M2 (June 2022) chips subsequently shrank to a yearly (plus or minus a few months) cadence. To wit, since the M4 came out last May but the M5 hadn’t yet arrived this year, I was assuming we’d see it soon. Otherwise, its lingering absence would likely be reflective of troubles within Apple’s chip design team and/or longstanding foundry partner TSMC. And indeed, the M5 has finally shown up. But my concerns about development and/or production troubles still aren’t completely alleviated.

Let’s parse through the press release.
Built using third-generation 3-nanometer technology…
This marks the third consecutive generation of M-series CPUs manufactured on a 3-nm litho process (at least for the baseline M5…I’ll delve into higher-end variants next). Consider this in light of Wikipedia’s note that TSMC began risk production on its first 2 nm process mid-last year and was originally scheduled to be in mass production on 2 nm in “2H 2025”. Admittedly, there are 2.5 more months to go until 2025 is over, but Apple would have had to make its process-choice decision for the M5 many months (if not several years) in the past.
Consider, too, that the larger die size Pro and Max (and potentially also Ultra) variants of the M5 haven’t yet arrived. This delay isn’t without precedent; there was a nearly six-month latency between the baseline M4 and its Pro and Max variants, for example. That said, the M4 had shown up in early May, with the Pro and Max following in late October, so they all still arrived in 2024. And here’s an even more notable contrast: all three variants of the M3 were launched concurrently in late October 2023. Consider all of this in the light of persistent rumors that M5 Pro- and Max-based systems may not show up until spring-or-later 2026.
M5 introduces a next-generation 10-core GPU architecture with a Neural Accelerator in each core, enabling GPU-based AI workloads to run dramatically faster, with over 4x the peak GPU compute performance compared to M4. The GPU also offers enhanced graphics capabilities and third-generation ray tracing that combined deliver a graphics performance that is up to 45 percent higher than M4.
Note that these Neural Accelerators are presumably different than those in the dedicated 16-core Neural Engine. The latter historically garnered the bulk of the AI-related press release “ink”, but this time it’s limited to terse “improved” and “faster” descriptions. What does this tell me?
- “Neural Accelerator” is likely a generic term reflective of AI-tailored shader and other functional block enhancements, analogous to the increasingly AI-optimized capabilities of NVIDIA’s various GPU generations.
- The Neural Engine, conversely, is (again, I’m guessing) largely unchanged here from the one in the M4 series, instead indirectly benefiting from a performance standpoint due to the boosted overall SoC-to-external memory bandwidth.
M5 features the world’s fastest performance core, with up to a 10-core CPU made up of six efficiency cores and up to four performance cores. Together, they deliver up to 15 percent faster multithreaded performance over M4.
Core count proportions and totals both match those of the M4. Aside from potential “Neural Accelerator” tweaks such as hardware-accelerated instruction set additions (à la Intel’s MMX and SSE), I suspect they’re largely the same as the prior generation, with any performance uplift resulting from overall external memory bandwidth improvements. Speaking of which…
M5 also features…a nearly 30 percent increase in unified memory bandwidth to 153GB/s.
And later…
M5 offers unified memory bandwidth of 153GB/s, providing a nearly 30 percent increase over M4 and more than 2x over M1. The unified memory architecture enables the entire chip to access a large single pool of memory, which allows MacBook Pro, iPad Pro, and Apple Vision Pro to run larger AI models completely on device. It fuels the faster CPU, GPU, and Neural Engine as well, offering higher multithreaded performance in apps, faster graphics performance in creative apps and games, and faster AI performance running models on the Neural Accelerators in the GPU or the Neural Engine.
The enhanced memory controller is, I suspect, the nexus of overall M4-to-M5 advancements, as well as explaining why Apple’s still able to cost-effectively (i.e., without exploding the total transistor count budget) fabricate the new chip on a legacy 3-nm lithography. How did the company achieve this bandwidth boost? While an even wider bus width than that used with the M4 might conceptually provide at least part of the answer, it’d also both balloon the required SoC pin count and complicate the possible total memory capacity increments. I therefore suspect a simpler approach is at play. The M4 used 7500 Mbps LPDDR5X SDRAM, while the M4 Pro and Max leveraged the faster 8533 Mbps LPDDR5X speed bin. But if you look at Samsung’s website (for example), you’ll see an even faster 9600 Mbps speed bin listed. 9600 Mbps is 28% more than 7500 Mbps…voila, there’s your “nearly 30 percent increase”.
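As a quick sanity check on that arithmetic, the quoted 153 GB/s is consistent with a 128-bit memory interface at the 9600 Mbps speed bin; the 128-bit bus width is my assumption, carried over from prior baseline M-series chips, since Apple doesn’t publish it.

```python
# Memory-bandwidth sanity check. The 128-bit bus width is an assumption based on
# earlier baseline M-series parts; Apple's press release states only the GB/s figure.
bus_width_bits = 128
for rate_mtps in (7500, 9600):
    gb_per_s = rate_mtps * 1e6 * bus_width_bits / 8 / 1e9
    print(f"{rate_mtps} MT/s x {bus_width_bits} bits -> {gb_per_s:.1f} GB/s")

print(f"speed-bin uplift: {9600 / 7500 - 1:.0%}")  # ~28%, the 'nearly 30 percent'
```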
There’s one other specification, this time not found in the SoC press release but instead in the announcement for one of the M5-based systems, that I’d like to highlight:
…up to 2x faster storage read and write speeds…
My guess here is that Apple has done a proprietary (or not)-interface equivalent to the industry-standard PCI Express 4.x-to-5.x and UFS 4.x-to-5.x evolutions, which also tout doubled peak transfer rate speeds.
Speaking of speeds…keep in mind when reading about SoC performance claims that they’re based on the chip running at its peak possible clock cadence, not to mention when outfitted with maximum available core counts. An especially power consumption-sensitive tablet computer, for example, might clock-throttle the processor compared to the SoC equivalent in a mobile or (especially) desktop computer. Yield-maximization (translating into cost-minimization) “binning” aspirations are another reason why the SoC in a particular system configuration may not perform to the same level as a processor-focused press release might otherwise suggest. Such schemes are particularly easy for someone like Apple—who doesn’t publish clock speeds anyway—to accomplish.
And speaking of cost minimization, reducing the guaranteed-functional core counts on a chip can significantly boost usable silicon yield, too. To wit, about those M5-based systems…
11” and 13” iPad Pros
Last May’s M4 unveil marked the first time that an iPad, versus a computer, was the initial system to receive a new M-series processor generation. More generally, the fifth-gen iPad Pro introduced in April 2021 was the first iPad to transition from Apple’s A-series SoCs to the M-series (the M1, to be precise). This was significant because, up to that point, M-series chips had been exclusively positioned as for computers, with A-series processors for iPhones and iPads.
This time, both the 11” and 13” iPad Pro get the M5, albeit with inconsistent core counts (and RAM allocations, for that matter) depending on the flash memory storage capacity and resultant price tag. From 9to5Mac’s coverage:
- 256GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 512GB storage: 12GB memory, M5 with 9-core CPU, 10-core GPU
- 1TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
- 2TB storage: 16GB memory, M5 with 10-core CPU, 10-core GPU
It bears noting that the 12 GByte baseline capacity is 4 GBytes above what baseline M4 iPad Pros came with a year-plus ago. Also, the deprecated CPU core in the lower-end variants is one of the four performance cores; CPU efficiency core counts are the same across all models, as are—a pleasant surprise given historical precedents and a likely reflection of TSMC’s process maturity—the graphics core counts. And for the first time, a cellular-equipped iPad has switched from a Qualcomm modem to Apple’s own: the newest C1X, to be precise, along with the N1 for wireless communications, both of which we heard about for the first time just a month ago.
A brief aside: speaking of A-series to M-series iPad Pro transitions, mine is a second-generation 11” model (one of the fourth-generation iPad Pros) dating from March 2020 and based on the A12Z Bionic processor. It’s still running great, but I’ll bet Apple will drop software support for it soon (I’m frankly surprised that it survived this year’s iPadOS 26 cut, to be honest). My wife-and-I have a wedding anniversary next month. Then there’s Christmas. And my 60th birthday next May. So, if you’re reading this, honey…

The 14” MacBook Pro
This one was not-so-subtly foreshadowed by Apple’s marketing VP just yesterday. The big claim here, aside from the inevitable memory bandwidth-induced performance-boost predictions, is “phenomenal battery life of up to 24 hours” (your mileage may vary, of course). And it bears noting that, in today’s tariff-rife era, the $1599 entry-level pricing is unchanged from last year.
The Vision Pro
The underlying rationale for the performance boost is more obvious here; the first-generation model teased in June 2023 with sales commencing the following February was based on the three-generations-older M2 SoC. That said, given the rampant rumors that Apple has redirected its ongoing development efforts to smart glasses, I wonder how long we’ll be stuck with this second-generation evolutionary tweak of the VR platform. A redesigned headband promises a more comfortable wearing experience. Apple will also start selling accessories from Logitech (the Muse pencil, available now) and Sony (the PlayStation VR2 Sense controller, next month).
Anything else?
I should note, by the way, that the Beats Powerbeats Fit earbuds that I mentioned a month back, which had been teased over YouTube and elsewhere but were MIA at Apple’s event, were finally released at the end of September. And on that note, other products (some currently with evaporating inventories at retail, another common tipoff that a next-generation device is en route) are rumored candidates for near-future launch:
- Next-gen Apple TV 4K
- HomePod mini 2
- AirTag 2
- One (or multiple) new Apple Studio Display(s)
- (???)
We shall see. Until next time, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Amazon and Google: Can you AI-upgrade the smart home while being frugal?
- The transition to Apple silicon Arm-based computers
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
- Apple’s Spring 2024: In-person announcements no more?
- Apple’s “October Surprise”: the M3 SoC Family and the A17 Bionic Reprise
- Apple’s 2H 2025 announcements: Tariff-touched but not bound, at least for this round
The post Apple’s M5: The SoC-and-systems cadence (sorta) continues to thrive appeared first on EDN.
TI launches power management devices for AI computing

Texas Instruments Inc. (TI) announced several power management devices and a reference design to help companies meet AI computing demands and scale power management architectures from 12 V to 48 V to 800 VDC. These products include a dual-phase smart power stage, a dual-phase smart power module for lateral power delivery, a gallium nitride (GaN) intermediate bus converter (IBC), and a 30-kW AI server power supply unit reference design.
“Data centers are very complex systems and they’re running very power-intensive workloads that demand a perfect balance of multiple critical factors,” said Chris Suchoski, general manager of TI’s data center systems engineering and marketing team. “Most important are power density, performance, safety, grid-to-gate efficiency, reliability, and robustness. These factors are particularly essential in developing next-generation, AI purpose-driven data centers, which are more power-hungry and critical today than ever before.”
(Source: Texas Instruments Inc.)
Suchoski describes grid-to-gate as the complete power path from the AC utility grid to the processor gates in the AI compute servers. “Throughout this path, it’s critical to maximize your efficiency and power density. We can help improve overall energy efficiency from the original power source to the computational workload,” he said.
TI is focused on helping customers improve efficiency, density, and security at every stage in the power data center by combining semiconductor innovation with system-level power infrastructure, allowing them to achieve high efficiency and high density, Suchoski said.
Power density and efficiency improvements
TI’s power conversion products for data centers address the need for increased power density and efficiency across the full 48-V power architecture for AI data centers. These include input power protection, 48-V DC/DC conversion, and high-current DC/DC conversion for the AI processor core and side rails. TI’s newest power management devices target these next-generation AI infrastructures.
One of the trends in the market is a move from single-phase to dual-phase power stages that enable higher current density for the multi-phase buck voltage regulators that power these AI processors, said Pradeep Shenoy, technologist for TI’s data center systems engineering and marketing team.
The dual-phase power stage has very high current capability, 200-A peak, Shenoy said, and it comes in a very small, 5 × 5-mm thermally enhanced package with top-side cooling, enabling a very efficient and reliable supply in a small area.
The CSD965203B dual-phase power stage is claimed to offer the highest peak power density of any power stage on the market, with 100 A of peak current per phase, combining two power phases in a 5 × 5-mm quad-flat no-lead package. With this device, designers can increase phase count and power delivery across a small printed-circuit-board area, improving efficiency and performance.
Another related trend is the move to dual-phase power modules, Shenoy said. “These power modules combine the power stages with the inductors, all in a compact form factor.”
The dual-phase power module co-packages the power stages with other components on the bottom and the inductor on the top, and it offers both trans-inductor voltage regulator (TLVR) and non-TLVR options, he added. “They help improve the overall power density and current density of the solution with over a 2× reduction in size compared with discrete solutions.”
The CSDM65295 dual-phase power module delivers up to 180 A of peak output current in a 9 × 10 × 5-mm package. The module integrates two power stages and two inductors with TLVR options while maintaining high efficiency and reliable operation.
The GaN-based IBC achieves over 1.5 kW of output power with over 97.5% peak efficiency, and it also enables regulated output and active current sharing, Shenoy said. “This is important because as we see the power consumption and power loads are increasing in these data centers, we need to be able to parallel more of these IBCs, and so the current sharing helps make that very scalable and easy to use.”
The LMM104RM0 GaN converter module offers over 97.5% input-to-output power conversion efficiency and high light-load efficiency to enable active current sharing between multiple modules. It can deliver up to 1.6 kW of output power in a quarter-brick (58.4 × 36.8-mm) form factor.
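For perspective, a short calculation of what 97.5% efficiency implies in dissipated heat at the module’s rated output (a simple estimate from the quoted figures, ignoring light-load behavior):

```python
# Heat dissipated in the quarter-brick at full rated output, from the quoted figures.
p_out_w = 1600.0      # rated output power
efficiency = 0.975    # quoted peak efficiency

p_in_w = p_out_w / efficiency
print(f"input ~{p_in_w:.0f} W, dissipated ~{p_in_w - p_out_w:.0f} W at full load")
```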
TI also introduced a 30-kW dual-stage power supply reference design for AI servers that features a three-phase, three-level flying capacitor power-factor-correction converter paired with dual delta-delta three-phase inductor-inductor-capacitor converters. The power supply is configurable as a single 800-V output or separate output supplies.
30-kW HVDC AI data center reference design (Source: Texas Instruments Inc.)
TI also announced a white paper, “Power delivery trade-offs when preparing for the next wave of AI computing growth,” and its collaboration with Nvidia to develop power management devices to support 800-VDC power architectures.
The solutions will be on display at Open Compute Summit (OCP), Oct. 13–16, in San Jose, California. TI is exhibiting at Booth #C17. The company will also participate in technology sessions, including the OCP Global Summit Breakout Session and OCP Future Technologies Symposium.
The post TI launches power management devices for AI computing appeared first on EDN.
100-V GaN transistors meet automotive standard

Infineon Technologies AG unveils its first gallium nitride (GaN) transistor family qualified to the Automotive Electronics Council (AEC) standard for automotive applications. The new CoolGaN automotive transistor 100-V G1 family, including high-voltage (HV) CoolGaN automotive transistors and bidirectional switches, meets AEC-Q101.
(Source: Infineon Technologies AG)
This supports Infineon’s commitment to provide automotive GaN solutions ranging from the low-voltage infotainment systems addressed by the new 100-V transistors to future HV products for onboard chargers and traction inverters. “Our 100-V GaN auto transistor solutions and the upcoming portfolio extension into the high-voltage range are an important milestone in the development of energy-efficient and reliable power transistors for automotive applications,” said Johannes Schoiswohl, Infineon’s head of the GaN business line, in a statement.
The new devices include the IGC033S10S1Q CoolGaN automotive transistor 100 V G1 in a 3 × 5-mm PQFN package, and the IGB110S10S1Q CoolGaN transistor 100 V G1 in a 3 × 3-mm PQFN. The IGC033S10S1Q features an Rds(on) of 3.3 mΩ and the IGB110S10S1Q has an Rds(on) of 11 mΩ. Other features include dual-side cooling, no reverse recovery charge, and ultra-low figures of merit.
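To gauge what those on-resistances mean thermally, here is a simple conduction-loss estimate; the 20-A load current is an illustrative assumption, not an Infineon rating, and switching losses are ignored.

```python
# I^2 * R conduction loss for each part at an assumed 20-A load (illustrative only).
i_load_a = 20.0
for part, rdson_mohm in (("IGC033S10S1Q", 3.3), ("IGB110S10S1Q", 11.0)):
    p_cond_w = i_load_a ** 2 * (rdson_mohm / 1000.0)
    print(f"{part}: ~{p_cond_w:.1f} W at {i_load_a:.0f} A")
```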
These GaN e-mode power transistors target automotive applications such as advanced driver assistance systems and new climate control and infotainment systems that require higher power and more efficient power conversion solutions. GaN power devices offer higher energy efficiency in a smaller form factor and lower system cost compared to silicon-based components, Infineon said.
The new family of 100-V CoolGaN transistors targets applications such as zone control and main DC/DC converters, high-performance auxiliary systems, and Class-D audio amplifiers. Samples of the pre-production automotive-qualified product range are now available. Infineon will showcase its automotive GaN solutions at OktoberTech Silicon Valley, October 16, 2025.
The post 100-V GaN transistors meet automotive standard appeared first on EDN.
Voltage-to-period converter offers high linearity and fast operation

The circuit in Figure 1 converts the input DC voltage into a pulse train. The period of the pulses is proportional to the input voltage with a 50% duty cycle and a nonlinearity error of 0.01%. The maximum conversion time is less than 5 ms.
Figure 1 The circuit uses an integrator and a Schmitt trigger with variable hysteresis to convert a DC voltage into a pulse train where the period of the pulses is proportional to the input voltage.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The circuit is made of four sections. The op-amp IC1 and resistors R1 to R5 create two reference voltages for the integrator.
The integrator, built with IC2, RINT, and CINT, generates two linear ramps. Switch S1 changes the direction of the current going to the integrating capacitor; in turn, this changes the direction of the linear ramps. The rest of the circuit is a Schmitt trigger with variable hysteresis. The low trip point VLO is fixed, and the high trip point VHI is variable (the input voltage VIN comes in there).
The signal coming from the integrator sweeps between the two trip points of the trigger at an equal rate and in opposite directions. Since R4 = R5, the duty cycle is 50% and the transfer function makes the output period directly proportional to VIN.
To start oscillations, a start-up condition on the reference network formed by R1 through R5 must be satisfied when the circuit first receives power.
Figure 2 shows that the transfer function of the circuit is perfectly linear (the R² factor equals unity). In reality, there are slight deviations around the straight line; with respect to the span of the output period, these deviations do not exceed ± 0.01%. The slope of the line can be adjusted to 1000 µs/V by R2, and the offset can be easily cancelled by the microcontroller (µC).

Figure 2 The transfer function of the circuit in Figure 1. It is very linear and can be easily adjusted via R2.
Figure 1 shows that the µC converts period T into a number by filling the period with clock pulses of frequency fCLK = 1 MHz. It also adds 50 to the result to cancel the offset. The range of the obtained numbers is from 200 to 4800, i.e., the resolution is 1 count per mV.
Resolution can be easily increased by a factor of 10 by setting the clock frequency to 10 MHz. The great thing is that the nonlinearity error and conversion time remain the same, which is not possible for the voltage-to-frequency converters (VFCs). Here is an example.
Assume that a voltage-to-period converter (VPC) generates pulse periods T = 5 ms at a full-scale input of 5 V. Filling the period with 1 MHz clock pulses produces a number of 5000 (N = T * fCLK). The conversion time is 5 ms, which is the longest for this converter. As we already know, the nonlinearity is 0.01%.
Now consider a VFC which produces a frequency f = 5 kHz at a 5-V input. To get the number of 5000, this signal must be gated by a signal that is 1 second long (N = tG * f). Gate time is the conversion time.
The nonlinearity in this case is 0.002 % (see References), which is five times better than VPC’s nonlinearity. However, conversion time is 200 times longer (1 s vs. 5 ms). To get the same number of pulses N for the same conversion time as the VPC, the full-scale frequency of the VFC must go up to 1 MHz. However, nonlinearity at 1 MHz is 0.1%, ten times worse than VPC’s nonlinearity.
The contrast becomes more pronounced when the desired number is moved up to 50,000. Using the same analysis, it becomes clear that the VPC can do the job 10 times faster with 10 times better linearity than the VFCs. An additional advantage of the VPC is the lower cost.
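The same comparison, restated as a short script using the figures from the example above:

```python
# VPC vs. VFC: counts and conversion time for the numbers used in the text.
f_clk = 1e6                      # 1-MHz counting clock

t_vpc = 5e-3                     # VPC full-scale period, 5 ms
n_vpc = t_vpc * f_clk            # 5000 counts; conversion time = 5 ms
print(f"VPC: {n_vpc:.0f} counts in {t_vpc * 1e3:.0f} ms")

f_vfc = 5e3                      # VFC full-scale frequency, 5 kHz
gate_time_s = n_vpc / f_vfc      # gate time needed for the same 5000 counts
print(f"VFC: {n_vpc:.0f} counts need a {gate_time_s:.0f}-s gate "
      f"({gate_time_s / t_vpc:.0f}x longer than the VPC)")
```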
If you plan to use the circuit, pay attention to the integrating capacitor. As CINT participates in the transfer function, it should be carefully selected in terms of tolerance, temperature stability, and dielectric material.
Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently, he teaches electrical and electronics courses at a Toronto community college.
Related Content
- Voltage-to-period converter improves speed, cost, and linearity of A-D conversion
- Circuits help get or verify matched resistors
- RMS stands for: Remember, RMS measurements are slippery
References:
- AD650 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Analog Devices; www.analog.com
- VFC320 voltage-to-frequency and frequency-to-voltage converter. Data sheet from Burr-Brown; www.ti.com
The post Voltage-to-period converter offers high linearity and fast operation appeared first on EDN.
“Flip ON Flop OFF” for 48-VDC systems with high-side switching

My Design Idea (DI), “Flip ON Flop OFF for 48-VDC systems,” was published and referenced Stephen Woodward’s earlier “Flip ON Flop OFF” circuit. Other DIs published on this subject were for voltages below 15 V, which is the voltage limit for CMOS ICs, while my DI was intended for higher DC voltages, typically 48 VDC. In that earlier DI, the ground line is switched, which means the input and output grounds are different. This is acceptable for many applications since the voltage is small and will not require earthing.
However, some readers in the comments section wanted a scheme to switch the high side, keeping the ground the same. To satisfy such a requirement, I modified the circuit as shown in Figure 1, where input and output grounds are kept the same and switching is done on the positive line side.

Figure 1 VCC is around 5 V and should be connected to the VCC of the ICs U1 and U2. The grounds of ICs U1 and U2 should also be connected to ground (connection not shown in the circuit). Switching is done in the high side, and the ground is the same for the input and output. Note, it is necessary for U1 to have a heat sink.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In this circuit, the voltage divider formed by R5 and R7 sets the voltage at around 5 V at the emitter of Q2 (at VCC). This voltage is applied to ICs U1 and U2. A precise setting is not important, as these ICs can operate from 3 to 15 V. R2 and C2 are for the power-ON reset of U1. R1 and C1 are for the push button (PB) switch debounce.
When you momentarily push PB once, the Q1-output of the U1 counter (not the Q1 FET) goes HIGH, saturating the Q3 transistor. Hence, the gate of Q1 (PMOSFET, IRF 9530N, VDSS=-100 V, IDS=-14 A, RDS=0.2 Ω) is pulled to ground. Q1 then conducts, and its output goes near 48 VDC.
Due to the 0.2-Ω RDS of Q1, there will be a small voltage drop depending on load current. When you push PB again, transistor Q3 turns OFF and Q1 stops conducting, and the voltage at the output becomes zero. Here, switching is done at the high side, and the ground is kept the same for the input and output sides.
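As a quick check on that drop, here is the arithmetic for an assumed load current (the 5-A figure is illustrative; the 0.2-Ω RDS(on) comes from the text):

```python
# Voltage drop and dissipation across Q1 at an assumed 5-A load (illustrative value).
rds_on_ohm = 0.2
i_load_a = 5.0

print(f"drop: {i_load_a * rds_on_ohm:.1f} V")              # ~1 V below the 48-V rail
print(f"dissipation: {i_load_a ** 2 * rds_on_ohm:.1f} W")  # size the FET's thermal path accordingly
```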
If galvanic isolation is required (this may not always be the case), you may connect an ON/OFF mechanical switch prior to the input. In this topology, on-load switching is taken care of by the PB-operated circuit, and the ON/OFF switch switches zero current only, so it does not need to be bulky. You can select a switch that passes the required load current. While switching ON, first close the ON/OFF switch and then operate PB to connect. While switching OFF, first push PB to disconnect and operate the ON/OFF switch.
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- Flip ON Flop OFF
- To press ON or hold OFF? This does both for AC voltages
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Another simple flip ON flop OFF circuit
- Flip ON Flop OFF for 48-VDC systems
The post “Flip ON Flop OFF” for 48-VDC systems with high-side switching appeared first on EDN.
A logically correct SoC design isn’t an optimized design

The shift from manual design to AI-driven, physically aware automation of network-on-chip (NoC) design can be compared to the evolution of navigation technology. Early GPS systems revolutionized road travel by automating route planning. These systems allowed users to specify a starting point and destination, aiming for the shortest travel time or distance, but they had a limited understanding of real-world conditions such as accidents, construction, or congestion.
The result was often a path that was correct, and minimized time or distance under ideal conditions, but not necessarily the most efficient in the real world. Similarly, early NoC design approaches automated connectivity, yet without awareness of physical floorplans or workloads as inputs for topology generation, they usually fell well short of delivering optimal performance.

Figure 1 The evolution of NoC design has many similarities with GPS navigation technology. Source: Arteris
Modern GPS platforms such as Waze or Google Maps go further by factoring in live traffic data, road closures, and other obstacles to guide travelers along faster, less costly routes. In much the same way, automation in system-on-chip (SoC) interconnects now applies algorithms that minimize wire length, manage pipeline insertion, and optimize switch placement based on a physical awareness of the SoC floorplan. This ensures that designs not only function correctly but are also efficient in terms of power, area, latency, and throughput.
The hidden cost of “logically correct”
As SoC complexity increases, the gap between correctness and optimization has become more pronounced. Designs that pass verification can still hide inefficiencies that consume power, increase area, and slow down performance. Just because a design is logically correct doesn’t mean it is optimized. While there are many tools to validate that a design is logically correct, both at the RTL and physical design stages, what tools are there to check for design optimization?
Traditional NoC implementations depend on experienced NoC design experts to manually determine switch locations and route the connections between the switches and all the IP blocks that the NoC needs to connect. Design verification (DV) tools can verify that these designs meet functional requirements, but subtle inefficiencies will remain undetected.
Wires may take unnecessarily long detours around blocks of IP, redundant switches may persist after design changes, and piecemeal edits often accumulate into suboptimal paths. None of these are logical errors that many of today’s EDA tools can detect. They are inefficiencies that impact area, power, and latency while remaining invisible to standard checks.
Manually designing an NoC is also both tedious and fragmented. A large design may take several days to complete. Expert designers must decide where to place switches, how to connect them, and when to insert pipeline stages to enable timing closure.
While they may succeed in producing a workable solution, the process is vulnerable to oversights. When engineers return to partially completed work, they may not recall every earlier decision, especially for work done by someone else on the team. As changes accumulate, inefficiencies mount.
The challenge compounds when SoC requirements shift. Adding or removing IP blocks is routine, yet in manual flows, such changes often force large-scale rework. Wires and switches tied to outdated connections often linger because edits rarely capture every dependency.
Correcting these issues requires yet more intervention, increasing both cost and time. Automating NoC topology generation eliminates these repetitive and error-prone tasks, ensuring that interconnects are optimized from the start.
Scaling with complexity
The need for automation grows as SoC architectures expand. Connecting 20 IP blocks is already challenging. At 50, the task becomes overwhelming. At 500, it’s practically impossible to optimize without advanced algorithms. Each block introduces new paths, bandwidth requirements, and physical constraints. Attempting this manually is no longer realistic.
Simplified diagrams of interconnects often give the impression of manageable scale. Reality is far more daunting, where a single logical connection may consist of 512, 1024, or even 2048 individual wires. Achieving optimized connectivity across hundreds of blocks requires careful balancing of wire length, congestion, and throughput all at once.
Another area where automation adds value is in regular topology generation. Different regions of a chip may benefit from different structures such as meshes, rings, or trees. Traditionally, designers had to decide these configurations in advance, relying on experience and intuition. This is much like selecting a fixed route on your GPS, without knowing how conditions may change.
Automation changes the approach. By analyzing workload and physical layout, the system can propose or directly implement the topology best suited for each region. Designers can choose to either guide these choices or leave the system to determine the optimal configuration. Over time, this flexibility may make rigid topologies less relevant, as interconnects evolve into hybrids tailored to the unique needs of each design.
In addition to initial optimization, adaptability during the design process is essential. As new requirements emerge, interconnects must be updated without requiring a complete rebuild. Incremental automation preserves earlier work while incorporating new elements efficiently, removing elements that are no longer required. This ability mirrors modern navigation systems, which reroute travelers seamlessly when conditions change rather than forcing them to restart route planning from scratch.
For SoC teams, the value is clear. Incremental optimization saves time, avoids unnecessary rework, and ensures consistency throughout the design cycle.

Figure 2 FlexGen smart NoC IP unlocks new performance and efficiency advantages. Source: Arteris
Closing the gap with smart interconnects
SoC development has benefited from decades of investment in design automation. Power analysis, functional safety, and workload profiling are well-established. However, until now, the complexity of manually designing and updating NoCs left teams vulnerable to inefficiencies that consumed resources and slowed progress. Interconnect designs were often logically correct, but rarely optimal.
Suboptimal wire length is one of the few classes of design challenges that some EDA tools still may not detect. NoC automation bridges this gap, eliminating these inefficiencies at the source and delivering wire lengths optimized to meet the throughput constraints of the design specification. By embedding intelligence into the interconnect backbone, design teams achieve solutions that are both correct and efficient, while reducing or even eliminating reliance on scarce engineering expertise.
NoCs have long been essential for connecting IP blocks in modern complex SoC designs, and they are often the cause of schedule delays and throughput bottlenecks. Smart NoC automation now transforms interconnect design by reducing risk for both the project schedule and its ultimate performance.
At the forefront of this change is smart interconnect IP created to address precisely these challenges. By automating topology generation, minimizing wire lengths, and enabling incremental updates, a smart interconnect IP like FlexGen closes the gap between correctness and optimization. As a result, engineering groups under pressure to deliver complex designs quickly gain a path to higher performance with less effort.
There is a difference between finding a path and finding the best path. In SoC design, that difference determines competitiveness in performance, power, and time-to-market, and smart NoC automation is what makes it possible.
Rick Bye is Director of Product Management and Marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.
Related Content
- SoC Interconnect: Don’t DIY!
- The network-on-chip interconnect is the SoC
- SoC interconnect architecture considerations
- SoCs Get a Helping Hand from AI Platform FlexGen
- Smarter SoC Design for Agile Teams and Tight Deadlines
The post A logically correct SoC design isn’t an optimized design appeared first on EDN.
AI Ethernet NIC drives trillion-parameter AI workloads

Broadcom Inc. introduces Thor Ultra, claiming the industry’s first 800G AI Ethernet network interface card (NIC). The Ethernet NIC, adopting the open Ultra Ethernet Consortium (UEC) specification, can interconnect hundreds of thousands of XPUs to drive trillion-parameter AI workloads.
The UEC modernized remote direct memory access (RDMA) for large AI clusters, which the Thor Ultra leverages, offering several RDMA innovations. These include packet-level multipathing for efficient load balancing, out-of-order packet delivery directly to XPU memory for maximizing fabric utilization, selective retransmission for efficient data transfer, and programmable receiver-based and sender-based congestion control algorithms.
By providing these advanced RDMA capabilities in an open ecosystem, Thor Ultra allows customers to connect it to their choice of XPUs, optics, or switches and to reduce dependency on proprietary, vertically integrated solutions, Broadcom said.
(Source: Broadcom Inc.)
The Thor Ultra joins Broadcom’s Ethernet AI networking portfolio, including Tomahawk 6, Tomahawk 6-Davisson, Tomahawk Ultra, Jericho 4, and Scale-Up Ethernet (SUE), as part of an open ecosystem for large scale high-performance XPU deployments.
The Thor Ultra Ethernet NIC is available in standard PCIe CEM and OCP 3.0 form factors. It offers 200G or 100G PAM4 SerDes with support for long-reach passive copper, and claims the industry’s lowest bit error rate SerDes, reducing link flaps and accelerating job completion time.
Other features include a PCI Express Gen 6 ×16 host interface, programmable congestion control pipeline, secure boot with signed firmware and device attestation, and line-rate encryption and decryption with PSP offload, which relieves the host/XPU of compute-intensive tasks.
The Ethernet NIC also provides packet trimming and congestion signaling support with Tomahawk 5, Tomahawk 6, or any UEC compliant switch. Thor Ultra is now sampling.
The post AI Ethernet NIC drives trillion-parameter AI workloads appeared first on EDN.
Power design tools ease system development

Analog Devices, Inc. (ADI) launches its ADI Power Studio, a family of products that offers advanced modeling, component recommendations, and efficiency analysis with simulation to help streamline power management design and optimization. ADI also offers early versions of two new web-based tools as part of Power Studio.
The web-based ADI Power Studio Planner and ADI Power Studio Designer tools, together with the full ADI Power Studio portfolio, are designed to streamline the entire power system design process from initial concept through measurement and evaluation. The Power Studio portfolio also features ADI’s existing desktop and web-based power management tools, including LTspice, SIMPLIS, LTpowerCAD, LTpowerPlanner, EE-Sim, LTpowerPlay, and LTpowerAnalyzer.
(Source: Analog Devices Inc.)
The Power Studio tools address key challenges in designing electronic systems with dozens of power rails and interdependent voltage domains, which create greater design complexity. These complexities often force rework during architecture decisions, component selection, and validation, ADI said.
Power Studio addresses these challenges by providing a workflow that helps engineering teams make better decisions earlier by simulating real-world performance with accurate models and automating key outputs such as bill of materials and report generation, helping to reduce rework.
The ADI Power Studio Planner web-based tool targets system-level power tree planning. It provides an interactive view of the system architecture, making it easier to model power distribution, calculate power loss, and analyze system efficiency. Key features include intelligent parametric search and tradeoff comparisons.
The ADI Power Studio Designer is a web-based tool for IC-level power supply design. It provides optimized component recommendations, performance estimates, and tailored efficiency analysis. Built on the ADI power design architecture, Power Studio Designer offers guided workflows so engineers can set key parameters to build accurate models to simulate real-world performance, with support for both LTspice and SIMPLIS schematics, before moving to hardware.
Power Studio Planner and Power Studio Designer are available now as part of the ADI Power Studio. These tools are the first products released under ADI’s vision to deliver a fully connected power design workflow for customers. ADI plans to introduce ongoing updates and product announcements in the months ahead.
The post Power design tools ease system development appeared first on EDN.
Broadcom delivers Wi-Fi 8 chips for AI

Broadcom Inc. claims the industry’s first Wi-Fi 8 silicon solutions for the broadband wireless edge, including residential gateways, enterprise access points, and smart mobile clients. The company also announced the availability of its Wi-Fi 8 IP for license in IoT, automotive, and mobile device applications.
Designed for AI-era edge networks, the new Wi-Fi 8 chips include the BCM6718 for residential and operator access applications, the BCM43840 and BCM43820 for enterprise access applications, and the BCM43109 for edge wireless clients such as smartphones, laptops, tablets, and automotive systems. These new chips also include a hardware-accelerated telemetry engine, targeting AI-driven network optimization. This engine collects real-time data on network performance, device behavior, and environmental conditions.
(Source: Broadcom Inc.)
The engine is a critical input for AI models and can be used by customers to train and run inference on the edge or in the cloud for use cases such as measuring and optimizing quality of experience (QoE), strengthening Wi-Fi network security and anomaly detection, and lowering the total cost of ownership through predictive maintenance and automated optimization, Broadcom said.
Wi-Fi 8 silicon chips
The BCM6718 residential Wi-Fi access point chip features advanced eco modes for up to 30% greater energy efficiency and third-generation digital pre-distortion, which reduces peak power by 25%. Other features include a four-stream Wi-Fi 8 radio, receiver sensitivity enhancements enabling faster uploads, BroadStream wireless telemetry engine for AI training/inference, and BroadStream intelligent packet scheduler to maximize QoE. It also provides full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications.
The BCM43840 (four-stream Wi-Fi 8 radio) and BCM43820 (two-stream scanning and analytics Wi-Fi 8 radio) enterprise Wi-Fi access point chips also feature advanced eco modes and third-generation digital pre-distortion, a BroadStream wireless telemetry engine for AI training/inference, and full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications. They also provide an advanced location tracking capability.
The highly-integrated BCM43109 dual-core Wi-Fi 8, high-bandwidth Bluetooth, and 802.15.4 combo chip is optimized for mobile handset applications. The combo chip offers non-primary channel access for latency reduction and improved low-density parity check coding to extend gigabit coverage. It also provides full compliance to IEEE 802.11bn and WFA Wi-Fi 8 specifications, along with 802.15.4 support including Thread V1.4 and Zigbee Pro, and Bluetooth 6.0 high data throughput and higher-bands support. Other key features include a two-stream Wi-Fi 8 radio with 320-MHz channel support, enhanced long range Wi-Fi, and sensing and secure ranging.
The Wi-Fi 8 silicon is currently sampling to select partners. The Wi-Fi IP is currently available for licensing, manufacture, and use in edge client devices.
The post Broadcom delivers Wi-Fi 8 chips for AI appeared first on EDN.
Microchip launches PCIe Gen 6 switches

Microchip Technology Inc. expands its Switchtec PCIe family with its next-generation Switchtec Gen 6 PCIe fanout switches, supporting up to 160 lanes for high-density AI systems. Claiming the industry’s first PCIe Gen 6 switches manufactured using a 3-nm process, the Switchtec Gen 6 family features lower power consumption and advanced security features, including a hardware root of trust and secure boot with post-quantum-safe cryptography compliant with the Commercial National Security Algorithm Suite (CNSA) 2.0.
The PCIe 6.0 standard doubles the bandwidth of PCIe 5.0 to 64 GT/s per lane, making it suited for AI workloads and high-performance computing applications that need faster data transmission and lower latency. It also adds flow control unit (FLIT) mode, a lightweight forward-error-correction (FEC) system, and dynamic resource allocation, enabling more efficient and reliable data transfer, particularly for small packets in AI workloads.
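For context, here is the raw per-direction arithmetic for a x16 link at Gen 5 versus Gen 6 line rates; these are line-rate figures only, before FLIT, FEC, and protocol overhead.

```python
# Raw per-direction bandwidth of a x16 link (line rate only; encoding/FLIT/FEC overhead
# reduce the usable figure somewhat).
lanes = 16
for gen, gt_per_s in (("PCIe 5.0", 32), ("PCIe 6.0", 64)):
    gb_per_s = gt_per_s * lanes / 8   # one bit per transfer per lane -> bytes
    print(f"{gen}: {gt_per_s} GT/s x{lanes} -> ~{gb_per_s:.0f} GB/s per direction")
```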
As a high-performance interconnect, the Switchtec Gen 6 PCIe switches, Microchip’s third-generation PCIe switch, enable high-speed connectivity between CPUs, GPUs, SoCs, AI accelerators, and storage devices, reducing signal loss and maintaining the low latency required by AI fabrics, Microchip said.
Though there are no production CPUs with PCIe Gen 6 support on the market yet, Microchip wanted to make sure all of the infrastructure components are ready in advance of PCIe Gen 6 servers.
“This breakthrough is monumental for Microchip, establishing us once again as a leader in data center connectivity and broad infrastructure solutions,” said Brian McCarson, corporate vice president of Microchip’s data center solutions business unit.
Offering full PCIe Gen 6 compliance, which includes FLIT, FEC, 64-GT/s PAM4 signaling, deferrable memory, and 14-bit tag, the Switchtec Gen 6 PCIe switches feature 160 lanes, 20 ports, and 10 stacks, with each port featuring hot- and surprise-plug controllers. Also available are 144-lane variants. These switches support non-transparent bridging to connect and isolate multiple host domains and multicast for one-to-many data distribution within a single domain. They are suited for high-performance compute, cloud computing, and hyperscale data centers.
(Source: Microchip Technology Inc.)
Multicast support is a key feature of the next-generation switch. Not all switch providers have multicast capability, McCarson said.
“Without multicast, if a CPU needs to communicate to two drives because you want to have backup storage, it has to cast to one drive and then cast to the second drive,” McCarson said. “With multicast, you can send a signal once and have it cast to multiple drives.
“Or if the GPU and CPU have to communicate but you need to have all of your GPUs networked together, the CPU can communicate to an entire bank of GPUs or vice versa if you’re operating through a switch with multicast capability,” he added. “Think about the power savings from not having a GPU or CPU do the same thing multiple times day in, day out.”
McCarson said customers are interested in PCIe Gen 6 because they can double the data rate, but when they look at the benefits of multicast, it could be even bigger than doubling the data rates in terms of efficient utilization of their CPU and GPU assets.
Other features include advanced error containment, comprehensive diagnostics and debug capabilities, several I/O interfaces, an integrated MIPS processor, and bifurcation options at x8 and x16. Input and output reference clocks are based on PCIe stacks with four input clocks per stack.
Higher performance
The Switchtec Gen 6 product delivers on performance in signal integrity, advanced security, and power consumption.
PCIe 6.0 uses PAM4 signaling, which enables the doubling of the data rate, but it can also reduce the signal-to-noise ratio, causing signal integrity issues. “Signal integrity is one of the key factors when you’re running this higher data rate,” said Tam Do, technical engineer, product marketing for Microchip’s Data Center Solutions business unit.
The signal-loss, or insertion-loss, budget set by the PCIe 6.0 spec is 32 dB. The new switch meets the spec thanks in part to its SerDes design and Microchip’s recommended pinout and package layout, according to Do.
In addition, Microchip added post-quantum cryptography to the new chip, which is not part of the PCIe standard, to meet customer requirements for a higher level of security, Do said.
The PCIe switch also offers lower power consumption, thanks to the 3-nm process, than competing PCIe Gen 6 devices built on older technology nodes.
Development tools include Microchip’s ChipLink diagnostic tools, which provide debug, diagnostics, configuration, and analysis through an intuitive graphical user interface. ChipLink connects via in-band PCIe or sideband signals such as UART, TWI, and EJTAG. Also available is the PM61160-KIT Switchtec Gen 6 PCIe switch evaluation kit with multiple interfaces.
Switchtec Gen 6 PCIe switches (x8 and x16 bifurcation) and an evaluation kit are available for sampling to qualified customers. A low-lane-count version with 64 and 48 lanes with x2, x4, x8, x16 bifurcation for storage and general enterprise use cases will also be available in the second quarter of 2026.
The post Microchip launches PCIe Gen 6 switches appeared first on EDN.
Amps x Volts = Watts
Analog topologies abound for converting current to voltage, voltage to current, voltage to frequency, and frequency to voltage, among other conversions.
Figure 1 joins the flock while singing a somewhat different tune. This current, voltage, and power (IVW) DC power converter multiplies current by voltage to sense wattage. Here’s how it gets off the ground.

Figure 1 The “I*V = W” converter comprises voltage-to-frequency conversion (U1ab & A1a) with frequency (F) of 2000 * Vload, followed by frequency-to-voltage conversion (U1c & A1b) with Vw = Iload * F / 20000 = (Iload * Vload) / 10 = Watts / 10 where Vload < 33 V and Iload < 1.5 A.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The basic topology of the IVW converter comprises a voltage-to-frequency converter (VFC) cascaded with a frequency-to-voltage converter (FVC). U1ab and A1a, combined with the surrounding discretes (Q1, Q2, Q3, etc.), make a VFC similar to the one described in this previous Design Idea, “Voltage inverter design idea transmogrifies into a 1MHz VFC”
The U1ab, A1a, C2, etc., VFC forms an inverting charge pump feedback loop that actively balances the 1 µA/V current through R2. Each cycle of the VFC deposits a charge of 5 V * C2, or 500 picocoulombs (pC), onto integrator capacitor C3 to produce an F of 2 kHz * Vload (= 1 µA / 500 pC) for the control signal input of the FVC switch U1c.
The other input to the U1c FVC is the -100 mV/A current-sense signal from R1. This combo forces U1c to pump F * -0.1 V/amp * 500 pF = -2 kHz * Vload * 50 pC * Iload into the input of the A1b inverting integrator.
The melodious result is:
Vw = R1 * Iload * 2000 * Vload * R6 * C6
or,
Vw = Iload * Vload * 0.1 * 2000 * 1 MΩ * 500 pF = 100 mV/W.
The R6C5 = 100-ms integrator time constant provides >60 dB of ripple attenuation for Vload > 1 V and a low-noise 0-V to 5-V output suitable for consumption by a typical 8- to 10-bit ADC input. Diode D1 provides fire insurance for U1 in case Vload gets shorted to ground.
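And as a minimal sketch of the end-to-end scale factor (assuming the component values quoted above: R1 = 0.1 Ω, R6 = 1 MΩ, and the 500-pF pump capacitor), the ideal DC transfer function lands exactly on 100 mV/W:

```python
# Ideal DC transfer function of the IVW converter, using the values quoted in the text.
R1, R6, C_PUMP = 0.1, 1e6, 500e-12   # ohms, ohms, farads
F_PER_VOLT = 2000.0                  # VFC gain, Hz per volt of Vload

def vw(vload, iload):
    """Output voltage for a given load voltage and current (100 mV per watt)."""
    return R1 * iload * F_PER_VOLT * vload * R6 * C_PUMP

print(vw(10.0, 1.0))    # 10-W load    -> 1.0 V out
print(vw(33.0, 1.5))    # ~49.5-W load -> ~4.95 V, near the 5-V full scale
```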
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Voltage inverter design idea transmogrifies into a 1MHz VFC
- A simulated 100-MHz VFC
- A simple, accurate, and efficient charge pump voltage inverter for $1 (in singles)
- 100-MHz VFC with TBH current pump
The post Amps x Volts = Watts appeared first on EDN.
Inside Walmart’s onn. 4K Plus: A streaming device with a hidden bonus
Walmart onn. coverage
Walmart’s onn. (or is it now just “onn”?) line of streaming media boxes and sticks is regularly represented here at Brian’s Brain, for several good reasons. They’re robustly featured, notably more economical than Google’s own Android TV-now-Google TV offerings, and frequently price-undershoot competitive devices from companies like Apple and Roku, too. Most recently, from a “box” standpoint, I took apart the company’s high-end onn. 4K Pro for publication at EDN in July, following up on the entry-level onn. 4K, which had appeared in April. And, within a subsequent August-published teardown of Google’s new TV Streamer 4K, I also alluded to an upcoming analysis of Walmart’s mid-tier onn. 4K Plus.
An intro to the onn.
That time is now. And “mid-tier” is subjective. Hold that thought until later in the write-up. For now, I’ll start with some stock shots:

Here’s how Walmart slots the “Plus” within its current portfolio of devices:

Note that, versus the Pro variant, at least in its final configuration, the remote control is not backlit this time. I was about to say that I guess we now know where the non-backlit remotes for the initial production run(s) of the Pro came from, although this one’s got the Free TV button, so it’s presumably a different variant from the other two, too (see what I did there?). Stand by.
And hey, how about a promo video too, while we’re at it?
Now for some real-life photos. Box shots first:




Is it wrong…

that I miss the prior packaging, even though there’s no longer a relevant loop on top of the box?

I digress. Onward:

Time to dive inside:

Inside is a two-level tray, with our patient (and its companion wall wart) on top, along with a sliver of literature:


Flip the top half over:

and the rest of the kit comes into view: a pair of AA batteries, an HDMI cable, and the aforementioned remote control:

Since I just “teased” the remote control, let’s focus on that first, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:





All looks the same as before so far, eh? Well then, last but not least, let’s look at the back:



Specifically, what does the product-code sticker say this time?

Yep, v2.32, different than the predecessors. Here’s the one in the baseline onn. 4K (v2.15, if you can’t read the tiny print):

And the two generations that ship(ped) with the 4K Pro. The initial one (v2.26):

And subsequently, the one whose fuller feature set matched the from-the-beginning advertising (v2.30):

Skipping past the HDMI cable and AA battery set (you’re welcome), here’s the wall wart:

Complete with a “specs” close-up,

whose connector, believe it or not, marks the third iteration within the same product generation: micro-USB for the baseline 4K model:

“Barrel” for the 4K Pro variant:

And this time, USB-C:

I would not want to be the person in charge of managing onn. product contents inventory…
Finally, our patient, first still adorned with its protective translucent, instructions-augmented plastic attire:

And now, stark nekkid. Top:

Front:

Bare left side:

Back: left-to-right are the reset switch, HDMI output, and USB-C input. Conceptually, you could seemingly tether the latter to an OTG (on-the-go) splitter, thereby enabling you to (for example) feed the device with both power and data coming from an external storage device, but in practice, it’s apparently hit-and-miss at best:

And equally bare right side:

There’s one usual externally visible adornment that we haven’t yet seen. Can you guess what it is before reading the following sentence?
Yes, clever-among-you, that’s right: it’s the status LED. Flip the device over and…there it be:

Now for closeups of the underside marking and (in the second) the aforementioned LED, which is still visible from the front of the device when illuminated because it’s on a beveled edge:

Enough of the teasing. Let’s get inside. For its similar-form-factor mainstream 4K precursor, I’d gone straight to the exposed circumference gap between the two halves. But I couldn’t resist a preparatory peek underneath the rubber feet that taunted me this time:

Nope. No screw heads here:

Back to Plan B:

There we go, with only a bit of collateral clip-snipped damage:


The inside of the bottom half of the case is bland, unless you’re into translucent LED windows:

The other half of the previous photo is much more interesting (at least to me):

Three more screws to go…

And the PCB then lifts right out of the enclosure’s remaining top half:

Allowing us to see the PCB topside for the first time:


Here are those two PCB sides again, now standalone. Bottom:

and top:

Much as (and because) I know you want me to get to ripping the tops off those Faraday cages, I’ll show you some side shots first. Right:

Front; check out those Bluetooth and Wi-Fi antennae, reminiscent of the ones in the original 4K:

Left:

And back:

Let’s pop the top off the PCB bottom-side cage first:

Pretty easy; I managed that with just my fingernail and a few deft yanks:


At the bottom is the aforementioned LED:

And within the cage boundaries,

are two ICs of particular note. The first is an 8-Gbit (1-GByte) Micron DDR4 SDRAM labeled as follows:
41R77
D8BPK
And below it is the nonvolatile-memory counterpart, a FORESEE FEMDNN016G 16-GByte eMMC.
Now for the other (top) side. As you likely already noticed from the side shots, the total cage height here is notably greater than that of its bottom-side counterpart. That’s because, unsurprisingly, there’s a heat sink stuck on top of it. Heat rises, after all; I already suspected, even before not finding the application processor inside the bottom-side cage, that we’d find it here instead.
My initial attempts at popping off the cage-plus-heatsink sandwich using traditional methods—first my fingernail, followed by a Jimmy—were for naught, threatening only to break my nail and bend the blade, as well as to damage the PCB alongside the cage base. I then peeked under the sticker attached to the top of the heatsink to see if it was screwed down in place. Nope:

Eventually, by jamming the Jimmy in between the heatsink and cage top, I overcame the recalcitrant adhesive that to that point had succeeded in keeping them together:




Now, the cage came off much more easily. In retrospect, it was the combined weight of the two pieces (predominantly the heatsink, a hefty chunk of metal) that had seemingly made my prior efforts for naught:



At the bottom, straddling the two aforementioned antennae, is the same Fn-Link Technology 6252B-SRB wireless communications module that we’d found in the earlier 4K Pro teardown:

And inside the cage? Glad you asked:

To the left is the other 8 Gbit (1 GByte) Micron DDR4 SDRAM. And how did I know they’re both DDR4 in technology, by the way? That’s because it’s the interface generation that mates up with the IC on the right, the application processor, which is perhaps the most interesting twist in this design. It’s the Amlogic S905X5M, an upgrade to the S905X4 found in the 4K Pro. It features a faster quad-core Arm Cortex-A55 CPU cluster (2.5 GHz vs. 2 GHz), which justifies the beefy heatsink, and an enhanced GPU core (Arm Mali-G310 v2 vs. Arm Mali-G31 MP2).
The processing enhancements bear fruit when you look at the benchmark comparisons. Geekbench improvements for the onn. 4K Plus scale linearly with the CPU clock-speed boost:
GFXBench comparative results, meanwhile, also factor in the graphics subsystem enhancements:

I’d be remiss if I didn’t also point out the pricing disparity between the two systems: the 4K Plus sells for $29.88 while the 4K Pro is normally priced $20 more than that ($49.88), although as I type these words, it’s promotion-priced at 10% off, $44.73. Folks primarily interested in gaming on Google TV platforms, whether out-of-the-box or post-jailbreaking, are understandably gravitating toward the cheaper, more computationally capable 4K Plus option.
That said, the 4K Pro also has 50% more DRAM and twice the storage, along with an integrated wired Ethernet connectivity option and other enhancements, leaving it the (potentially, at least) better platform for general-purpose streaming box applications, if price isn’t a predominant factor.
That wraps up what I’ve got for you today. I’ll keep the system disassembled for now in case readers have any additional parts-list or other internal details questions once the write-up is published. And then, keeping in mind the cosmetic-or-worse damage I did getting the heatsink and topside cage off, I’ll put it back together to determine whether its functionality was preserved. One way or another, I’ll report back the results in the comments. And speaking of which, I look forward to reading your thoughts there, as well.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Perusing Walmart’s onn. 4K Pro Streaming Device with Google TV: Storage aplenty
- Walmart’s onn. full HD streaming device: Still not thick, just don’t call it a stick
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
- Walmart’s onn. FHD streaming stick: Still Android TV, but less thick
- Walmart’s onn. 4K streaming box: A Google TV upgrade doesn’t clobber its cost
The post Inside Walmart’s onn. 4K Plus: A streaming device with a hidden bonus appeared first on EDN.
Tesla’s wireless-power “dream” gets closer to reality—maybe

You are likely at least slightly aware of the work that famed engineer, scientist, and researcher Nikola Tesla did in the early 1900s in his futile attempt to wirelessly transmit usable power via a 200-foot tower. The project is described extensively on many credible web sites, such as “What became of Nikola Tesla’s wireless dream?” and “Tesla’s Tower at Wardenclyffe” as well as many substantive books.
Since Tesla, there have been numerous other efforts to transmit power without wires using RF (microwave and millimeter waves) and optical wavelengths. Of course, both “bands” are wireless and governed by Maxwell’s equations, but there are very different practical implications.
Proponents of wireless transmitted power see it as a power-delivery source for both stationary and moving targets including drones and larger aircraft—very ambitious objectives, for sure. We are not talking about near-field charging for devices such as smartphones, nor the “trick” of wireless lighting of a fluorescent bulb that is positioned a few feet away from a desktop Tesla coil. We are talking about substantial distances and power.
Most early efforts to beam power were confined to microwave frequencies due to the available technologies. However, microwaves require relatively large antennas to focus the transmitted beam, so millimeter waves or optical links are likely to work better.
The latest efforts and progress have been in the optical spectrum. These systems use a fiber-optic-based laser for a tightly confined beam. The “receivers” for optical power transmission are specialized photovoltaic cells optimized to convert a very narrow wavelength of light into electric power with very high efficiency. The reported efficiencies can exceed 70%, more than double that of a typical broader-spectrum solar cell.
In one design from Powerlight Technologies, the beam is contained within a virtual enclosure that senses an object impinging on it—such as a person, bird, or even airborne debris—and triggers the equipment to cut power to the main beam before any damage is done (Figure 1). The system monitors the volume the beam occupies, along with its immediate surroundings, allowing the power link to automatically reestablish itself when the path is once again clear.

Figure 1 This free-space optical-power path link includes a safety “curtain” which cuts off the beam within a millisecond if there is a path interruption. Source: Powerlight Technologies
Although this is nominally listed as a “power” project, as with any power-related technology, there’s a significant amount of analog-focused circuitry and components involved. These provide raw DC power to the laser driver and to the optical-conversion circuits, lasers, overall system management at both ends, and more.
Recent progress raises effectiveness
In May 2025, DARPA’s Persistent Optical Wireless Energy Relay (POWER) program achieved several new records for transmitting power over distance in a series of tests in New Mexico. The team’s POWER Receiver Array Demo (PRAD) recorded more than 800 watts of power delivered during a 30-second transmission from a laser 8.6 kilometers (5.3 miles) away. Over the course of the test campaign, more than a megajoule of energy was transferred.
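Some quick arithmetic puts those numbers in perspective (my own back-of-the-envelope figures, not DARPA’s):

```python
# Back-of-the-envelope check on the PRAD figures quoted above.
power_w = 800.0       # >800 W received
duration_s = 30.0     # during a 30-second transmission

energy_per_burst_kj = power_w * duration_s / 1e3
print(energy_per_burst_kj)          # 24.0 kJ per 30-second burst

campaign_energy_j = 1.0e6           # "more than a megajoule" over the test campaign
equivalent_bursts = campaign_energy_j / (power_w * duration_s)
print(round(equivalent_bursts, 1))  # ~41.7 full-power bursts' worth of energy
```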
In the never-ending power-versus-distance challenge, the previous greatest reported distance records for an appreciable amount of optical power (>1 microwatt) were 230 watts of average power at 1.7 kilometers for 25 seconds and a lesser (but undisclosed) amount of power at 3.7 kilometers (Figure 2).

Figure 2 The POWER Receiver Array Demo (PRAD) set the records for power and distance for optical power beaming; the graphic shows how it compares to previous notable efforts. Source: DARPA
To achieve the power and distance record, the receiver array used a new receiver technology designed by Teravec Technologies with a compact aperture for the laser beam to shine into, ensuring that very little light escapes once it has entered the receiver. Inside the receiver, the laser strikes a parabolic mirror that reflects the beam onto dozens of photovoltaic cells to convert the energy back to usable power (Figure 3).

Figure 3 In the optical power-beaming receiver designed for PRAD, the laser enters the center aperture, strikes a parabolic mirror, and reflects onto dozens of photovoltaic cells (left) arranged around the inside of the device to convert the energy back to usable power (right). Source: Teravec Technologies
While it may seem logical to use a mirror or lens when it comes to redirecting laser beams, the project team instead found that diffractive optics were a better choice because they are good at efficiently handling monochromatic wavelengths of light. They used additive manufacturing to create optics and included an integrated cooling system.
Further details on this project are hard to come by, but that’s almost beside the point. The key message is that there has been significant progress. As is usually the case, some of it leverages progress in other disciplines, and much of it is “home made.” Nonetheless, there are significant technical costs, efficiency burdens, and limitations due to atmospheric density—especially at lower altitudes and ground level.
Do you think advances in various wireless-transmission components and technologies will reach the point where this is a viable power-delivery approach for broader uses beyond highly specialized ones? Can it be made to work for moving targets as well as stationary ones? Or will this be one of those technologies where success is always “just around the corner”? And finally, is there any relationship between this project and the work on directed laser energy systems to “shoot” drones out of the sky, which has parallels to the beam generation/emission part?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- What…You’re Using Lasers for Area Heating?
- Forget Tesla coils and check out Marx generators
- Pulsed high-power systems are redefining weapons
- Measuring powerful laser output takes a forceful approach
The post Tesla’s wireless-power “dream” gets closer to reality—maybe appeared first on EDN.
Intel releases more details about Panther Lake AI processor

Intel Corp. unveils new details about its next-generation client processor for AI PCs, the Core Ultra series 3, code-named Panther Lake, which is expected to begin shipping later this year. The company also gave a peek into its Xeon 6+ server processor, code-named Clearwater Forest, expected to launch in the first half of 2026.
Core Ultra series 3 client processor (Source: Intel Corp.)
Panther Lake is the company’s first product built on the advanced Intel 18A semiconductor process, the first 2-nanometer-class node manufactured in the United States. It delivers up to 15% better performance per watt and 30% improved chip density compared with Intel 3, thanks to two key advances—RibbonFET and PowerVia.
The RibbonFET transistor architecture, Intel’s first new transistor architecture in more than a decade, delivers greater scaling and more efficient switching for better performance and energy efficiency. The PowerVia backside power-delivery system improves power flow and signal delivery.
Also contributing to its greater flexibility and scalability is Foveros, Intel’s advanced packaging and 3D chip stacking technology for integrating multiple chiplets into advanced SoCs.
Panther Lake
The Core Ultra series 3 processors offer scalable AI PC performance, targeting a range of consumer and commercial AI PCs, gaming devices, and edge solutions. Intel said the multi-chiplet architecture offers flexibility across form factors, segments, and price points.
The Panther Lake processors offer Lunar Lake-level power efficiency and Arrow Lake-class performance, according to Intel. They offer up to 16 CPU cores, up to 96 GB of LPDDR5, and up to 180 TOPS across the platform. They also feature new P- and E-cores, along with a new GPU and next-generation IPU 7.5 and NPU 5, delivering higher performance and greater efficiency than previous generations.
Key features include up to 16 new performance-cores (P-cores) and efficient-cores (E-cores) delivering more than 50% faster CPU performance versus the previous generation; 30% lower power consumption versus Lunar Lake; and a new Intel Xe3 Arc GPU with up to 12 Xe cores delivering more than 50% faster graphics performance versus the previous generation, along with up to 12 ray tracing units and up to 16 MB of L2 cache.
Panther Lake also features the next-gen NPU 5 with up to 50 trillion operations per second (50 TOPS), offering >40% better TOPS/area versus Lunar Lake and 3.8× the TOPS of Arrow Lake-H.
The IPU 7.5 offers AI-based noise reduction and local tone mapping. It delivers 16-MP stills and 120 frames per second slow motion and supports up to three concurrent cameras. It also offers a 1.5-W reduction in power with hardware staggered HDR compared to Lunar Lake.
Other features include enhanced power management, up to 12 lanes of PCIe 5.0, integrated Thunderbolt 4, integrated Intel Wi-Fi 7 (R2) and dual Intel Bluetooth Core 6, and LPCAMM support.
Panther Lake will also extend to edge applications, including robotics, Intel said. A new Intel Robotics AI software suite and reference board are available with AI capabilities to develop robots using Panther Lake for both controls and AI/perception. The suite includes vision libraries, real-time control frameworks, AI inference engines, orchestration-ready modules, and hardware-aware tuning.
Panther Lake will begin ramping high-volume production this year, with the first SKU scheduled to ship before the end of the year. General market availability will start in January 2026.
Recommended: Intel’s confidence shows as it readies new processors on 18A
Clearwater Forest
Intel also provided a sneak peek into the Xeon 6+, its first 18A-based server processor. It is also touted as the company’s most efficient server processor. Both Panther Lake and Clearwater Forest, built on Intel 18A, are being manufactured at Intel’s new Fab 52, the company’s fifth high-volume fab at its Ocotillo campus in Chandler, Arizona.
Xeon 6+ server processor (Source: Intel Corp.)
Clearwater Forest is Intel’s next-generation E-core processor, featuring up to 288 E-cores and a 17% increase in instructions per cycle (IPC) over the previous generation. It is expected to offer significant improvements in density, throughput, and power efficiency; Intel plans to launch Xeon 6+ in the first half of 2026. This server processor series targets hyperscale data centers, cloud providers, and telcos.
The post Intel releases more details about Panther Lake AI processor appeared first on EDN.
Broadcom debuts 102.4-Tbits/s CPO Ethernet switch

Broadcom Inc. launches the Tomahawk 6 – Davisson (TH6-Davisson), the company’s third-generation co-packaged optics (CPO) Ethernet switch, delivering the bandwidth, efficiency, and reliability needed for next-generation AI networks. The TH6-Davisson provides advances in power efficiency and traffic stability for the higher optical interconnect performance required to scale up and scale out AI clusters.
The trend toward CPO in data centers is driven by the need to increase bandwidth and lower energy consumption. With the TH6-Davisson, Broadcom claims the industry’s first 102.4 Tbits/s of optically enabled switching capacity, doubling the bandwidth of any CPO switch available today. This sets a new benchmark for data-center performance, Broadcom said.
(Source: Broadcom)
Designed for power efficiency, the TH6-Davisson heterogeneously integrates TSMC Compact Universal Photonic Engine (TSMC COUPE) technology-based optical engines with advanced substrate-level multi-chip packaging. This is reported to dramatically reduce the need for signal conditioning and minimize trace loss and reflections, resulting in a 70% reduction in optical interconnect power consumption. This is more than 3.5× lower than traditional pluggable optics, delivering a significant improvement in energy efficiency for hyperscale and AI data centers, Broadcom said.
In addition to power efficiency, the TH6-Davisson Ethernet switch addresses link stability, which has become a critical bottleneck as AI training jobs scale, the company added, with even minor interruptions causing losses in XPU and GPU utilization.
The TH6-Davisson solves this challenge by directly integrating optical engines onto a common package with the Ethernet switch. The integration eliminates many of the sources of manufacturing and test variability inherent in pluggable transceivers, resulting in significantly improved link flap performance and higher cluster reliability, according to Broadcom.
In addition, operating at 200 Gbits/s per channel, TH6-Davisson doubles the line rate and overall bandwidth of Broadcom’s second-generation TH5-Bailly CPO solution. It seamlessly interconnects with DR-based transceivers as well as NPO and CPO optical interconnects running at 200 Gbits/s per channel, enabling connectivity with advanced NICs, XPUs, and fabric switches.
The TH6-Davisson BCM78919 supports a scale-up cluster size of 512 XPUs and up to 100,000+ XPUs in two-tier networks at 200 Gbits/s per link. Other features include 16 × 6.4 Tbits/s Davisson DR optical engines and field-replaceable ELSFP laser modules.
Broadcom is now developing its fourth-generation CPO solution. The new platform will double per-channel bandwidth to 400 Gbits/s and deliver higher levels of energy efficiency.
The TH6-Davisson BCM78919 is IEEE 802.3 compliant and interoperable with existing 400G and 800G standards. Broadcom is currently sampling the Ethernet switch to its early access customers and partners.
The post Broadcom debuts 102.4-Tbits/s CPO Ethernet switch appeared first on EDN.
Analog frequency doublers

High school trigonometry combined with four-quadrant multipliers can be exploited to yield sinusoidal frequency doublers. Nothing non-linear is involved, which means no possibly stringent filtering requirements.
Starting with some sinusoidal signal and needing to derive new sinusoidal signals at multiples of the original sinusoidal frequency, a little trigonometry and four-quadrant multipliers can be useful. Consider the following SPICE simulation in Figure 1.

Figure 1 Two analog frequency doublers, A1 + U1 and A2 + U2, in cascade to form a frequency quadrupler.
The above sketch shows the pair A1 and U1 configured as a frequency doubler from V1 to V2, and the pair A2 and U2 configured as another frequency doubler from V2 to V3. Together, the two of them form a frequency quadrupler from V1 to V3. With more circuits, you can make an octupler and so on within the bandwidth limits of the active semiconductors, of course.
Frequency doubler operation is based on these trigonometric identities:
sin² (x) = 0.5 * ( 1 – cos (2x) ) and cos² (x) = 0.5 * ( 1 + cos (2x) )
sin² (x) = 0.5 – 0.5 * cos (2x) and cos² (x) = 0.5 + 0.5 * cos (2x)
Take your pick; both equations yield a DC offset plus a sinusoid at twice the frequency you started with. Do a DC block, as with C1 and R1 above, and you are left with a doubled-frequency sinusoid at half the original amplitude. Follow that up with a times-two gain stage, and you have made a sinusoid at twice the original frequency and at the same amplitude with which you started.
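Here is a minimal numerical sketch of that signal chain (not John’s SPICE circuit, just the math it implements): square a unit-amplitude sinusoid with a multiplier, block the DC, and apply a times-two gain to recover a unit-amplitude sinusoid at twice the frequency.

```python
# Numerical sketch of the trig-identity frequency doubler (not the SPICE circuit itself).
import numpy as np

fs, f0 = 100_000.0, 1_000.0           # sample rate and input frequency, Hz
t = np.arange(0, 0.01, 1 / fs)        # 10 ms of samples
x = np.sin(2 * np.pi * f0 * t)        # unit-amplitude input sinusoid

y = x * x                             # four-quadrant multiplier: 0.5 - 0.5*cos(2*w*t)
y_ac = y - y.mean()                   # DC block (the C1/R1 role)
y_out = 2.0 * y_ac                    # times-two gain restores unit amplitude

# Confirm the output is a unit-amplitude sinusoid at 2*f0.
spectrum = np.abs(np.fft.rfft(y_out)) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(freqs[np.argmax(spectrum)])     # 2000.0 Hz
print(round(spectrum.max(), 3))       # ~1.0
```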
This way of doing things takes less stuff than having to do some non-linear process on the input sinusoid to generate a harmonic comb and then having to filter out everything except the one frequency you want.
Although there might actually be some other harmonics at each op-amp output, depending on how non-ideal the multiplier and op-amp might be, this process does not nominally generate other unwanted harmonics. Such harmonics as might incidentally arise won’t require a high-performance filter for their removal.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Frequency doubler with 50 percent duty cycle
- A 50 MHz 50:50-square wave output frequency doubler/quadrupler
- Frequency doubler operates on triangle wave
- Fast(er) frequency doubler with square wave output
- Triangle waves drive simple frequency doubler
The post Analog frequency doublers appeared first on EDN.



