DMM Plug-In Test Resistor with temperature sensing

Proper precision calibration resistors are expensive and usually bulky, often in a large box or can. These are overkill for low-cost handheld digital multimeters (DMMs) and LCR Meters, especially when used as a “sanity check” before making critical measurements.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Most of the time, I would use a precision axial-leaded resistor for this purpose. Still, I thought about making something more convenient: a holder that gives the precision resistor some mechanical protection and plugs directly into a DMM or LCR meter. When using a high-resolution bench DMM like a Keysight KS34465A or Keithley DMM6500, you often want to know the resistor's temperature as well as its resistance value, so the thought came to thermally link the precision resistor with a precision thermistor. In the spirit of low-cost DIY, joining an axial-lead precision resistor and an axial-lead precision thermistor with a short section of heat shrink tubing seemed reasonable, and you can't get much simpler or cheaper than the concept shown in Figure 1!
Figure 1 Linking an axial-lead precision resistor and an axial-lead precision thermistor with a short section of heat shrink tubing.
I still needed mechanical protection for the resistor/thermistor combo, as well as a way to make direct DMM connections. A standard dual banana plug is a good host for the combo, but it only has two connection terminals. Using a 3D printer, I created a custom 3D-printed “plug” to support the thermistor leads, as shown in Figure 2.
Figure 2 A custom, 3D-printed “plug” supports the thermistor leads, allowing for a resistor/thermistor combo with mechanical protection.
Figure 3 shows the resistor/thermistor combo mounting scheme, where the precision axial resistor leads are inserted into the dual banana plug holes and secured by the plug’s internal screws (leave some slack in the resistor leads to help reduce mechanical stress on the precision resistor). Note how the resistor lead ends are looped over, creating small terminals for external “clip lead” or “Kelvin clips” measurements. The axial thermistor leads are threaded through the custom 3D printed plug and loop at the top.
Figure 3 The resistor/thermistor combo mounting scheme, where the precision axial resistor leads are inserted into the dual banana plug holes and secured by the plug’s internal screws.
Overall, the concept creates a small compact holder for the precision resistor and thermistor combo with convenient connections to the resistor measurement instrument directly by the dual banana plug. Temperature measurements use small clip leads to the thermistor leads, which protrude from the top of the dual banana plug and a custom 3D printed plug top, as shown in Figure 4.
Figure 4 A resistor temperature reading showing the small clip leads to the thermistor leads, which protrude from the top of the dual banana plug and custom 3D printed plug.
When using this setup, I found that bench DMMs like the DMM6500 have banana input terminals 3~3.5 °C warmer than ambient (the KS34465A was 2~3 °C warmer). This helps explain the long settling time needed for new banana connections on a bench DMM to stabilize, during which differential thermal EMFs can corrupt sensitive measurements. Handheld DMMs seem to stabilize much more quickly, since the internal handheld temperature is only slightly above ambient, whereas the bench DMMs run much warmer.
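If you want to turn the thermistor reading into an actual temperature rather than just watching it for drift, the usual route is the Beta (or Steinhart-Hart) equation. The sketch below is a minimal example assuming a 10-kΩ-at-25-°C thermistor with a Beta of 3950 K; these constants are placeholders, so substitute the values from your thermistor's datasheet.

```python
import math

def thermistor_temp_c(r_meas_ohms, r25=10_000.0, beta=3950.0):
    """Convert a measured thermistor resistance to temperature using the
    Beta approximation: 1/T = 1/T25 + (1/beta) * ln(R/R25).
    r25 and beta are placeholder values -- use the datasheet numbers for
    the actual precision thermistor."""
    t25_k = 298.15  # 25 degC expressed in kelvin
    inv_t = 1.0 / t25_k + math.log(r_meas_ohms / r25) / beta
    return 1.0 / inv_t - 273.15

# Example: a DMM reading of 8.6 kohm corresponds to roughly 28 degC
# with the assumed constants above.
print(f"{thermistor_temp_c(8600):.1f} degC")
```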
Anyway, I hope some folks find this DIY precision resistor with built-in thermistor concept convenient and useful, although I wouldn't recommend it for precision resistors below ~100 Ω (that is 4-wire Kelvin territory), and it is certainly not considered “Metrology Qualified.”
Michael A Wyatt is a life member of IEEE and has continued to enjoy electronics ever since his childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and is now (semi) retired with Wyatt Labs. During his career he accumulated 32 US patents and has published several EDN articles, including the Best Idea of the Year in 1989.
Related Content
- Simple 5-component oscillator works below 0.8V
- Injection locking acts as a frequency divider and improves oscillator performance
- Investigating injection locking with DSO Bode function
- DIY custom Tektronix 576 & 577 curve tracer adapters
The post DMM Plug-In Test Resistor with temperature sensing appeared first on EDN.
Accelerator improves RAID array management

Microchip’s Adaptec SmartRAID 4300 series of NVMe RAID storage accelerators speeds access to NVMe storage in AI data centers. It achieves this through a disaggregated architecture that separates storage software from the hardware layer, leveraging dedicated PCIe controllers to offload CPU processing and accelerate RAID operations. Microchip reports the SmartRAID 4300 achieves up to 7× higher I/O performance compared to the previous generation in internal testing.
In the SmartRAID 4300 architecture, storage software runs on the host CPU while the accelerator offloads parity-based redundancy tasks, such as XOR operations. This allows write operations to bypass the accelerator and go directly from the host CPU to NVMe drives at native PCIe speeds. By avoiding in-line bottlenecks, the design supports high-throughput configurations with up to 32 CPU-attached x4 NVMe devices and 64 logical drives or RAID arrays. It is compatible with both PCIe Gen 4 and Gen 5 host systems.
The SmartRAID 4300 accommodates NVMe and cloud-capable SSDs for versatile enterprise deployments. It uses architectural techniques like automatic core idling and autonomous power reduction to optimize efficiency. The accelerator also provides security features, including hardware root of trust, secure boot, attestation, and Self-Encrypting Drive (SED) support to ensure data protection.
For information on production integration, contact Microchip sales or an authorized distributor.
The post Accelerator improves RAID array management appeared first on EDN.
Rugged film capacitors offer high pulse strength

EPCOS B3264xH double-sided metallized polypropylene film capacitors from TDK withstand pulse stress up to 6500 V/µs. Suited for resonant topologies—particularly LLC designs—these compact, AEC-Q200-compliant capacitors operate continuously from -55°C to +125°C, ensuring reliable performance in harsh environments.
The capacitors cover a rated DC voltage range of 630 V to 2000 V with capacitance values from 2.2 nF to 470 nF. Their specialized dielectric system, combining polypropylene with double-sided metallized PET film electrodes, enables both high pulse strength and current handling. These characteristics make them well-suited for onboard chargers and DC/DC converters in xEVs, as well as uninterruptible power supplies, industrial switch-mode power supplies, and electronic ballasts.
TDK reports that B3264xH capacitors offer high insulation resistance, low dissipation factor, and strong self-healing properties, resulting in a 200,000-hour service life at +85°C and full rated voltage. They are available in three lead spacings—10 mm, 15 mm, and 22.5 mm—to allow integration in space-constrained circuit layouts.
The post Rugged film capacitors offer high pulse strength appeared first on EDN.
Hybrid redrivers aid high-speed HDMI links

With integrated display data channel (DDC) listeners, Diodes’ 3.3-V, quad-channel hybrid ReDrivers preserve HDMI signal integrity for high-resolution video transmission. The PI3HDX12311 supports HDMI 2.1 fixed rate link (FRL) signaling up to 12 Gbps and transition-minimized differential signaling (TMDS) up to 6 Gbps. The PI3HDX6311 supports HDMI 2.0 at up to 6 Gbps.
Both devices operate in either limited or linear mode. In HDMI 1.4 applications, they function as limited redrivers, using a predefined differential output swing—set via swing control—to maintain HDMI-compliant levels at the receptacle. For HDMI 2.0 and 2.1, they switch to linear mode, where the output swing scales with the input signal, effectively acting as a trace canceller. This mode remains transparent to link training signals and, in the PI3HDX12311, supports 8K video resolution and data rates up to 48 Gbps (12 Gbps per channel).
The PI3HDX12311 and PI3HDX6311 provide Dual-Mode DisplayPort (DP++) V1.1 level shifting and offer flexible coupling options, allowing AC, DC, or mixed coupling on both input and output signals. To reduce power consumption, the devices monitor the hot-plug-detect (HPD) pin and enter a low-power state if HPD remains low for more than 2 ms.
In 3500-unit quantities, the PI3HDX12311 costs $0.99 each, and the PI3HDX6311 costs $0.77 each.
The post Hybrid redrivers aid high-speed HDMI links appeared first on EDN.
Bluetooth 6.0 modules target varied applications

KAGA FEI is expanding its Bluetooth LE portfolio with two Bluetooth 6.0 modules that offer different memory configurations. Like the existing EC4L15BA1, the new EC4L10BA1 and EC4L05BA1 are based on Nordic Semiconductor’s nRF54L series of wireless SoCs and integrate a built-in antenna.
The EC4L15BA1 offers the highest memory capacity, with 1.5 MB of NVM and 256 KB of RAM. For applications with lighter requirements, the EC4L10BA1 includes 1.0 MB of NVM and 192 KB of RAM, while the EC4L05BA1 provides 0.5 MB of NVM and 96 KB of RAM. This range enables use cases from industrial IoT and healthcare to smart home devices and cost-sensitive, high-volume designs.
The post Bluetooth 6.0 modules target varied applications appeared first on EDN.
CCD sensor lowers noise for clearer inspections

Toshiba’s TCD2728DG CCD linear image sensor uses lens reduction to cut random noise, enhancing image quality in semiconductor inspection equipment and A3 multifunction printers. As a lens-reduction type sensor, it optically compresses the image before projection onto the sensor. According to Toshiba, the TCD2728DG has lower output amplifier gain than the earlier TCD2726DG and reduces random noise by about 40%, down to 1.9 mV.
The color CCD image sensor features 7500 image-sensing elements across three lines, with a pixel size of 4.7×4.7 µm. It supports a 100-MHz data rate (50 MHz × 2 channels), enabling high-speed processing of large image volumes. This makes it well-suited for line-scan cameras in inspection systems that require real-time decision-making. A built-in timing generator and CCD driver simplify integration and help streamline system development.
The sensor’s input clocks accept a CMOS-level 3.3-V drive. It operates with 3.1-V to 3.5-V analog, digital, and clock driver supplies (VAVDD, VDVDD, VCKDVDD), plus a 9.5-V to 10.5-V supply for VVDD10. Typical RGB sensitivity values are 6.7 V/lx·s, 8.5 V/lx·s, and 3.1 V/lx·s, respectively.
Toshiba has begun volume shipments of the TCD2728DG CCD image sensor in 32-pin WDIPs.
Toshiba Electronic Devices & Storage
The post CCD sensor lowers noise for clearer inspections appeared first on EDN.
Car speed and radar guns

The following would apply to any moving vehicle, but just for the sake of clear thought, we will use the word “car”.
Imagine a car coming toward a radar antenna that is transmitting a microwave pulse which goes out toward that car and then comes back from that car in a time interval called “T1”. Then that same radar antenna transmits a second microwave pulse that also goes out toward that still oncoming car and then comes back from that car, but in a time interval called “T2”. This concept is illustrated in Figure 1.
Figure 1 Car radar timing where T1 is the time it takes for a first pulse to go out toward a vehicle get reflected back to the radiating source, and T2 is the time it takes for a second pulse to go out toward the same vehicle and get reflected back to the radiating source.
The further away the car is, the longer T1 and T2 will be, but if a car is moving toward the antenna, then there will be a time difference between T1 and T2 for which the distance the car has moved will be proportional to that time difference. In air, that scale factor comes to 1.017 nanoseconds per foot (ns/ft) of distance (see Figure 2).
Figure 2 Calculating roundtrip time for a radar signal required to catch a vehicle traveling at 55 mph and 65 mph.
Since we are interested in the time it takes to traverse the distance from the antenna to the car twice (round trip), we would measure a time difference of 2.034 ns/ft of car travel.
A speed radar measures the positional change of an oncoming or outgoing car. Since 60 mph equals 88 ft/s, 55 mph comes to (80+2/3) ft/s. If the transmitted radar pulses were spaced one second apart, a distance of (80+2/3) feet would correspond to an ABS(T1-T2) time difference of 164.0761 ns. A difference in time intervals of more than that many nanoseconds would then indicate a driver exceeding a speed limit of 55 mph.
For example, a speed of 65 mph would yield 193.9081 ns, and on most Long Island roadways, it ought to yield a speeding ticket.
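The arithmetic above is easy to script. This short sketch assumes the same 2.034-ns/ft round-trip scale factor and the one-second pulse spacing used in the example; real speed radars use far shorter pulse intervals and Doppler techniques, so this is only a check of the numbers.

```python
ROUND_TRIP_NS_PER_FT = 2.034   # 2 x 1.017 ns/ft one-way propagation in air
PULSE_INTERVAL_S = 1.0         # pulse spacing assumed in the example above

def delta_t_ns(speed_mph):
    """Round-trip time difference |T1 - T2| between successive pulses
    for a car closing at speed_mph."""
    feet_per_second = speed_mph * 88.0 / 60.0   # 60 mph = 88 ft/s
    feet_moved = feet_per_second * PULSE_INTERVAL_S
    return feet_moved * ROUND_TRIP_NS_PER_FT

print(f"55 mph: {delta_t_ns(55):.2f} ns")   # ~164.08 ns
print(f"65 mph: {delta_t_ns(65):.2f} ns")   # ~193.91 ns
```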
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Mattel makes a real radar gun, on the cheap
- Simple Optical Speed Trap
- Whistler’s DE-7188: Radar (And Laser) Detection Works Great
- Accidental engineering: 10 mistakes turned into innovation
The post Car speed and radar guns appeared first on EDN.
Impedance mask in power delivery network (PDN) optimization

In telecommunication applications, target impedance serves as a crucial benchmark for power distribution network (PDN) design. It ensures that the die operates within an acceptable level of rail voltage noise, even under the worst-case transient current scenarios, by defining the maximum allowable PDN impedance for the power rail on the die.
This article will focus on the optimization techniques to meet the target impedance using a point-of-load (PoL) device, while providing valuable insights and practical guidance for designers seeking to optimize their PDNs for reliable and efficient power delivery.
Defining target impedance
With the rise of high-frequency signals and escalating power demands on boards, power designers are prioritizing noise-free power distribution that can efficiently supply power to the IC. Controlling the power delivery network’s impedance across a certain frequency range is one approach to guarantee proper operation of high-speed systems and meet performance demands.
This impedance can generally be estimated by dividing the maximum allowed ripple voltage by the maximum expected current step load. The power delivery network’s target impedance (ZTARGET) can be calculated with the equation below:

ZTARGET = VRIPPLE(max) / ΔILOAD(max)
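As a quick numerical illustration (the values here are hypothetical, not taken from the article): a rail that may tolerate 30 mV of ripple under a 15-A worst-case load step needs a PDN impedance of no more than 2 mΩ.

```python
v_ripple_max = 0.030   # 30 mV allowed ripple (hypothetical value)
i_step_max = 15.0      # 15 A worst-case transient load step (hypothetical value)

z_target = v_ripple_max / i_step_max
print(f"Z_TARGET = {z_target * 1e3:.1f} mOhm")   # 2.0 mOhm
```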
Achieving ZTARGET across a wide frequency spectrum requires the power supply to control impedance at lower frequencies, combined with strategically placed decoupling capacitors covering the middle and higher frequencies. Figure 1 shows the impedance frequency characteristics of multi-layer ceramic capacitors (MLCCs).
Figure 1 The impedance frequency characteristics of MLCCs are shown across a wide frequency spectrum. Source: Monolithic Power Systems
Maintaining the impedance below the calculated threshold ensures that even the most severe transient currents generated by the IC, as well as induced voltage noise, remain within acceptable operational boundaries.
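The MLCC curves in Figure 1 follow the familiar series R-L-C model: each capacitor's impedance falls as 1/(2πfC), bottoms out near its ESR at the self-resonant frequency, and then rises as 2πf·ESL. The sketch below reproduces that behavior; the ESR and ESL values are illustrative assumptions, not vendor data.

```python
import math

def mlcc_impedance(f_hz, c_farads, esr_ohms, esl_henries):
    """Impedance magnitude of a single MLCC modeled as a series R-L-C."""
    x = 2 * math.pi * f_hz * esl_henries - 1 / (2 * math.pi * f_hz * c_farads)
    return math.hypot(esr_ohms, x)

# Illustrative 0.1-uF part with 5-mOhm ESR and 0.5-nH ESL (assumed values);
# self-resonance lands near 22.5 MHz for this combination.
for f in (1e5, 1e6, 22.5e6, 1e8):
    z = mlcc_impedance(f, 100e-9, 5e-3, 0.5e-9)
    print(f"{f/1e6:8.2f} MHz : {z*1e3:10.2f} mOhm")
```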
Figure 2 shows the varying target impedance across different frequency ranges, based on data from the Qualcomm website. This means every element in the power distribution network must be optimized at different frequencies.
Figure 2 Here is a target impedance example for different frequency ranges. Source: Qualcomm
Understanding PDN impedance
In theory, a power rail aims for the lowest possible PDN impedance. However, it’s unrealistic to achieve an ideal zero-impedance state. A widely adopted strategy to minimize PDN impedance is placing various decoupling capacitors beneath the system-on-chip (SoC), which flattens the PDN impedance across all frequencies. This prevents voltage fluctuations and signal jitter on output signals, but it’s not necessarily the most effective method to optimize power rail design.
Three-stage low-pass filter approach
To further explore optimizing power rail design, the fundamentals of PDN design must be re-examined in addition to considering new approaches to achieve optimal performance. Figure 3 shows the PDN conceptualized as a three-stage low-pass filter, where each stage of this network plays a specific role in filtering and stabilizing the current drawn from the SoC die.
Figure 3 The PDN is conceptualized as a three-stage low-pass filter. Source: Monolithic Power Systems
The three-stage low-pass filter is described below:
- Current drawn from the SoC die: The process begins with current being drawn from the SoC die. Any current drawn is filtered by the package, which interacts with die-side capacitors (DSCs). This initial filtering stage reduces the current’s slew rate before it reaches the PCB socket.
- PCB layout considerations and MLCCs: Once the current passes through the PCB ball grid arrays (BGAs), the second stage of filtering occurs as the current flows through the power planes on the PCB and encounters the MLCCs. During this stage, it’s crucial to focus on selecting capacitors that effectively operate at specific frequencies. High-frequency capacitors placed beneath the SoC do not significantly influence lower frequency regulation.
- Voltage regulator (VR) with power planes and bulk capacitors: The final stage involves the VR and bulk capacitors, which work together to stabilize the power supply by addressing lower-frequency noise.
The PDN’s three-stage approach ensures that each component contributes to minimizing impedance across different frequency bands. This structured methodology is vital for achieving reliable and efficient power delivery in modern electronic systems.
Case study: Telecom evaluation board analysis
This in-depth examination uses a telecommunications-specific evaluation board from MPS, which demonstrates the capabilities of the MPQ8785, a high-frequency, synchronous buck converter, in a real-world setting. Moreover, this case study underlines the importance of capacitor selection and placement to meet the target impedance.
To initiate the process, PCB parasitic extraction is performed on the MPS evaluation board. Figure 4 shows a top view of the MPQ8785 evaluation board layout, where two ports are selected for analysis. Port 1 is positioned after the inductor, while Port 2 is connected to the SoC BGA.
Figure 4 PCB parasitic extraction is performed on the telecom evaluation board. Source: Monolithic Power Systems
Capacitor models from vendor websites are also included in this layout, including the equivalent series inductance (ESL) and equivalent series resistance (ESR) parasitics. As many capacitor models as possible are allocated beneath the SoC in the bottom of the PCB to maintain a flat impedance profile.
Table 1 Here is the initial capacitor selection for different quantities of capacitors targeting different frequencies. Source: Monolithic Power Systems
Figure 5 shows a comparison of the target impedance profile defined by the PDN mask for the core rails to the actual initial impedance measured on the MPQ8785 evaluation board using the initially selected capacitors. This graphical comparison enables a direct assessment of the impedance characteristics, facilitating the evaluation of the PDN performance.
Figure 5 Here is a comparison between the target impedance profile and initial impedance using the initially selected capacitors. Source: Monolithic Power Systems
Based on the data from Figure 5, the impedance exceeds the specified limit within the 300-kHz to 600-kHz frequency range, indicating that additional capacitance is required to mitigate this issue. Introducing additional capacitors effectively reduces the impedance in this frequency band, ensuring compliance with the specification.
Notably, high-frequency capacitors are also observed to have a negligible impact on the impedance at higher frequencies, suggesting that their contribution is limited to specific frequency ranges. This insight informs optimizing capacitor selection to achieve the desired impedance profile.
Through an extensive series of simulations that systematically evaluate various capacitor configurations, the optimal combination of capacitors required to satisfy the impedance mask requirements was successfully identified.
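The kind of iterative evaluation described above can be approximated in a few lines: combine the candidate capacitors in parallel (the VR's output impedance, plane parasitics, and mounting inductance are ignored here for brevity) and check the result against the mask at each frequency of interest. The part values and mask points below are placeholders, not the selections in Table 2.

```python
import math

def cap_z(f, c, esr, esl):
    """Complex series R-L-C impedance of one capacitor."""
    x = 2 * math.pi * f * esl - 1 / (2 * math.pi * f * c)
    return complex(esr, x)

def pdn_z(f, bank):
    """Magnitude of the parallel combination of a capacitor bank."""
    y = sum(1 / cap_z(f, *cap) for cap in bank)
    return abs(1 / y)

# (C, ESR, ESL) placeholders: 10x 22 uF bulk, 20x 1 uF, 10x 0.1 uF
bank = ([(22e-6, 2e-3, 1.0e-9)] * 10
        + [(1e-6, 4e-3, 0.6e-9)] * 20
        + [(100e-9, 6e-3, 0.4e-9)] * 10)

# Example mask points (Hz, ohms) -- not the Qualcomm or MPS numbers
mask = [(300e3, 3e-3), (600e3, 3e-3), (10e6, 10e-3), (40e6, 10e-3)]
for f, limit in mask:
    z = pdn_z(f, bank)
    verdict = "OK" if z <= limit else "VIOLATION"
    print(f"{f/1e6:6.2f} MHz: |Z| = {z*1e3:5.2f} mOhm (limit {limit*1e3:.0f} mOhm) {verdict}")
```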
Table 2 The results of this iterative process outline the optimal quantity of capacitors and total capacitance. Source: Monolithic Power Systems
The final capacitor selection ensures that the PDN impedance profile meets the specified mask, thereby ensuring reliable power delivery and performance. Figure 6 shows the final impedance with optimized capacitance.
Figure 6 The final impedance with optimized capacitance meets the specified mask. Source: Monolithic Power Systems
With a sufficient margin at frequencies above 10 MHz, capacitors that primarily affect higher frequencies can be eliminated. This strategic reduction minimizes the occupied area and decreases costs while maintaining compliance with all specifications. Performance, cost, and space considerations are effectively balanced by using the optimal combination of capacitors required to satisfy the impedance mask requirements, enabling robust PDN functionality across the operational frequency range.
To extend the case study, the impedance mask was tightened within the 10-MHz to 40-MHz frequency range, decreasing its value to 10 mΩ. Adding 10 more 0.1-µF capacitors to the evaluation board then effectively reduced the impedance in this frequency range of interest.
Figure 7 shows the decreased impedance mask as well as the evaluation board’s impedance response. The added capacitance successfully reduces the impedance within the specified frequency range.
Figure 7 The decreased PDN mask with optimized capacitance reduces impedance within the specified frequency range. Source: Monolithic Power Systems
Compliance with impedance mask
This article used the MPQ8785 evaluation board to optimize PDN performance, ensuring compliance with the specified impedance mask. Through this optimization process, models were developed to predict the impact of various capacitor types on impedance across different frequencies, which facilitates the selection of suitable components.
Capacitor selection for optimized power rail design depends on the specific impedance mask and frequency range of interest. A random selection of capacitors for a wide variety of frequencies is insufficient for optimizing PDN performance. Furthermore, the physical layout must minimize parasitic effects that influence overall impedance characteristics, where special attention must be given to optimizing the layout of capacitors to mitigate these effects.
Marisol Cabrera is an application engineering manager at Monolithic Power Systems (MPS).
Albert Arnau is an application engineer at Monolithic Power Systems (MPS).
Robert Torrent is an application engineer at Monolithic Power Systems (MPS).
Related Content
- SoC PDN challenges and solutions
- Power 107: Power Delivery Networks
- Debugging approach for resolving noise issues in a PDN
- Optimizing capacitance in power delivery network (PDN) for 5G
- Power delivery network design requires chip-package-system co-design approach
The post Impedance mask in power delivery network (PDN) optimization appeared first on EDN.
Flip ON Flop OFF: high(ish) voltages from the positive supply rail

We’ve seen lots of interesting conversations and Design Idea (DI) collaboration devising circuits for power switching using inexpensive (and cute!) momentary-contact SPST pushbuttons. A recent and interesting extension of this theme by frequent contributor R Jayapal addresses control of relatively high DC voltages: 48 volts in his chosen case.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In the course of implementing its high voltage feature, Jayapal’s design switches the negative (Vss a.k.a. “ground”) rail of the incoming supply instead of the (more conventional) positive (Vdd) rail. Of course, there’s absolutely nothing physically wrong with this choice (certainly the electrons don’t know the difference!). But because it’s a bit unconventional, I worry that it might create possibilities for the unwary to make accidental, and potentially destructive, misconnections.
Figure 1’s circuit takes a different tack to avoid that.
Figure 1 Flip ON/Flop OFF referenced to the V+ rail. If V+ < 15 V, then set R4 = 0 and omit C2 and Z1. Ensure that C2’s voltage rating is > (V+ – 15 V), and if V+ > 80 V, R4 > 4V+2.
Figure 1 returns to an earlier theme of using a PFET to switch the positive rail for power control, and a pair of unbuffered CMOS inverters to create a toggling latch to control the FET. The basic circuit is described in “Flip ON Flop OFF without a Flip/Flop.”
What’s different here is that all circuit nodes are referenced to V+ instead of ground, and Zener Z1 is used to synthesize a local bias reference. Consequently, any V+ rail up to the limit of Q1’s Vds rating can be accommodated. Of course, if even that’s not good enough, higher rated FETs are available.
Be sure to tie the inputs of any unused U1 gates to V+.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Flip ON flop OFF
- Flip ON Flop OFF for 48-VDC systems
- Flip ON Flop OFF without a Flip/Flop
- Elaborations of yet another Flip-On Flop-Off circuit
- Latching D-type CMOS power switch: A “Flip ON Flop OFF” alternative
The post Flip ON Flop OFF: high(ish) voltages from the positive supply rail appeared first on EDN.
The next AI frontier: AI inference for less than $0.002 per query

Inference is rapidly emerging as the next major frontier in artificial intelligence (AI). Historically, the AI development and deployment focus has been overwhelmingly on training, with approximately 80% of compute resources dedicated to it and only 20% to inference.
That balance is shifting fast. Within the next two years, the ratio is expected to reverse to 80% of AI compute devoted to inference and just 20% to training. This transition is opening a massive market opportunity with staggering revenue potential.
Inference has a fundamentally different profile: it requires lower latency, greater energy efficiency, and more predictable real-time responsiveness than training-optimized hardware can deliver. Repurposing that hardware for inference entails excessive power consumption, underutilized compute, and inflated costs.
When deployed for inference, training-optimized computing resources result in a cost per query one or even two orders of magnitude higher than the $0.002-per-query benchmark established by a 2023 McKinsey analysis, which was based on Google's 2022 search activity, estimated at an average of 100,000 queries per second.
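To put that benchmark in perspective, here is the arithmetic behind it; the query rate and per-query figure simply restate the numbers above, while the hourly totals are a back-of-the-envelope extension.

```python
QPS = 100_000        # average query rate cited for Google search in 2022
BENCHMARK = 0.002    # USD per query, per the 2023 McKinsey benchmark

def hourly_cost(cost_per_query, qps=QPS):
    """Serving cost per hour implied by a given cost per query."""
    return cost_per_query * qps * 3600

print(f"at the benchmark:     ${hourly_cost(BENCHMARK):,.0f} per hour")       # $720,000
print(f"at 10x the benchmark: ${hourly_cost(10 * BENCHMARK):,.0f} per hour")  # $7,200,000
```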
Today, the market is dominated by a single player whose quarterly results reflect its stronghold. While a competitor has made some inroads and is performing respectably, it has yet to gain meaningful market share.
One reason is architectural similarity; by taking a similar approach to the main player, rather than offering a differentiated, inference-optimized alternative, the competitor faces the same limitations. To lead in the inference era, a fundamentally new processor architecture is required. The most effective approach is to build dedicated, inference-optimized infrastructure, an architecture specifically tailored to the operational realities of processing generative AI models like large language models (LLMs).
This means rethinking everything from compute units and data movement to compiler design and LLM-driven architectures. By focusing on inference-first design, it’s possible to achieve significant gains in performance-per-watt, cost-per-query, time-to-first-token, output-token-per-second, and overall scalability, especially for edge and real-time applications where responsiveness is critical.
This is where the next wave of innovation lies—not in scaling training further, but in making inference practical, sustainable, and ubiquitous.
The inference trinity
AI inference hinges on three critical pillars: low latency, high throughput and constrained power consumption, each essential for scalable, real-world deployment.
First, low latency is paramount. Unlike training, where latency is relatively inconsequential—a job taking an extra day or costing an additional million dollars is still acceptable as long as the model is successfully trained—inference operates under entirely different constraints.
Inference must happen in real time or near real time, with extremely low latency per query. Whether it’s powering a voice assistant, an autonomous vehicle or a recommendation engine, the user experience and system effectiveness hinge on sub-millisecond response times. The lower the latency, the more responsive and viable the application.
Second, high throughput at low cost is essential. AI workloads involve processing massive volumes of data, often in parallel. To support real-world usage—especially for generative AI and LLMs—AI accelerators must deliver high throughput per query while maintaining cost-efficiency.
Vendor-specified throughput often falls short of peak targets in AI workload processing due to low-efficiency architectures like GPUs, especially when the economics of inference are under intense scrutiny. These are high-stakes battles, where cost per query is not just a technical metric but a competitive differentiator.
Third, power efficiency shapes everything. Inference performance cannot come at the expense of runaway power consumption. This is not only a sustainability concern but also a fundamental limitation in data center design. Lower-power devices reduce the energy required for compute, and they ease the burden on the supporting infrastructure—particularly cooling, which is a major operational cost.
The trade-off can be viewed from the following two perspectives:
- A new inference device that delivers the same performance at half the energy consumption can dramatically reduce a data center’s total power draw.
- Alternatively, maintaining the same power envelope while doubling compute efficiency effectively doubles the data center’s performance capacity.
Bringing inference to where users are
A defining trend in AI deployment today is the shift toward moving inference closer to the user. Unlike training, inference is inherently latency-sensitive and often needs to occur in real time. This makes routing inference workloads through distant cloud data centers increasingly impractical—from both a technical and economic perspective.
To address this, organizations are prioritizing edge-based inference processing data locally or near the point of generation. Shortening the network path between the user and the inference engine significantly improves responsiveness, reduces bandwidth costs, enhances data privacy, and ensures greater reliability, particularly in environments with limited or unstable connectivity.
This decentralized model is gaining traction across industry. Even AI giants are embracing the edge, as seen in their development of high-performance AI workstations and compact data center solutions. These innovations reflect a clear strategic shift: enabling real-time AI capabilities at the edge without compromising on compute power.
Inference acceleration from the ground up
One high-tech company, for example, is setting the engineering pace with a novel architecture designed specifically to meet the stringent demands of AI inference in data centers and at the edge. The architecture breaks away from legacy designs optimized for training workloads, delivering near-theoretical performance in latency, throughput, and energy efficiency. More entrants are certain to follow.
Below are some of the highlights of this inference technology revolution in the making.
Breaking the memory wall
The “memory wall” has challenged chip designers since the late 1980s. Traditional architectures attempt to mitigate the impact on performance introduced by data movement between external memory and processing units by layering memory hierarchies, such as multi-layer caches, scratchpads and tightly coupled memory, each offering tradeoffs between speed and capacity.
In AI acceleration, this bottleneck becomes even more pronounced. Generative AI models, especially those based on incremental transformers, must constantly reprocess massive amounts of intermediate state data. Conventional architectures struggle here. Every cache miss—or any operation requiring access outside in-memory compute—can severely degrade performance.
One approach collapses the traditional memory hierarchy into a single, unified memory stage: a massive SRAM array that behaves like a flat register file. From the perspective of the processing units, any register can be accessed anywhere, at any time, within a single clock. This eliminates costly data transfers and removes the bottlenecks that hamper other designs.
Flexible computational tiles, each with 16 high-performance processing cores that are dynamically reconfigurable at run-time, execute either AI operations, like multi-dimensional matrix operations (ranging from 2D to N-dimensional), or advanced digital signal processing (DSP) functions.
Precision is also adjustable on-the-fly, supporting formats from 8 bits to 32 bits in both floating point and integer. Both dense and sparse computation modes are supported, and sparsity can be applied on the fly to either weights or data—offering fine-grained control for optimizing inference workloads.
Each core features 16 million registers. While a vast register file presents challenges for traditional compilers, two key innovations come to the rescue:
- Native tensor processing, which handles vectors, tensors, and matrices directly in hardware, eliminates the need to reduce them to scalar operations and manually implement nested loops—as required in GPU environments like CUDA.
- With high-level abstraction, developers can interact with the system at a high level—PyTorch and ONNX for AI and Matlab-like functions for DSP—without the need to write low-level code or manage registers manually. This simplifies development and significantly boosts productivity and hardware utilization.
Chiplet-based scalability
A physical implementation leverages a chiplet architecture, with each chiplet comprising two computational cores. By combining chiplets with high-bandwidth memory (HBM) chiplet stacks, the architecture enables highly efficient scaling for both cloud and edge inference scenarios.
- Data center-grade inference: the configuration pairs eight VSORA chiplets with eight HBM3e chiplets, delivering 3,200 TFLOPS of compute performance in FP8 dense mode, optimized for large-scale inference workloads in data centers.
- Edge AI configurations allow efficient tailoring of compute resources and lower memory requirements to suit edge constraints. Here, two chiplets + one HBM chiplet = 800 TFLOPS and four chiplets + one HBM chiplet = 1,600 TFLOPS.
Power efficiency as a side effect
The performance gains are clear, as is the power efficiency. The architecture delivers twice the performance-per-watt of comparable solutions. In practical terms, the chip's power draw tops out at just 500 watts, compared to over one kilowatt for many competitors.
When combined, these innovations provide multiple times the actual performance at less than half the power—offering an overall advantage of 8 to 10 times compared to conventional implementations.
CUDA-free compilation
One often-overlooked advantage of the architecture lies in its streamlined and flexible software stack. From a compilation perspective, the flow is simplified compared to traditional GPU environments like CUDA.
The process begins with a minimal configuration file—just a few lines—that defines the target hardware environment. This file enables the same codebase to execute across a wide range of hardware configurations, whether that means distributing workloads across multiple cores, chiplets, full chips, boards, or even across nodes in a local or remote cloud. The only variable is execution speed; the functional behavior remains unchanged. This makes on-premises and localized cloud deployments seamless and scalable.
A familiar flow without complexity
Unlike CUDA-based compilation processes, the flow avoids layers of manual tuning and complexity by taking a more automated and hardware-agnostic compilation approach.
The flow begins by ingesting standard AI inputs, such as models defined in PyTorch. These are processed by a proprietary graph compiler that automatically performs essential transformations such as layer reordering or slicing for optimal execution. It extracts weights and model structure and then outputs an intermediate C++ representation.
This C++ code is then fed into an LLVM-based backend, which identifies the compute-intensive portions of the code and maps them to the architecture. At this stage, the system becomes hardware-aware, assigning compute operations to the appropriate configuration—whether it’s a single A tile, an edge device, a full data center accelerator, a server, a rack or even multiple racks in different locations.
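As a generic illustration of the front-end handoff described above (standard PyTorch-to-ONNX export only; the proprietary graph compiler and the C++/LLVM back-end stages are not shown and are not publicly documented), a model would enter such a flow roughly like this:

```python
import torch

# Placeholder network standing in for a real inference model
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
model.eval()

dummy_input = torch.randn(1, 16)   # example input the exporter traces with

# Export to ONNX; a vendor toolchain would ingest a file like this (or the
# PyTorch graph directly) before applying its own transformations.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["activations"],
    output_names=["logits"],
    opset_version=17,
)
```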
Invisible acceleration for developers
From a developer’s point of view, the accelerator is invisible. Code is written as if it targets the main processor. During compilation, the compilation flow identifies the code segments best suited for acceleration and transparently handles the transformation and mapping to hardware, lowering the barrier for adoption and requiring no low-level register manipulation or specialized programming knowledge.
The instruction set is high-level and intuitive, carrying over capabilities from its origins in digital signal processing. The architecture supports AI-specific formats such as FP8 and FP16, as well as traditional DSP operations like FP16 arithmetic, all handled automatically on a per-layer basis. Switching between modes is instantaneous and requires no manual intervention.
Pipeline-independent execution and intelligent data retention
A key architectural advantage is pipeline independence—the ability to dynamically insert or remove pipeline stages based on workload needs. This gives the system a unique capacity to “look ahead and behind” within a data stream, identifying which information must be retained for reuse. As a result, data traffic is minimized, and memory access patterns are optimized for maximum performance and efficiency, reaching levels unachievable in conventional AI or DSP systems.
Built-in functional safety
To support mission-critical applications such as autonomous driving, functional safety features are integrated at the architectural level. Cores can be configured to operate in lockstep mode or in redundant configurations, enabling compliance with strict safety and reliability requirements.
In the final analysis, a memory architecture that eliminates traditional bottlenecks, compute units tailored for tensor operations, and unmatched power efficiency sets a new standard for AI inference.
Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation.
Related Content
- AI at the edge: It’s just getting started
- Custom AI Inference Has Platform Vendor Living on the Edge
- Partitioning to optimize AI inference for multi-core platforms
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
The post The next AI frontier: AI inference for less than $0.002 per query appeared first on EDN.
Why modulate a power amplifier?—and how to do it

We recently saw how certain audio power amplifiers can be used as oscillators. This Design Idea shows how those same parts can be used for simple amplitude modulation, which is trickier than it might seem.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The relevant device is the TDA7052A, which we explored in some detail while making it oscillate. It has a so-called logarithmic gain-control input, the gain in dB being roughly proportional to the voltage on that pin over a limited range.
However, we may want a reasonably linear response, which would mean undoing some of the chip designers’ careful work.
First question: why—what’s the application?
The original purpose of this circuit was to amplitude-modulate the power output stage of an infrasonic microphone. That gadget generated both the sub-10‑Hz baseband signal and an audio tone whose pitch varied linearly with it, allowing one to hear at least a proxy for the infrasonics. The idea was to keep the volume low during relatively inactive periods and only increase it during the peaks, whether those were positive or negative, so that frequency and amplitude modulation would work hand in hand.
The two basic options are to use the device’s inherent “log” law (more like antilog), so that the perceived loudness was modulated, or to feed the control pin with a logarithmically-squashed signal—the inverse of the gain-control curve—to linearize the modulation. The former is simpler but sounded rather aggressive; the latter, more complicated but smoother, so we’ll concentrate on that. The gain-control curve from the datasheet, overlaid with real-life measurements, is shown in Figure 1. Because we need gain to drive the speaker, we can only use the upper, more bendy, part of the curve, with around 26 dB of gain variation available.
Figure 1 The TDA7052A’s control voltage versus its gain, with the theoretical curve and practical readings.
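To make the "squashing" concrete: if the usable part of the control characteristic is treated as dB-linear, then linear amplitude modulation means driving Vcon with the inverse of that curve. The sketch below uses an assumed slope, reference voltage, and gain span purely for illustration; the real numbers should be read off Figure 1 or the datasheet.

```python
import math

# Illustrative dB-linear model of the usable control region (assumed values,
# NOT measured TDA7052A numbers):
#   gain_dB = GAIN_AT_REF_DB + SLOPE_DB_PER_V * (vcon - V_REF)
SLOPE_DB_PER_V = 60.0    # assumed dB of gain change per volt on Vcon
V_REF = 1.0              # assumed Vcon at the reference (maximum used) gain
GAIN_AT_REF_DB = 30.0    # assumed gain at V_REF
SPAN_DB = 26.0           # usable gain-variation range, per the text

def vcon_for_relative_amplitude(a):
    """Control voltage giving output amplitude a (0 < a <= 1) relative to
    full scale, i.e. the 'logarithmically-squashed' drive signal."""
    a = max(a, 10 ** (-SPAN_DB / 20))          # clamp to the usable range
    wanted_db = GAIN_AT_REF_DB + 20 * math.log10(a)
    return V_REF + (wanted_db - GAIN_AT_REF_DB) / SLOPE_DB_PER_V

for a in (1.0, 0.5, 0.1, 0.05):
    print(f"a = {a:4.2f} -> Vcon ~ {vcon_for_relative_amplitude(a):.3f} V")
```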
For accurate linear performance, an LM13700 OTA configured as an amplitude modulator worked excellently, but needed a separate power output stage and at least ±6-V supplies rather than the single, split 5-V rail used for the rest of the circuitry. An OTA’s accuracy and even precision are not needed here; we just want the result to sound right, and can cut some corners. (The LM13700’s datasheet is full of interesting applications.)
Next question: how?
At the heart of this DI is an interesting form of full-wave rectifier. We’ll look at it in detail, and then pull it to pieces.
If we take a paralleled pair of current sources, one inverting and the other not, we can derive a current proportional to the absolute value of the input: see Figure 2.
Figure 2 A pair of current sources can make a novel full-wave rectifier.
The upper, inverting, section sources current towards ground when the input is positive (with respect to the half-rail point), and the lower, non-inverting part does so for negative half-cycles. R1 sets the transconductance for both stages. Thus, the output current is a function of the absolute value of the input voltage. It’s shown as driving R4 to produce a voltage with respect to 0 V, which sounds more useful than it really is.
Conventional full-wave rectifiers usually have a voltage output, stored on a capacitor, and representing the peak levels. This circuit can’t do that: connecting a capacitor across R4 merely averages the signal. To extract the peaks, another stage would be needed: pointless. By the way, the original thoughts for this stage were standard precision rectifiers with incorporated or added current sources, but they proved to be more complicated while performing no better—except for inputs below ~5 mV, where they had less “crossover distortion.”
The maximum output voltage swing is limited by the ratios of R4 to R2 (or R3). Excessive positive inputs will tend to saturate Q1, so VOUT can approach Vs/2. (The transistor’s emitter is servoed to Vs/2.) With R4 = R2 = R3, negative swings saturate Q2, but the ratio of R3 and R4 means that VOUT can only approach Vs/4. Q1 and Q2 respond differently to overloads, with Q2’s circuit folding back much sooner. If R2, R3, and R4 are all equal, the maximum unclipped voltage swing across R4 is just less than a quarter of the supply rail voltage.
Increasing R1 and making R4 much greater than R2 or R3 allows a greater swing for those negative inputs, but at the expense of increased offset errors. Adding an extra gain stage would give those same problems while needing more parts.
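A quick behavioral check of Figure 2 (a numerical sketch, not a SPICE model; it ignores offsets and the ~5-mV crossover region mentioned above): the output current follows |Vin|/R1, and the voltage across R4 clips asymmetrically as described in the text. R1 = 100k matches the article and the 5-V rail matches the rest of the circuitry, while the R4 value here is only an illustration.

```python
VS = 5.0      # supply rail; inputs are referred to the half-rail point
R1 = 100e3    # sets the transconductance: i_out ~ |vin| / R1
R4 = 100e3    # illustrative load resistor converting the current to a voltage

def rectifier_vout(vin):
    """Approximate voltage across R4 for an input vin (volts, relative to
    the half-rail point), with the asymmetric clip levels from the text:
    ~Vs/2 for positive inputs, ~Vs/4 for negative ones (R2 = R3 = R4)."""
    i_out = abs(vin) / R1                      # full-wave rectified current
    clip = VS / 2 if vin >= 0 else VS / 4
    return min(i_out * R4, clip)

for vin in (-2.5, -1.0, -0.25, 0.25, 1.0, 2.5):
    print(f"vin = {vin:+5.2f} V -> vout ~ {rectifier_vout(vin):.2f} V")
```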
Applying the current source to the power amp
Conclusion: This circuit is great for sourcing a current to ground, but if you need a linear voltage output, it’s less useful. We don’t want linearity but something close to a logarithmic response, or the inverse of the power amp’s control voltage. Feeding the current through a network containing a diode can do just that, and the resulting circuit is shown in Figure 3.
Figure 3 Schematic of a power amplifier that is amplitude-modulated using the dual current source.
The current source is just as described above. With R1 = 100k, the output peaks at 23 µA for ±2.5 V inputs. That current feeds the network R4/R5/D3, which suitably squashes the signal, ready for buffering into A2’s Vcon input. The gain characteristic is now much more linear, as the waveforms in Figure 4 indicate. The TDA7052A’s Vcon pin normally either sinks or sources current, but emitter follower Q3 overrides that as well as buffering the output from the network.
Figure 4 Some waveforms from Figure 3, showing its operation.
To show the operation more cleanly, the plots were made using a 10-Hz tri-wave to modulate a 700-Hz sine wave. (The target application would have an infrasonic signal—from, say, 300 mHz to 10 Hz—modulating a pitch-linear audio tone ranging from about 250 to 1000 Hz depending on the signal’s absolute level.)
Some further notes on the circuitry
The values for R4/R5/D3 were optimized by a process of heuristic iteration, which is fancy-speak for lots of fiddling with trimmers until things looked right on the ’scope. These worked for me with the devices to hand. Others gave similar results; the absolute values are less important than the overall combination.
R7 and R8 may seem puzzling: there’s nothing like them on the PA’s datasheet. I found that applying a little bias to the audio input pin helps minimize the chip’s internal offsets, which otherwise cause some (distorted) feedthrough from the control voltage to the outputs. With a modulating input but no audio present, trim R7 for minimum signal at the output(s). The difference is barely audible, but it shows up clearly on a ’scope as traces that are badly slewed.
The audio feed needs to come from a volume-control pot. While it might seem more obvious to incorporate gain control in the network driving A2.4—after all, that’s the primary function of that pin—that proved over-complicated, and introduced yet more temperature effects.
Temperature effects! The current source is (largely) free of them, but D3, Q3, and A2 aren’t, and I have made no attempt to compensate for their contributions. The practical solution is to make R6 variable: a large, user-friendly knob labeled “Effect”, thus turning the problem into A Feature.
A2’s Vcon pin sinks/sources some (temperature-dependent) current, so varying R6 allows reasonable, if manual, temperature compensation. Because its setting affects both the gain and the part of the gain curve that we are using, the effective baseline is shifted, allowing more or less of the audio corresponding to low-level modulating signals to pass through. Figure 5 shows its effect on the output at around 20°C.
Figure 5 Varying R6 helps compensate for temperature problems and allows different audible effects.
Don’t confuse this circuit with a “proper” amplitude modulator! But for taking an audio signal, modulating it reasonably linearly, and driving the result directly into a speaker, it works well. The actual result can be seen in Figure 6, which shows both the detected infrasonic signal resulting from a gusty day and the audio output, whose frequency changes are invisible with the timebase used, but whose amplitude can be seen to track the modulating signal quite nicely.
Figure 6 A real-life infrasonic signal with the resulting audio modulated in both frequency (too fast to show up here) and amplitude.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Power amplifiers that oscillate— Part 1: A simple start.
- Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.
- Revealing the infrasonic underworld cheaply, Part 1
- Revealing the infrasonic underworld cheaply, Part 2
- Ultra-low distortion oscillator, part 1: how not to do it.
- Ultra-low distortion oscillator, part 2: the real deal
The post Why modulate a power amplifier?—and how to do it appeared first on EDN.
Disassembling a LED-based light that’s not acting quite right…right?

A few months back, I came across an LED-based desk lamp queued up to go out to the trash. When I asked my wife about it, she said (or at least my recollection is that she said) that it had gone dim, so she’d replaced it with another one. But the device didn’t include any sort of “dimmer” functionality, and I wasn’t (at the time, at least) aware that LED lighting’s inherent intensity could fade over time, only that it would inevitably flat-out fail at some point.
My curiosity sufficiently piqued, especially since I’d intercepted it on the way to the landfill anyway, I decided to take it apart first. It’s Hampton Bay’s 15.5 in. Black Indoor LED Desk Lamp, originally sold by Home Depot and currently “out of stock” both in-store and online; I assume it’s no longer available for purchase. Here are some stock shots of what it looks like, to start:
See: no dimmer. Just a simple on/off toggle:
I don’t remember when we bought it or what we paid for it; it had previously resided on my wife’s sewing table. The Internet Archive has four “snapshots” of the page, ranging from the end of June 2020, when it was apparently on sale for $14.71 versus the $29.97 MSRP (I hope we snagged it then!), through early December of last year. My wife took up sewing during the COVID-19 lockdown, so a 2020-era acquisition sounds about right.
Here’s what it looks like in “action” (if you can call it that) in my furnace room, striving (and effectively failing) to differentiate its “augmentation” of the baseline overhead lighting:
Turn off the room light, and the lamp’s standalone luminary capabilities still aren’t impressive:
And here’s a close-up of the light source in “action”, if you can call it that, in my office:
Scan through the reviews on the product page and, unless I overlooked something, you won’t find anyone complaining that it’s not bright enough. Several of the positive reviews go so far as to specifically note that it’s very bright. And ironically, one of the (few) negative ones indicates that it’s too bright. The specs claim that it has a 3W output (no explicit lumens rating, however, much less a color temperature), which roughly translates to a 30W incandescent equivalent.
Time to dive in. Let’s begin with the underside, where a label is attached to a felt “foot”:
A Google search on “Arcadia AL40165” reveals nothing meaningful results-wise aside from the Home Depot product page. “Intertek 4000145” isn’t any more helpful. And, regardless of when we actually bought it, this particular unit was apparently manufactured in December 2016.
Peeling back the felt “foot”, I was initially confused by the three closed-end crimp connectors revealed underneath:
until I peeled it away the rest of the way and…oh yeah, the on/off switch:
Note the wiring colors. Typically, in my experience, the “+” DC feed corresponds to the white wire, with the “-“ return segment handled by the black wire, and the “+” portion of the circuit is what’s switched. This implementation seems opposite of that convention. Hold that thought.
Now for the light source. With the lamp switched off, you can see what appears to be a central LED surrounded by several others in circumference. Conceptually, this matches the arrangement I’ve seen before with LED light bulbs, so my initial working theory was that whatever circuitry was driving the LEDs in the perimeter had failed, leaving only the central one still operational. Why there would be such a two-stage arrangement at all wasn’t obvious, although I postulated that this same hardware might find use in another lamp with a three-position (bright/dim/off) toggle switch.
Removing the diffuser:
unfortunately dashed that theory; there was only a single LED in the center:
Here’s what it looks like illuminated, this time absent the diffuser:
A brief aside at this point: what’s with the second “right?” in the title? Well, when I mentioned to my wife the other day that I’d completed the teardown but hadn’t definitively figured out why the lamp had dimmed over time, she now said that to the best of her recollection, it had always been dim. Hmmm. If indeed I’d previously misunderstood her (and of course, my default stance is to always assume my wife is right…right?), then what we have is a faulty LED from the get-go. But just for grins, let’s pretend my dimmer-over-time recollection is correct and proceed.
One other root cause possibility is that the power supply feeding the LED is in the process of failing, thereby outputting under-spec voltage and/or current. Revisiting the earlier white-vs-black wire discussion, when I initially probed the solder connections with my multimeter using standard polarity conventions, I got a negative voltage reading:
The LED theoretically could have been operating in reverse-bias breakdown (albeit presumably not for long). But more likely, in conjunction with the earlier-mentioned switch location in the circuit, the wire colors were just reversed. Yep, that’s more like it:
Note that their connections to the LED might still be reversed, however. Or perhaps the lamp’s power supply was current output-compromised. To test both of these suppositions, I probe-connected and fueled the LED with my inexpensive-and-passable power supply instead:

With the connections using standard white vs. black conventions, I got…nothing. Reversed, the LED light output weakly matched that delivered when driven by the lamp’s own power supply. And my standalone power supply also informed me that the lamp pulls 180 mA at 10 V.
About that “lamp’s own power supply”, by the way (as-usual accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes):
The label refers to it as an “LED Driver,” but I’m guessing that it’s just a normal “wall wart”, absent a plug on the output end. And a Google search of “JG-LED1-5UPPL” (that’s the number 5, not an S, by the way) further bolsters that hypothesis (“Intertek 4002637” conversely wasn’t helpful at all, aside from suggesting that this power supply unit (PSU) was originally intended for a different lamp model). But I’m still baffled by the “DC5-10V MAX” notation in the labeled output specs…???
And removing two more screws, here’s what the plate the LED is mounted to looks like when separated from the “heatsink” behind it (note the trivial dab of thermal paste between them):
All leaving me with the same question I had at the start: what caused the LED-based desk lamp’s light output to dim, either over time or from the very beginning (depending on which spouse’s story you’re going with)? The most likely remaining culprit, I’m postulating, is the phosphor layer above the LED. I’ve already noted the scant-at-best heat-transfer interface between the LED and the metal plate behind it. More generally, as this device definitely exemplifies, my research suggests that inexpensive designs skimp on the number of LEDs to keep the BOM cost down, compensating by overdriving the one(s) that remain. The resulting thermal stress prematurely breaks down the phosphor, resulting in color temperature shifts and reduced light output, along with eventual complete component failure.
That’s my take; what’s yours? Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- LDR PC Desk Lamp
- Constant-current wall warts streamline LED driver design for lamps, cabinet lights
- Magic carpets come alive with LEDs
- Can GE’s new LED bulbs help you get to sleep?
- Six LED challenges that still remain
- LED lamp cycles on and off, why?
The post Disassembling a LED-based light that’s not acting quite right…right? appeared first on EDN.
Triac and relay combination

Check out this link to On Semiconductor’s datasheet for a line of “Zero-Cross Optocoupled Triac Drivers.”
The ability to zero-cross when turning on AC line voltage to some loads may be advantageous. The following sketch in Figure 1 is a slightly simplified version of one circuit from the above datasheet.
Figure 1 A simplified triac drive arrangement that will turn on AC line voltage to a load.
The zero-crossover behavior of the triac and its driver works nicely: the control input signal at pin 2 decides whether AC is applied to the load. However, I had a somewhat different triac control requirement calling for two manually operated pushbuttons, one for turning AC power on and the other for turning AC power off, while preserving the zero-crossover feature. Another issue was that, at the required load power, the thermal burden borne by the triac would have been much too severe.
The thermal burden on the triac was relieved as shown in Figure 2.
Figure 2 The revised triac drive arrangement with a relay added such that when the pushbutton is pressed, the triac turns on AC to the load using its zero-crossover feature.
A relay was added whose coil was tied in parallel with the load and whose normally open contacts were in parallel with the anode and cathode of the triac.
When the pushbutton was pressed “on,” the triac would turn on AC to the load using its zero-crossover feature, and then the relay contacts would close across the triac. When the relay contacts closed, the load-current burden shifted away from the triac to the relay. The triac only needed to operate for the duration of the relay’s closure time, which, in the case I was working on, was approximately 50 ms, or just a little longer than three cycles of the input AC line voltage.
We had the zero-crossover benefits, and the triac never even got warm.
One normally-open pushbutton for turning on the load’s power was set up to drive the LED at the input of the optocoupler. You can come up with a million ways to accomplish that, so we’ll just leave that discussion aside.
Another normally-closed pushbutton was set up to remove the drive from the relay’s coil. With the first pushbutton assumed to be open and idle at that moment, and since the triac was already off, the relay’s contacts would open, and the load power would be turned off.
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Simple SSR has zero-cross on/off switching
- TRIAC Dimmer Avoids Snap-ON
- Optimizing the Triacs
- TRIAC Dimmer with Active Reset
The post Triac and relay combination appeared first on EDN.
Quad-core MPU adds AI edge to HMI apps

The Renesas 64-bit RZ/G3E microprocessor powers HMI applications with a quad-core Arm Cortex-A55 CPU and Ethos-U55 neural processing unit (NPU) for AI tasks. Running at up to 1.8 GHz, the Cortex-A55 handles both HMI and edge computing functions, while an integrated Cortex-M33 core performs real-time tasks independently of the main CPU to enable low-power operation.
With Full HD graphics and high-speed connectivity, the RZ/G3E is well-suited for industrial and consumer HMI systems, including factory equipment, medical monitors, retail terminals, and building automation. It outputs 1920×1080 video at 60 fps on two independent displays via an LVDS (dual-link) interface. MIPI-DSI and parallel RGB outputs are also available, along with a MIPI-CSI interface for video input and sensing tasks.
The microprocessor’s 1-GHz NPU delivers 512 GOPS for AI workloads such as image classification, object and voice recognition, and anomaly detection, while offloading the CPU. Power management features in the RZ/G3E reduce standby consumption by maintaining sub-CPU operation and peripheral functions at approximately 50 mW, dropping to about 1 mW in deep standby mode.
The RZ/G3E microprocessor is available now. Visit the product page below to check distributor availability.
The post Quad-core MPU adds AI edge to HMI apps appeared first on EDN.
Fuel gauges ensure accurate battery tracking

TI’s single-chip battery fuel gauges, the BQ41Z90 and BQ41Z50, extend battery runtime by up to 30% using a predictive modeling algorithm. Their adaptive Dynamic Z-Track algorithm delivers state-of-charge and state-of-health accuracy within 1%, enabling precise monitoring in battery-powered devices such as laptops and e-bikes.
The fuel gauges provide accurate battery capacity readings under varying load conditions, allowing designers to right-size batteries without overprovisioning. The BQ41Z90 integrates a fuel gauge, monitor, and protector for 3- to 16-cell Li-ion battery packs, while the BQ41Z50 supports 2 to 4 cells. Integration reduces board complexity and can shrink footprint by up to 25% compared to discrete implementations.
Each battery pack manager monitors voltage, current, temperature, available capacity, and other key parameters using integrated analog peripherals and an ultra-low-power 32-bit RISC processor. Both devices report data to the host system over an SMBus v3.2-compatible interface, while the BQ41Z90 also supports I²C. It additionally enables simultaneous current and voltage conversion for real-time power calculations and supports sense resistors as low as 0.25 mΩ.
Pre-production quantities of the BQ41Z90 and production quantities of the BQ41Z50 are available now on TI.com. Evaluation modules, reference designs, and simulation models are also available.
The post Fuel gauges ensure accurate battery tracking appeared first on EDN.
DDR4 memory streamlines rugged system design

Teledyne’s 16-Gbyte DDR4 memory module, designated TDD416Y12NEPBM01, is screened and qualified as an Enhanced Product (EP) for high-reliability aerospace and defense systems. The solder-down device is smaller than a postage stamp, making it well-suited for space-constrained systems where performance is critical.
Rated for -40°C to +105°C operation, the module delivers 3200 MT/s and integrates memory, termination, and passives in a compact 22 × 22-mm, 216-ball BGA package. It replaces multiple discrete components, helping to simplify board layout. An optional companion ECC chip is available for applications requiring error correction.
The TDD416Y12NEPBM01 interfaces with x64 and x72 memory buses and supports a range of processors and FPGAs, including those from Xilinx, Microchip, NXP, and Intel, as well as Teledyne’s LS1046-Space. According to Teledyne, the DDR4 module achieves 42% lower power, 42% less jitter, and 39% PK/PK reduction compared to conventional SODIMMs.
To request further information on the TDD416Y12NEPBM01, click here.
The post DDR4 memory streamlines rugged system design appeared first on EDN.
Accelerator speeds data, cuts latency

With over twice the throughput of its predecessor, MaxLinear’s Panther V storage accelerator achieves 450 Gbps, scalable to 3.2 Tbps. It enables low-latency data processing across file, block, and object storage in HPC, hyperscale, hyperconverged, and AI/ML environments.
Panther V offloads the CPU from compute-intensive data transformation tasks—including compression, deduplication, encryption, and real-time verification. According to MaxLinear, the hardware-based approach offers higher performance, lower storage costs, and improved energy efficiency compared to conventional software-only, FPGA-based, and other competing solutions.
Panther V features a PCIe Gen5 x16 interface to fully leverage the bandwidth of next-generation server platforms. Its MaxHash-based deduplication, combined with deep compression algorithms, achieves data reduction ratios of up to 15:1 for structured data. By reducing CPU and memory bandwidth demands, as well as storage device usage, Panther V helps lower both capital and operating costs. Built-in reliability features ensure high data integrity and six-nines availability.
MaxLinear will unveil Panther V at the upcoming Flash Memory Summit (FMS25).
The post Accelerator speeds data, cuts latency appeared first on EDN.
TIA advances PAM4 optical performance

Designed for 400G and 800G optical networks, Coherent’s CHR1065 100G transimpedance amplifier (TIA) operates at 56 Gbaud using PAM4 modulation. It joins the company’s open-market ASIC portfolio, offering four channels with a 750-µm optical pitch suited for compact DR, FR, and LR module configurations.
The CHR1065 minimizes input-referred noise to 2.3 µA RMS, enhancing receiver sensitivity for longer reach. High linearity up to 2.5 mA ensures reliable performance across varying link budgets. Consuming just 227 mW per channel at 25°C, the TIA supports dense deployments in power-constrained data centers. An I²C interface enables integration with system-level monitoring and control functions.
Tested to JEDEC standards for lifetime reliability, the CHR1065 is now available as a wire-bondable bare die and in full volume production. Engineering samples ship in 25-piece waffle packs. For more information or to request samples, click here.
The post TIA advances PAM4 optical performance appeared first on EDN.
Power Tips #143: Tips for keeping the power converter cool in automotive USB PD applications

Today’s car buyers, whether purchasing premium or economy models, expect to charge multiple devices simultaneously through in-vehicle USB ports. To meet this demand, automakers are replacing legacy USB Type-A ports with multiple USB Type-C ports that support the latest USB power delivery (PD) standards. These standards enable significantly higher power levels—up to 48 V and 240 W—suitable for fast-charging laptops, tablets, and phones.
USB PD controllers operate alongside internal or external DC/DC converters, which add their own thermal stress to the system. This challenge becomes even more critical in automotive, industrial, and other space-constrained designs where airflow is minimal and ambient temperatures are high. If left unmanaged, excessive heat can damage components and degrade system reliability. Elevated temperatures accelerate the aging of semiconductors and passive components, cause solder joint fatigue, and, in the worst cases, can lead to printed circuit board (PCB) delamination or thermal runaway. These risks make thermal management a priority in system-level USB PD designs, especially when long-term reliability or safety is a requirement. In this power tip, I’ll explore different methods to manage heat and improve system reliability when implementing automotive USB PD solutions.
A typical 12-V battery automotive system needs these components to implement a USB PD charging port:
- A DC/DC converter. The converter steps the 12-V battery voltage up to the desired USB output (commonly 5 V to 20 V, up to 60 W, or even 48 V and 240 W with the latest USB PD specifications).
- A controller that supports USB PD. This controller is at the heart of modern high-power charging systems, negotiating power roles and voltage levels with connected devices. The TPS26744E-Q1 from Texas Instruments (TI) is an example of a dual-port automotive controller that manages USB PD profiles and controls the associated DC/DC converter.
Challenges when designing high-power USB PD from a 12-V rail include:
- Wide voltage variations: Both input (car battery) and output (USB Type-C port and connected load) voltages vary significantly, requiring a reliable and flexible power architecture.
- High current requirements: Delivering 100 W from a 12-V input can require more than 10 A of input current (see the quick estimate after this list), necessitating large inductors, MOSFETs with low drain-to-source on-resistance, and careful PCB layout to manage losses in the power components.
- Thermal bottlenecks: Most designs use buck-boost converters with four external MOSFETs, which can introduce substantial thermal stress under high load conditions, especially at low input voltages and high output power.
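To put the current requirement in perspective, here is a minimal back-of-the-envelope sketch in C. The 90% efficiency and the 10.5-V sagged-battery case are assumptions for illustration, not figures from this article; the point is simply that the input current approaches or exceeds 10 A once the battery voltage droops under load.

```c
#include <stdio.h>

/* Back-of-the-envelope input-current estimate for a USB PD port fed
 * from a 12-V automotive rail. The 100-W output, ~90% converter
 * efficiency, and sagged 10.5-V battery voltage are illustrative
 * assumptions, not values from the article. */
int main(void)
{
    const double p_out_w    = 100.0;                  /* USB PD output power */
    const double efficiency = 0.90;                   /* assumed converter efficiency */
    const double v_in_v[]   = { 13.5, 12.0, 10.5 };   /* charging, nominal, sagged battery */

    for (int i = 0; i < 3; i++) {
        double i_in = p_out_w / (efficiency * v_in_v[i]);
        printf("Vin = %4.1f V -> Iin ~ %.1f A\n", v_in_v[i], i_in);
    }
    return 0;   /* at 10.5 V and 90% efficiency, Iin exceeds 10 A */
}
```

At a nominal 12 V the estimate lands just above 9 A; it is the sagged-battery corner that pushes the input current past 10 A, and that is the operating point the inductor, MOSFETs, and PCB copper must be sized for.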
The automotive industry is transitioning toward 48-V power architectures, which simplifies USB PD designs and improves thermal efficiency. With a higher input voltage, a buck-only topology is sufficient, replacing the more complex buck-boost design. You’ll need fewer external components: no four-FET bridge, plus significantly reduced inductor size and current-rating requirements.
For example, TI’s LM72880-Q1 is an integrated automotive-grade buck converter suitable for 48-V input USB PD applications. Figure 1 shows two USB PD DC/DC converters: a buck-boost converter running off a 12-V battery on the left and a buck-only converter running off a 48-V battery on the right. You can see that the total solution size and component count are much lower for the 48-V-based system, which achieves a 58% reduction in PCB area, from 1.75 in² to 0.74 in².
Figure 1 Buck-boost topology for 12-V architecture versus a buck-only topology for the 48-V architecture. Source: Texas Instruments
Switching frequency has a direct impact on power loss. Higher frequencies reduce the size of passive components but increase switching losses in MOSFETs; lower frequencies reduce switching losses but increase inductor ripple, and may require larger output filters.
Figure 2 compares the same board working at different switching frequencies, with 400 kHz to the left and 200 kHz to the right.
Figure 2 Thermal images of the same board working at a switching frequency of 400 kHz (left) versus 200 kHz (right). Source: Texas Instruments
The thermal test comparing a 400 kHz versus 200 kHz switching frequency (both at a 54-V input, 5-A output, and with fan cooling) shows that lowering the frequency reduces converter temperature by 18°C. The inductor temperature does rise slightly from 60°C to 63°C, indicating the need for output filtering to balance the heat distribution.
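A first-order estimate illustrates why the lower frequency runs cooler. The sketch below uses the common hard-switching loss approximation Psw ≈ 0.5 × V × I × (tr + tf) × fsw; the 20-ns combined rise/fall time is an assumed value, not a measurement from the boards in Figure 2, so treat the numbers as directional only.

```c
#include <stdio.h>

/* Rough hard-switching loss estimate, Psw ~ 0.5 * V * I * (tr + tf) * fsw,
 * showing why halving the switching frequency roughly halves the MOSFET
 * switching loss. All values are illustrative assumptions. */
int main(void)
{
    const double v_sw   = 54.0;       /* switched voltage, V */
    const double i_sw   = 5.0;        /* switched current, A */
    const double t_rf   = 20e-9;      /* assumed combined rise + fall time, s */
    const double f_sw[] = { 400e3, 200e3 };

    for (int i = 0; i < 2; i++) {
        double p_sw = 0.5 * v_sw * i_sw * t_rf * f_sw[i];
        printf("fsw = %3.0f kHz -> Psw ~ %.2f W per FET\n",
               f_sw[i] / 1e3, p_sw);
    }
    return 0;
}
```

Halving fsw halves this loss term per FET, which is consistent with the converter running noticeably cooler at 200 kHz while conduction loss stays essentially unchanged.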
Thicker copper, more PCB layers
PCB design plays a crucial role in thermal management. Increasing copper thickness and adding more layers can significantly reduce temperature rise, especially when fan cooling is not available.
Figure 3 shows thermal images from two similarly sized boards. The board on the left is four layers, each with 1 oz of copper. The board on the right is six layers, with 2 oz of copper for the top and bottom layers and 1 oz of copper for the inner layers.
Figure 3 Thermal images of two PCBs: one with four layers with 1 oz of copper each (left) and one with six layers with 2 oz outer layers and 1 oz inner layers (right). Source: Texas Instruments
Both boards operate at a 48-V input and a 20-V output with 400 kHz switching. The board on the right carries 5 A versus 4.25 A for the board on the left, yet experiences 50% less temperature rise from improved heat dissipation. This underscores the importance of investing in copper-heavy PCB stacks for thermally demanding automotive applications.
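The benefit is easy to see with a simple conduction-loss estimate. The sketch below computes the I²R loss of a hypothetical power trace at roughly 1-oz (≈35 µm) and 2-oz (≈70 µm) copper weights; the 50-mm × 5-mm trace and 5-A current are illustrative assumptions, not dimensions taken from the boards in Figure 3.

```c
#include <stdio.h>

/* Illustrative I^2*R estimate for a power trace: doubling copper
 * thickness halves trace resistance and conduction loss. Trace
 * dimensions and current are assumptions, not board data. */
int main(void)
{
    const double rho_cu  = 1.72e-8;            /* copper resistivity, ohm*m */
    const double length  = 0.05;               /* 50-mm trace */
    const double width   = 0.005;              /* 5-mm trace */
    const double current = 5.0;                /* load current, A */
    const double thick[] = { 35e-6, 70e-6 };   /* ~1-oz and ~2-oz copper */

    for (int i = 0; i < 2; i++) {
        double r = rho_cu * length / (thick[i] * width);
        printf("%.0f um copper: R ~ %.1f mOhm, I^2R ~ %.2f W\n",
               thick[i] * 1e6, r * 1e3, current * current * r);
    }
    return 0;
}
```

Doubling the copper thickness halves the trace resistance and its I²R loss, and the heavier planes also spread heat laterally more effectively.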
Thermal foldback
Traditional protection methods often rely on thermal shutdown, completely disabling the system when a temperature threshold is crossed. While thermal shutdown protects hardware, this approach is abrupt and disruptive. In applications where continuous operation is preferable to complete shutdown—such as in automotive infotainment, industrial USB charging, or consumer docking stations—thermal shutdown simply doesn’t provide a good user experience.
USB PD controllers today, including those from TI, support firmware-configurable thermal foldback, a more sophisticated, dynamic thermal response system that reduces power delivery as temperature rises. Instead of cutting power entirely, the controller steps down the VBUS output power, allowing the system to cool while still maintaining basic functionality. It’s a “fail-soft” approach that maintains safety and system uptime.
TI’s USB PD controllers monitor system temperature through an external negative temperature coefficient (NTC) thermistor connected to an analog-to-digital converter (ADC) input. The firmware evaluates this voltage to assess temperature conditions. As the temperature rises, the system progresses through configurable thermal phases, each with increasing levels of power reduction.
In Figure 4, thermal foldback is divided into three thermal phases, each representing a higher level of thermal severity:
- Phase 1: Mild temperature rise. Power is reduced slightly to limit thermal buildup.
- Phase 2: Intermediate temperature. Power delivery is throttled further to stabilize the system.
- Phase 3: High-temperature alert. Power is significantly reduced or disabled to avoid dangerous overheating.
Figure 4 Thermal thresholds rising and falling with three main phases of thermal foldback. Source: Texas Instruments
Each phase is defined by two voltage thresholds: a rising (Vth_R) and falling (Vth_F) threshold, creating hysteresis to prevent rapid toggling between phases when temperatures hover around a transition point.
In response to phase transitions, the USB PD controller will renegotiate the USB PD contract with the connected sink device. The maximum power allowed in each phase is configurable, offering precise control. For example, if the maximum port power is 100 W, thermal foldback could reduce the power to 60 W when entering phase 1, 27 W in phase 2, and 7.5 W in phase 3.
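As a concrete illustration of the phase scheme, here is a minimal firmware-style sketch in C. It assumes an NTC divider whose ADC voltage falls as temperature rises; the threshold voltages are made-up placeholders (real thresholds are set through the controller’s configuration), and the 100/60/27/7.5-W limits simply mirror the example above.

```c
#include <stdio.h>

/* Minimal sketch of thermal foldback with hysteresis, assuming an NTC
 * divider whose ADC voltage falls as temperature rises. Thresholds and
 * per-phase power limits are illustrative placeholders only. */

typedef struct {
    double vth_r;     /* rising-temperature (falling-voltage) entry threshold, V */
    double vth_f;     /* falling-temperature (rising-voltage) exit threshold, V  */
    double max_power; /* advertised port power in this phase, W */
} foldback_phase_t;

/* Phase 0 = normal operation; phases 1..3 = increasing severity. */
static const foldback_phase_t phases[] = {
    { 0.00, 0.00, 100.0 },   /* phase 0: full power             */
    { 1.20, 1.30,  60.0 },   /* phase 1: mild temperature rise  */
    { 0.90, 1.00,  27.0 },   /* phase 2: intermediate           */
    { 0.60, 0.70,   7.5 },   /* phase 3: high-temperature alert */
};

static int current_phase = 0;

/* Call periodically with the NTC ADC reading; returns the power limit
 * the controller should renegotiate in the PD contract. */
double thermal_foldback_update(double v_ntc)
{
    /* Escalate: NTC voltage dropped below the next phase's entry threshold. */
    while (current_phase < 3 && v_ntc < phases[current_phase + 1].vth_r)
        current_phase++;

    /* De-escalate: NTC voltage recovered above this phase's exit threshold. */
    while (current_phase > 0 && v_ntc > phases[current_phase].vth_f)
        current_phase--;

    return phases[current_phase].max_power;
}

int main(void)
{
    const double samples[] = { 1.5, 1.25, 1.22, 0.95, 0.65, 0.95, 1.35 };
    for (int i = 0; i < 7; i++) {
        double limit = thermal_foldback_update(samples[i]);
        printf("Vntc = %.2f V -> limit %.1f W (phase %d)\n",
               samples[i], limit, current_phase);
    }
    return 0;
}
```

Because each phase enters at a lower voltage than it exits, a reading hovering near a boundary does not cause the power contract to toggle back and forth, which is the hysteresis behavior described above.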
Thermal foldback is no longer a luxury feature; it’s a necessity in high-power USB PD designs. With firmware-configurable behavior, TI’s USB PD controllers give engineers the flexibility to maintain safe, efficient operation under thermal stress without sacrificing usability or system availability. By stepping power down intelligently instead of shutting off entirely, thermal foldback improves product reliability, extends component life, and delivers a better end-user experience in demanding environments.
USB PD thermal management
Thermal management is an important design consideration in automotive USB PD applications. By leveraging higher-voltage systems, optimizing switching frequency, and investing in PCB design, you can significantly reduce heat-related stress and improve overall reliability. TI offers a range of automotive-grade USB PD controllers and DC/DC converters, such as the TPS26744E-Q1 and LM72880-Q1, to help you design compact, efficient, and thermally reliable USB Type-C charging solutions.
Josh Mandelcorn has been with Texas Instruments’ Power Design Services team for two decades, focused on designing power solutions for automotive and communications/enterprise applications. He has designed high-current multiphase converters to power the core and memory rails of processors that handle large, rapid load changes with stringent voltage undershoot/overshoot requirements. He previously designed off-line AC/DC converters in the 250-W to 2-kW range with a focus on emissions compliance. He is listed as either an author or co-author on 17 US patents related to power conversion. He received a BSEE degree from Carnegie Mellon University, Pittsburgh, Pennsylvania.
Seong Kim is an Applications Engineer at Texas Instruments, where he focuses on automotive USB Power Delivery and DC/DC converter solutions. With over a decade of experience at TI, Seong has supported a wide range of embedded and power designs – from Wi-Fi/Bluetooth MCUs for IoT to high-speed USB-C and PD systems in automotive environments. He works closely with automotive OEMs and Tier-1s to enable reliable fast-charging systems, and is regarded as a go-to expert on PD integration challenges. Seong has authored technical collateral and training materials used across TI’s global customer base, and is listed as an inventor on a pending U.S. patent related to USB Power Delivery. He holds a BSEE from the University of Texas at Dallas and is based in Dallas, Texas.
Stefano Panaro is a Systems Engineer in Texas Instruments’ Power Design Services team, focused on designing power solutions for automotive and communications applications. His main focus is the design of DC/DC converters with power levels ranging from milliwatts to kilowatts. He received his BS in ECE and his MS in Electronic Engineering from Politecnico di Torino, Italy.
Related Content
- A quick and practical view of USB Power Delivery (USB-PD) design
- USB Power Delivery: incompatibility-derived foibles and failures
- Power Tips #130: Migrating from a barrel jack to USB Type-C PD
- Power Tips #75: USB Power Delivery for automotive systems
- Power Tips #142: A comparison study on a floating voltage tracking power supply for ATE
Additional resources
- For more information, see the reference design from TI, “Automotive, 24V to 60V input, two-port USB Power Delivery 60W maximum per port reference design.”
The post Power Tips #143: Tips for keeping the power converter cool in automotive USB PD applications appeared first on EDN.
A new IDM era kickstarts in the gallium nitride (GaN) world

The news of TSMC exiting the gallium nitride (GaN) foundry business has stunned the semiconductor industry and laid the groundwork for integrated device manufacturers (IDMs) like Infineon Technologies to seize the moment and fill the vacuum.
Technology and trade media are abuzz with how GaN power device manufacturing differs from traditional power semiconductor production, and why it doesn’t create strong demand for foundry services. Industry watchers are also pointing to rising price pressure from Chinese GaN fabs as a driver for TSMC’s exit.
To offer clarity on this matter, EDN recently spoke with Johannes Schoiswohl, senior VP and GM of Business Line GaN Systems at Infineon. We began by asking how GaN manufacturing differs from mainstream silicon fabrication. “They are fundamentally not so different because we start with a silicon wafer and then grow epitaxy of GaN on top of it,” he said.
The dedicated epitaxial machines conduct the process of growing a GaN layer on top of a silicon substrate. “That’s the key difference,” Schoiswohl added. “From then onward, when GaN epi is grown, we use processes and tools similar to silicon fabrication.”
Figure 1 Johannes Schoiswohl explains the engineering know-how required in GaN fabrication. Source: Infineon
GaN’s journey to 300 mm
While China’s Innoscience claims to be the world’s largest 8-inch GaN IDM, operating a dedicated GaN-on-silicon facility, Infineon is betting on 300-mm GaN manufacturing. The German chipmaker plans to produce the first 300-mm GaN samples by the end of 2025 and kickstart volume manufacturing in 2026.
That will make Infineon the first semiconductor manufacturer to successfully develop 300-millimeter GaN power wafer technology within its existing high-volume manufacturing infrastructure. “We were able to move from 6-inch to 8-inch quickly and now to 300-mm because we could use the existing silicon equipment, and that’s beautiful from a capex perspective,” said Schoiswohl.
Figure 2 GaN production on 300-mm wafers is technically more advanced and significantly more efficient compared to established 200-mm wafers. Source: Infineon
What’s really new is the 300-mm epi tool, he added. “Moving to 300-mm fabrication is indeed challenging because there are a lot of engineering issues that need to be resolved,” Schoiswohl said. The GaN layer on top of the silicon layer has a different crystal structure, which causes a lot of strain and mismatch. Additionally, there could be a significant amount of wafer breakage. “It means that a lot of engineering know-how will go into the 300-mm GaN fabrication,” he said.
In a report published in Star Market Daily, Innoscience CEO Weiwei Luo acknowledged significant barriers that hinder the commercial realization of 12-inch or 300-mm GaN wafers. He especially mentioned the lack of metal-organic chemical vapor deposition (MOCVD) equipment capable of supporting 300-mm GaN epitaxy; MOCVD is the core equipment for the epitaxial growth of GaN layers.
Regarding MOCVD tools for 300-mm wafers, Schoiswohl acknowledged that Infineon is currently in the early stages. “We are working closely with MOCVD equipment vendors.”
GaN fabrication model
TSMC’s exit has raised questions about why the foundry model is losing traction in GaN. According to Innoscience CEO Luo, power GaN devices aren’t well-suited to the traditional foundry model because they require close coupling between design, epitaxy, process, and application. That’s where the foundry-client model struggles, while the IDM model offers the needed agility and control.
Infineon’s Schoiswohl says that GaN manufacturing isn’t low margin, but what you need to do is ensure value creation. “First and foremost, you need to be cost-competitive,” he said. “You need to drive down costs aggressively, and for that, you must have a cost-effective manufacturing technology, which is 300-mm GaN wafers in this case.”
Second, IDMs like Infineon can innovate at the system level. “It’s not enough to simply develop a GaN transistor,” Schoiswohl said. “We need to have gate drivers and controllers and thus demonstrate how to create a system that offers maximum value.”
Figure 3 The system approach for GaN devices complements cost competitiveness. Source: Infineon
With optimized controllers and gate drivers, engineers can create GaN solutions that bring system costs down. That makes GaN a meaningful and profitable business; however, this is far more challenging for a foundry than for an IDM.
With 300-mm enablement and a focus on the system-level approach, Schoiswohl is confident that GaN can eventually reach cost parity with silicon. “The progress on product level triggers innovation on system level, where gate drivers and controller ICs are optimized for high-frequency implementations and new topologies.”
Future of GaN technology
While Infineon is doubling down on GaN manufacturing, Schoiswohl foresees massive advancements in the performance of GaN from a design standpoint. “We’ll see a huge drop in parasitic capacitance and on-state resistance in a given form factor.”
That, in turn, could enable high-voltage bidirectional switches, where the devices are monolithically integrated into one die. Such a switch can be turned on and off in both directions, which enables a lot of new topologies.
With TSMC’s exit from the GaN fabrication business, will IDMs be the winners in this power electronics segment? Will GaN heavyweight Infineon be able to execute its 300-mm GaN roadmap as planned? Will other fabs follow suit after TSMC’s departure? These questions make the GaN turf a lot more fun to watch.
Related Content
- Navitas Previews New Fab Plans, ‘Bidirectional GaN’
- The diverging worlds of SiC and GaN semiconductors
- A brief history of gallium nitride (GaN) semiconductors
- Infineon Successfully Develops World’s First 300 mm Power GaN Technology
- Power GaN Manufacturing Landscape: Foundries and Vertically Integrated IDMs Compete
The post A new IDM era kickstarts in the gallium nitride (GaN) world appeared first on EDN.