EDN Network

Voice of the Engineer
Address: https://www.edn.com/

5G RedCap module enables high-speed IoT connectivity

Thu, 02/26/2026 - 19:53

Cavli’s CQM220 5G Reduced Capability (RedCap) module provides power- and cost-optimized 5G connectivity for IoT applications. Compliant with 3GPP Release 17, it delivers downlink speeds up to 220 Mbps and uplink up to 120 Mbps, with LTE Cat 4 fallback for 4G compatibility.

The module features an Arm Cortex-A7 processor running up to 1.9 GHz, flexible memory configurations, and advanced power management options including eDRX/DRX modes. It comes with the OpenWrt-based OpenSDK for on-module application development, reducing external MCU dependency.

Integrated multi-constellation, dual-band GNSS with L1 and L5 support enables precise positioning using GPS, GLONASS, Galileo, BeiDou, NavIC, QZSS, and SBAS in urban, industrial, and remote environments.

The CQM220 is available in a 28.0×25.5×2.7-mm LGA package for compact embedded designs and an M.2 form factor for routers, gateways, and CPE. It provides USB 2.0, PCIe Gen2, I2C, UART, SPI, SDIO, I2S, and ADC interfaces, along with main, diversity, and GNSS antenna connections.

Samples and evaluation kits can be ordered on the product page linked below.

CQM220 product page 

Cavli Wireless 

The post 5G RedCap module enables high-speed IoT connectivity appeared first on EDN.

Jumping the Jeep: An alternative cost-effective solar cell example app

Thu, 02/26/2026 - 15:00

A solar charging kit, inexpensive as-is and purchased after further promotional enticement, enables keeping a remotely located vehicle battery topped off.

One of the things I enjoy most about technology is watching a new approach (along with products based on it) hit its high-volume stride, typically driven by one or only a couple of early applications, and then just explode from there, both replacing precursor technologies and expanding into brand new applications and markets. This has certainly been the case with LEDs: see my recent teardown (where they replaced fluorescent tubes) for an example of the former, and an earlier teardown (where their low power consumption and DC voltage foundation enabled a light bulb with integrated battery backup) for an example of the latter.

A solar revolution

Or take, as another technology case study, solar cells. Their combination of efficiency and cost-effectiveness, in combination with equally pervasive lithium battery technology, has enabled widespread replacement of predecessor SLA-based energy storage systems, both portable and whole-home permanent installations, while dramatically expanding the accessible market for such devices. At the same time, they’re helping create entirely new categories of products. Take, as a humble example, Renogy’s 10W solar trickle charger kit, two of which I purchased back in October 2024 and one of which I recently, belatedly, and finally pressed into service:

Right now, as I write this, they’re selling on Amazon for $25.17 each, brand new. A year-plus ago, during Amazon’s Prime Days sales, I got them off the Resale (formerly Warehouse) site in used, like-new condition for $17.74. I don’t think they’d even been opened by the prior purchaser(s) before getting returned. The intent at the time was to use them to keep the batteries in two of my vehicles, then outdoor-stored at a lot about a half-hour drive away, trickle-charged. But I could never figure out how to securely attach the solar cells to the vehicle covers, let alone route their outputs to the battery compartments. That said, I eventually figured that latter part out: SAE extension cables:

One of the vehicles, my 2001 Volkswagen Eurovan Camper, is now parked in my garage for critter-protection purposes. The other, a 2006 Jeep Wrangler Unlimited Rubicon, most recently mentioned last March when I discussed its then-drained battery state, is still down there (now with a permanently disconnected battery). A few months back, when I drove down and checked on it, my preparatory suspicion was confirmed; as happens every few years, the combination of persistent sun and still-frequent precipitation (rain, snow, hail…) exposure, along with also-frequent wind, had disintegrated the cover:

Successful experimentation

While waiting for the replacement cover to arrive, I had a bright idea; this’d be the perfect time to finally try out that solar cell kit! My original idea was to mount it to the now-exposed vehicle hood. But then I realized that I had an even better option available, inside the vehicle:

in combination with the 12V auxiliary power connector built into the console:

As you can see from the above image (which I snagged from an enthusiast forum thread post to save me an hour-long round-trip drive to the storage lot to take my own shot; that’s not actually my rig), there are two of them. One, the “cigarette lighter” located within the ashtray, is ignition-switched. It obviously won’t work for my purposes. The other, while (I think) still fused, otherwise routes directly to the battery; it’s always “hot”. That’s the one I needed and used:

And it works perfectly! My perhaps-obvious concern was twofold:

  • It’d either not work sufficiently (or at all), leaving me with an eventually-drained battery once again, or
  • It’d work too well, not terminating the trickle charge when it sensed a “full” state, thereby also leading to the battery’s demise (along with who-knows-what other issues).

Two weeks later, when I went back and checked (in the process of installing the new vehicle cover), I happily discovered that all my worrying was for naught; it was working exactly as planned. Now I just need to figure out how to securely attach the solar cell to the outside of the new cover, and I’ll be set! Suggestions, along with more general thoughts, are as-always welcomed in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

 

The post Jumping the Jeep: An alternative cost-effective solar cell example app appeared first on EDN.

Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers

Thu, 02/26/2026 - 15:00

As AI and machine learning workloads accelerate, data center power consumption is beginning to outstrip existing infrastructure capacity. To meet this rising demand, new high-voltage DC standards support the higher-power, denser server racks now found at gigawatt-scale facilities. These high-voltage standards create engineering challenges when monitoring high-voltage power rails.

Designers need reliable, accurate, and fast-acting voltage supervision to prevent overvoltage damage to downstream components, and to help ensure a timely system response to undervoltage conditions. This article presents a supervision approach that addresses these requirements and enables the reliable deployment of next-generation high-voltage DC architectures.

The push toward high-voltage DC architectures

The power profile of modern data centers is undergoing a dramatic shift as AI becomes the dominant application. Machine learning with large graphics processing unit arrays consumes power at levels once associated with industrial equipment rather than IT hardware. It is increasingly common for a single rack to draw 60 kW to 100 kW. Next‑generation AI systems are expected to push beyond 150 kW per rack.

Because traditional 48-V distribution designs cannot efficiently support these levels, designers are turning to a new class of high‑voltage DC standards centered around ±400 V or 800 V distribution. This shift, as shown in Figure 1, is not simply an incremental upgrade; it represents a fundamental change in the delivery of power across gigawatt‑scale facilities.

Figure 1 Conventional versus high-voltage data center power distribution. (Source: Texas Instruments)

Efficiency continues to drive the transition to higher voltages. Raising the distribution voltage reduces current, and with it the I²R conduction losses that dominate high-power distribution in cables, busbars, and connectors. Higher efficiency at large AI campuses means lower cooling requirements, improved energy performance, and increased computing density.

Higher voltages also unlock greater power‑delivery capability. Delivering 150 kW to 300 kW per rack at 48 V requires heavy conductors, parallel cabling, and complex routing. Higher voltages deliver the same power at manageable current levels, enabling simpler infrastructure and longer distribution distances without excessive copper mass.
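
To make the copper argument concrete, here is a quick back-of-the-envelope sketch in Python; the 2-mΩ cable resistance is an illustrative assumption, not a figure from the article:

```python
# Compare distribution current and resistive loss for one 150-kW rack
# at 48 V versus 800 V. The cable resistance is a hypothetical example.

def distribution_loss(power_w, voltage_v, cable_res_ohm):
    """Return (current_a, i2r_loss_w) for a given rack power and bus voltage."""
    current = power_w / voltage_v
    return current, current ** 2 * cable_res_ohm

R_CABLE = 0.002  # assumed 2-mOhm busbar/cable resistance (illustrative)

i48, loss48 = distribution_loss(150_000, 48, R_CABLE)
i800, loss800 = distribution_loss(150_000, 800, R_CABLE)

print(f"48 V:  {i48:.0f} A, {loss48/1000:.1f} kW lost")    # 3125 A, 19.5 kW
print(f"800 V: {i800:.1f} A, {loss800/1000:.3f} kW lost")  # 187.5 A, 0.070 kW
```

The ~16.7× reduction in current yields a ~278× reduction in conduction loss for the same conductor, which is the core of the efficiency case made above.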

Cost provides yet another compelling factor. Smaller conductors, lighter busbars, and reduced copper usage lower material and installation expenses. At modern hyperscale data center campuses, these reductions are substantial.

Challenges in monitoring high-voltage power rails

As data‑center power architectures migrate toward higher‑voltage DC distribution, the demands on monitoring and protection circuitry increase significantly. Operating at ±400 V or 800 V means that every disturbance or transient condition carries more stored energy, with components operating closer to their absolute limits. These conditions reduce the margin for error and make precise power‑rail supervision essential.

Designers must contend with higher fault energy levels, faster electrical dynamics, increased electromagnetic noise, and tighter system‑level coordination requirements. In this environment, monitoring circuits must distinguish between harmless fluctuations and true fault conditions, with far greater accuracy and speed than lower‑voltage systems.

With these broader challenges in mind, let’s look more closely at two specific issues surrounding under- and overvoltage events:

  • Response time. The voltage monitor must respond to faults fast enough to prevent damage to downstream components, but should not trigger erroneously from a noisy environment or short transient voltage fluctuations. For example, imagine a large current spike causing the supply voltage to drop while the power supply responds. If the voltage drops for only a very short time, it may not be considered a fault condition, thus requiring no action. As soon as the voltage is low enough to be considered a fault, however, the voltage monitor should take action as soon as possible to prevent damage.
  • Solution size. High-voltage data-center power supplies have extremely limited space, requiring the smallest possible monitoring solution. That solution must also be dependable: a voltage monitor that can be trusted to respond to faults is imperative to a reliable power supply and distribution system.
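
The response-time tradeoff described above can be sketched as a simple deglitch filter: a fault asserts only after the rail stays below threshold for a programmed number of consecutive samples, so short transients are ignored. The voltages, threshold, and sample counts here are illustrative, not values from any specific device:

```python
# Sketch of undervoltage supervision with glitch rejection.

def supervise(samples, threshold_v, deglitch_samples):
    """Return the sample index at which a fault is asserted, or None.
    A fault asserts only after `deglitch_samples` consecutive
    sub-threshold samples, so brief transients are rejected."""
    below = 0
    for i, v in enumerate(samples):
        below = below + 1 if v < threshold_v else 0
        if below >= deglitch_samples:
            return i
    return None

# An 800-V rail with a brief 2-sample dip, then a sustained sag
rail = [800] * 5 + [750] * 2 + [800] * 5 + [700] * 10
print(supervise(rail, 760, 4))  # dip ignored; fault asserts at index 15
```

A shorter deglitch window responds faster to real faults but risks nuisance trips from noise, which is exactly the balance the text describes.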

Requirements for monitoring high-voltage power rails

Figure 2 shows a minimal high-voltage monitoring circuit implementation using:

  • A high-voltage resistor ladder to step down the power rail for sensing comparators.
  • Two comparators to signal under- and overvoltage faults.
  • A voltage reference for comparators.
  • Filtering components.
  • An amplifier to provide a scaled-down voltage for the analog-to-digital converter (ADC) for analog monitoring and telemetry of the power rail.

Figure 2 High-voltage monitoring circuit building blocks. (Source: Texas Instruments)

Implementing this circuit with discrete components presents significant drawbacks. Individual component tolerances add, producing large errors unless costly, high-accuracy, low-temperature-drift parts are used. Resistors are especially problematic: each resistor’s uncorrelated error sums into a substantial cumulative error in the resistor-divider. Discrete components also consume considerable board space, which is typically at a premium in data-center applications.
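
The tolerance-stacking problem can be quantified with a short worst-case sketch; the divider values and 1% tolerance are illustrative assumptions:

```python
# Worst-case error of a discrete high-voltage resistor divider when the
# top resistor sits at +tol and the bottom at -tol (uncorrelated errors).

def divider_error_pct(r_top, r_bot, tol):
    """Worst-case % error of Vout/Vin for a two-resistor divider."""
    nominal = r_bot / (r_top + r_bot)
    worst = r_bot * (1 - tol) / (r_top * (1 + tol) + r_bot * (1 - tol))
    return (worst - nominal) / nominal * 100

# Stepping an 800-V rail down to ~1 V with 1% discrete resistors
print(f"{divider_error_pct(7.99e6, 10e3, 0.01):.2f}%")  # about -2% worst-case
```

A 2% worst-case ratio error on an 800-V rail corresponds to roughly 16 V of threshold uncertainty, which is why the article points to matched, integrated dividers instead.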

Figure 3 shows a reference layout with space requirements for high-voltage monitoring with discrete components.

Figure 3 A discrete high-voltage monitoring implementation. (Source: Texas Instruments)

An integrated solution

An integrated device for high-voltage supervision addresses these challenges by fully integrating the high-voltage resistor-divider, comparators, buffer, and additional features. The functional diagram in Figure 4 illustrates this approach, helping reduce total solution size while maintaining high performance.

By integrating the resistors, reference, and comparators, TI’s TPS371K-Q1 achieves an accuracy of 1% across the –40°C to 125°C temperature range, with a fast fault detection time of <5 µs, programmable glitch rejection and release delay time, as well as a 1% accurate high-bandwidth buffer that can directly drive 16-bit ADCs or downstream control circuits.

Figure 4 TPS371K-Q1 functional block diagram. (Source: Texas Instruments)

An integrated monitoring solution also provides significant board space savings in a compact package (Figure 5), requiring minimal external components.

Figure 5 Integrated high-voltage monitoring solution. (Source: Texas Instruments)

Application example

The implementation of a voltage monitoring system using the TPS371K-Q1 is straightforward. Figure 6 shows a basic schematic for monitoring the ±400 V or 800 V input to a DC/DC converter.

Figure 6 Voltage monitoring for a high-voltage DC/DC converter. (Source: Texas Instruments)

Using resistors on the ADJ OV and ADJ UV pins, designers can select under- and overvoltage thresholds to fit their system. The CTR and CTS pins allow the use of a capacitor to program a delay before assertion of a fault and a delay before deassertion once the voltage returns to normal. Open-drain outputs enable easy interface with logic levels other than the device’s own supply voltage. The VSENSE output pin provides a scaled representation of the SENSE input voltage for direct connection to an ADC. Designers can select voltage sense output factors with options ranging from 200 to 900.
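
As a rough sketch of selecting the sense-scaling factor, the helper below picks the smallest factor (from the 200-to-900 range the article mentions) that keeps the scaled output inside an assumed ADC full-scale range. The discrete factor steps and the 3-V ADC range are illustrative assumptions, not datasheet values:

```python
# Choosing a VSENSE scaling factor so the scaled rail fits the ADC input.

FACTORS = [200, 300, 400, 500, 600, 700, 800, 900]  # assumed available steps

def pick_factor(v_rail_max, adc_fullscale_v):
    """Smallest divide-down factor whose output still fits the ADC range.
    A smaller factor preserves more resolution, so try those first."""
    for f in FACTORS:
        if v_rail_max / f <= adc_fullscale_v:
            return f
    raise ValueError("rail too high for available factors")

f = pick_factor(880, 3.0)  # 800-V rail with 10% margin, 3-V ADC assumed
print(f, 880 / f)          # factor 300 -> ~2.93 V at maximum rail voltage
```

Consult the actual datasheet for the real factor options and the ADJ/CTR/CTS component equations before committing a design.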

Integrated monitoring solutions

The transition to high‑voltage DC architectures is reshaping design requirements for next‑generation data‑center power systems, especially as AI workloads continue to push rack‑level power far beyond the limits of today’s distribution schemes. Reliable voltage supervision becomes foundational, helping ensure high‑energy power-rail monitoring with the speed, accuracy, and reliability required to protect downstream converters and maintain system stability.

Integrated monitoring solutions such as the TPS371K-Q1 address these challenges by combining precise threshold detection, fast fault response, programmable filtering, and compact implementation into a single device optimized for the electrical and space constraints of modern data centers. By adopting advanced monitoring approaches, designers can confidently deploy ±400 V and 800 V architectures that deliver the efficiency, power density, and reliability needed to support the continued growth of AI‑driven computing at the gigawatt scale.

Henry Naguski is an applications engineer for Linear Power at Texas Instruments, working with voltage references and supervisors. He specializes in shunt voltage references and high-voltage supervisors. Henry holds a bachelor’s degree in computer engineering from Montana State University.

 

 

Masoud Beheshti leads application engineering and marketing for Linear Power at Texas Instruments. He brings extensive experience in power management, having held roles in system engineering, product line management, and marketing and applications leadership. Masoud holds a bachelor’s degree in electrical engineering from Ryerson University and an MBA with concentrations in marketing and finance from Southern Methodist University.

Related Content

The post Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers appeared first on EDN.

Variable‑reluctance sensors: From fundamentals to speed sensing

Thu, 02/26/2026 - 13:04

Variable reluctance (VR) sensors transform mechanical motion into electrical signals by exploiting changes in magnetic flux. As a ferromagnetic target moves past the sensor’s pole piece, the reluctance of the magnetic circuit varies, inducing a voltage in the coil.

This simple yet robust principle has made VR sensors indispensable in applications ranging from automotive crankshaft speed detection to industrial position monitoring. Their ability to deliver precise motion feedback without requiring external excitation makes them a cost-effective choice for engineers designing systems that demand reliable speed and position sensing.

Magnetic reluctance and VR sensors

Reluctance is a physical quantity that describes the opposition a magnetic circuit offers to the flow of magnetic flux. For instance, in the air gap of a permanent magnet—an essential part of a magnetic circuit—the reluctance is high because air has very low magnetic permeability.

This reluctance drops significantly when a piece of soft iron is placed in direct contact with the magnet’s poles, while it assumes an intermediate value if the same iron piece is positioned within the air gap without touching the poles. In each case, the magnetic field is altered accordingly.

VR sensors exploit this property by combining a permanent magnet with a coil to detect changes in magnetic flux. As ferromagnetic targets—such as gear teeth—modulate the magnetic circuit’s reluctance, an alternating voltage is induced in the coil. These passive magnetic transducers are widely applied in engine speed sensing and crankshaft/camshaft timing, valued for their ruggedness in high‑temperature and high‑performance environments.

The diagram below illustrates the operation of a VR sensor. The coil’s core is positioned close to a rotating gear, and each time a tooth passes near the sensor, the reluctance of the magnetic circuit formed by the permanent magnet changes. This variation alters the magnetic field, inducing a current in the coil and producing a voltage signal.

The frequency and amplitude of this signal are directly proportional to the gear’s rotational speed, while the direction of rotation has no effect. The signal amplitude, however, decreases as the air gap between the sensor and the gear teeth increases. Consequently, the primary limitation of VR sensors is their inability to reliably detect very slow or distant movements.

Figure 1 Schematic depicts the core arrangement of a variable reluctance sensor near a gear tooth. Source: Author

In essence, a permanent magnet forms the core of a VR sensor, establishing a fixed magnetic field. When a ferrous metal target—such as a gear tooth—approaches and passes the pole piece, the field strength changes. The alternating presence and absence of the ferrous material modulates the reluctance, or “resistance to the flow” of the magnetic field. This dynamic variation alters the field strength, inducing a current in the coil winding connected to the output terminals.

This has led to the widespread use of VR sensors across many industries. Consequently, they are also known by a range of application-specific names, including magnetic pickups, passive speed sensors, motion sensors, pulse generators, frequency generators, variable reluctance speed sensors, transducers, magnetic probes, and timing probes.

From this point onward, the discussion turns to the principal theme of the post—variable‑reluctance speed (VRS) sensors. Let us take a quick look at VRS sensors in action and the practical factors that matter most.

Note on terminology: To prevent confusion between VR and VRS, VR designates the broader class of magnetic transducers that convert motion into electrical signals, while VRS identifies the specialized subset engineered for rotational speed measurement.

Understanding VRS industrial magnetic speed sensors

A variable reluctance speed (VRS) sensor—often marketed by manufacturers as an industrial magnetic speed sensor—is a rugged, self-powered device that requires no external voltage source. It’s widely used to deliver speed, timing, and synchronization data to control circuits or displays as a pulse train, and is valued for its reliability in high-temperature, high-performance environments.

In basic terms, a VRS industrial magnetic speed sensor employs a permanent magnet, pole piece, and coil to convert the motion of a ferrous target—such as a gear tooth—into an electrical signal.

The most common target is a metal gear, but others include bolt heads, disc perforations, and turbine blades. In every case, the target must be ferrous—preferably unhardened steel—to ensure reliable signal generation.

The output of a VRS sensor is an AC voltage whose amplitude and waveform vary with the speed of the monitored device. This signal is typically specified in terms of peak-to-peak voltage (Vp-p). Each complete waveform (cycle) is generated as a target passes the sensor’s pole piece (sensing area). When a standard gear is used, the resulting output signal closely resembles a sine wave when observed on an oscilloscope.
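
Since one output cycle is generated per passing tooth, output frequency and shaft speed relate as f = teeth × RPM / 60. A minimal worked example:

```python
# Frequency/speed relationship for a VRS sensor facing a toothed gear:
# one output cycle per tooth passing the pole piece.

def vrs_frequency_hz(teeth, rpm):
    """Output frequency of a VRS sensor for a gear with `teeth` teeth."""
    return teeth * rpm / 60.0

def rpm_from_frequency(teeth, freq_hz):
    """Invert the relationship to recover shaft speed from frequency."""
    return freq_hz * 60.0 / teeth

print(vrs_frequency_hz(60, 3000))    # 60-tooth wheel at 3000 RPM -> 3000.0 Hz
print(rpm_from_frequency(60, 3000))  # 3000 Hz back to 3000.0 RPM
```

Note that only the frequency is speed-proportional in this way; amplitude also varies with speed and air gap, which is why downstream electronics key off zero crossings rather than signal level.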

Figure 2 Diagram illustrates an application example of an industrial variable reluctance speed sensor. Source: Phoenix America

Signal conditioning for VRS sensors

Conditioning the output signal from a VRS sensor is crucial before it’s processed by downstream electronics such as a microcontroller. Proper conditioning ensures that the analog signal is efficiently and reliably converted into a clean, usable form—free from interference and with an amplitude compatible with the rest of the circuitry.

Make no mistake: converting a possibly noisy analog signal with variable amplitude and frequency into a TTL/CMOS-compatible signal is a challenging task that demands careful design and robust signal-conditioning techniques.

Although signal conditioning can be implemented with discrete electronics, several semiconductor manufacturers now offer ICs specifically designed to handle this demanding task. Notably, onsemi provides the NCV1124, while Maxim Integrated, now part of ADI, offers the MAX992x family, both tailored for reliable conversion of variable-reluctance sensor outputs into clean, logic-level signals.

As a related note, this recalls some of my earlier experiments with classic interface and frequency-to-voltage converter ICs such as LM1815, LM2907, and LM2917. These devices, though older in design, provided valuable insight into the challenges of conditioning variable-reluctance sensor outputs and converting them into usable forms for measurement and control applications.

Figure 3 Simplified block diagram of MAX9924 highlights the IC’s role in transforming noisy variable-reluctance sensor inputs into clean, microcontroller-compatible signals. Source: Analog Devices

Just a quick tip: STMicroelectronics’ L9788 is a multifunction IC for automotive engine management systems. Among its many integrated features, it includes a dedicated VRS interface. This block processes crankshaft and camshaft sensor signals, offering both normal operation (conversion of differential voltages) and diagnostic mode (detecting shorts or open conditions). With adaptive hysteresis and built-in filtering, the VRS interface ensures reliable engine synchronization while reducing the need for external conditioning circuits.

Application considerations for VRS sensors

VRS sensors are not intended for sensing extremely low rotational speeds. The target passing the pole piece must travel at a minimum velocity or surface speed to generate an adequate output voltage. Proper sensor selection requires ensuring that the device delivers the necessary Vp-p at the lowest speed of interest, while still operating reliably at the maximum frequency of the application.

In most cases, the polarity of the output signal is inconsequential; when polarity matters, simply reversing the output leads resolves the issue. Furthermore, for every gear-tooth configuration, there exists an optimum pole-piece size and shape that maximize sensor output voltage, a relationship clearly documented in manufacturer datasheets. In addition, correct load resistance and precise air-gap setting are critical to achieving stable performance and consistent signal quality across the operating range.

That is all for now. Simplifying a complex topic down to its fundamentals always leaves more detail waiting in the wings. The essentials have been chalked out here; deeper layers can follow in future installments. If you found this technical take useful, share it with colleagues or add your thoughts in the comments to help shape the next deep dive.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Variable‑reluctance sensors: From fundamentals to speed sensing appeared first on EDN.

An LM317-based 0-20mA to 4-20mA 2-wire converter

Wed, 02/25/2026 - 15:00

My recent DI contribution (Another silly simple precision 0/20mA to 4/20mA converter) used the LM337 regulator in a novel circuit arrangement to translate an input 0-20 mA current source into a 4-20 mA two-wire transmitter current loop (a standard two-terminal industrial current source).

The circuit can also be flipped over to perform the same operation using the LM317, which is more widely used, hence easily available, and lower priced. As before, it relies on tapering off an initial 4 mA current to zero in proportion to the input 0-20 mA, and adding the input and the tapered off 4 mA signal to create a 2-wire 4-20 mA output loop. The operation is identical, only with the current directions reversed.
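
The taper-and-sum arithmetic works out to Iout = 4 mA + 0.8 × Iin, as this small sketch confirms:

```python
# Ideal transfer function of the converter: a 4-mA "zero" current tapers
# linearly to 0 mA as the input rises from 0 to 20 mA, and the two
# currents sum into the output loop.

def loop_out_ma(i_in_ma):
    """0-20 mA input -> 4-20 mA two-wire loop current (ideal)."""
    taper = 4.0 * (1.0 - i_in_ma / 20.0)  # 4 mA at zero input, 0 mA at 20 mA
    return i_in_ma + taper

for i in (0, 4, 10, 20):
    print(i, "->", loop_out_ma(i))  # 0->4.0, 4->7.2, 10->12.0, 20->20.0
```

The endpoints (0 mA in → 4 mA out, 20 mA in → 20 mA out) correspond to the zero and span trims described below.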

Refer to Figure 1.

Figure 1 An external boost transistor is used to limit U1 dissipation.

Wow the engineering world with your unique design: Design Ideas Submission Guide

At 0-mA input, the series combination of Rs and Rz establishes the 4-mA “zero” value in the 4-20 mA output loop, adjusted using Pz. This current is pulled from the 24-V supply, going from IN to OUT, and into the output loop via Rs and Rz.

A 20-mA input current creates a 1.25-V drop across Rs, opposing the internal reference and reducing the current through Rz to 0 mA. The 20-mA input current is pushed into point X and the negative terminal of the output loop, and pulled from the 24-V supply via OUT (through U1), creating a current of 20 mA + 0 mA in the output loop. Span setting is done with Ps.

Accurate current setting requires two span/zero (S/Z) adjustment passes to bring the output current within 0.05% transfer accuracy, or (actually much) better. Pots should be multi-turn 3296 types or similar, but single-turn trimmers may also work fairly well, as both pots have a small trim range by design.

The di/dv (output current versus loop voltage) performance, current stability, and ease of trimming are excellent. Input-to-output linearity of the basic circuit is 0.02%. The heat sink of the earlier design is replaced by an external boost transistor that “takes over” above an output loop current of about 6 mA, limiting U1 dissipation and giving di/dv performance of 0.02% over a loop voltage range of 5 V to 32 V.

For intermediate input currents, as before, a “zero” 4 mA current is tapered off to 0 mA proportional to the input 0-20 mA to give corresponding matching input and output loop currents.

A reverse-protection diode is recommended in the 4-20 mA loop. Current limiting should also be applied to keep fault currents at safe levels; in this circuit, it can easily be inserted in the Q1 emitter.

The 0-20 mA input sees a positive drop here, of 1 to 0 V.

The comments regarding current stability and potentiometers in “Another silly simple precision 0/20mA to 4/20mA converter” are applicable here too.

In conclusion, the operation of an inexpensive, novel, and precise circuit to convert 0-20mA current signals to 4-20mA 2-wire current signals is described, using the LM317 regulator and an external boost transistor.

It easily holds a short-term stability of 0.02% of span, and has a linearity of 0.02%.

Only two stable resistors are needed for good overall temperature stability.

Pot tempco, type, and contact resistance are less critical due to the configuration used.

A 3-V minimum operating voltage allows as much as 1000 Ω of loop resistance with a 24-V supply.

Ashutosh Sapre lives and works in a large city in western India. Drifting uninspired through an EE degree way back in the late nineteen-eighties, he was lucky enough to stumble across and be electrified by the Art of Electronics 1 and 2. Cut to now, he is a confirmed circuit addict, running a business designing, manufacturing, and selling industrial signal processing modules. He is proud of his many dozens of design pads consisting mostly of crossed-out design ideas.

Related Content

The post An LM317-based 0-20mA to 4-20mA 2-wire converter appeared first on EDN.

Simple shorts sniffer

Tue, 02/24/2026 - 15:00

Recently, frequent and favorite contributor Nick Cornford gave us a cool and novel acoustic-interface design for a super sub-ohmmeter capable of audibly sniffing out defects in PWBs: “Tuneful track-tracing.”

Figure 1’s design shamelessly nicks Cornford’s concept. It stretches the resistance sensing range by a few decades, thus spanning single-digit milliohms to double-digit ohms. This adds extra versatility for locating spurious connections in both loaded boards and boards with shorts in ground planes. Here’s how it works.

Figure 1 The audible milliohmmeter’s output frequency is linear versus resistance over several orders of magnitude.

Wow the engineering world with your unique design: Design Ideas Submission Guide

A 50-mA excitation current is provided to the PWB under test by R5 via connections A (source) and B (half-Kelvin sense and current return). D1 limits the maximum developed voltage to ~700 mV. This prevents (potentially damaging) forward bias of components on loaded boards in case the short being sniffed unexpectedly disappears.

The current return side of B consists of the (approximately) known resistance (44 mΩ) of a 41-inch length of 24 AWG copper wire. The resulting 44 mΩ × 0.05 A ≈ 2.2 mV drop provides a null reference for the A1a voltage-to-current amplifier. We’ll discuss that more shortly (no pun?).

The probe voltage-mode signal is converted to current mode by transconductance amplifier Q1/A1a, the associated resistor network, and range-selection switch S1. R6 provides static-discharge protection for A1’s input pin while developing only microvolts of offset from A1’s pA-level bias current. S1 provides two frequency/resistance ranges: 100 Hz/Ω and 10 kHz/Ω.

The shorts-sniffing process consists of sliding probe C along the problematic path on the PWB while listening to the resulting audio output. Its frequency rises or falls as the resistance between the probe contact and Kelvin connection B rises or falls. Maximum resolution results if a quick initial nulling of offset voltage is done via Null pot R1. It provides up to ±2 mV of input offset adjustment to cancel the op-amp offset for a zero (or near-zero) Hz output when probe C is held at the point of excitation-current entry to the PWB under test. Of course, you won’t hear the actual fundamental frequency when oscillation is that slow, only the (annoying) buzz of the square wave’s rising and falling edges.
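
Interpreting what you hear is then simple arithmetic: the probed resistance is the output frequency divided by the selected range’s scale factor. A quick sketch, with illustrative values:

```python
# Converting the sniffer's audio frequency back to resistance, given the
# two range-switch scale factors named in the text.

SCALES_HZ_PER_OHM = {"low": 100.0, "high": 10_000.0}  # 100 Hz/ohm, 10 kHz/ohm

def resistance_ohm(freq_hz, range_setting):
    """Recover the probed resistance from the audio output frequency."""
    return freq_hz / SCALES_HZ_PER_OHM[range_setting]

print(resistance_ohm(440.0, "high"))  # 440 Hz on 10 kHz/ohm -> 0.044 ohm
print(resistance_ohm(440.0, "low"))   # 440 Hz on 100 Hz/ohm -> 4.4 ohm
```

On the sensitive range, a comfortable 440-Hz tone corresponds to 44 mΩ, conveniently the same order as the reference wire’s resistance.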

The A1b (more or less symmetrical) triwave/squarewave oscillator itself is built around the two-way current mirror comprising Q2, Q3, and D2, as described in this earlier DI: “A two-way mirror—current mirror that is.”

The mirror sources current into timing cap C1, linearly ramping it up, when A1b’s pin 7 is positive, and sinks current when pin 7 is low, ramping it down. The resulting 1Vpp triwave on C1 and the squarewave on pin 7 are approximately symmetrical. 

Its actual frequency can be over the range from the subsonic to the ultrasonic, but of course (by definition), little information will be relayed to your ear by either. Thence cometh the utility of range switch S1.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

Related Content

The post Simple shorts sniffer appeared first on EDN.

A real-world approach for AI-driven semiconductor manufacturing

Tue, 02/24/2026 - 11:57

The semiconductor manufacturing industry faces an unprecedented data challenge. For the newest devices, test programs can contain over a million test items, generating gigabytes of data per chip across probe, assembly, and test operations. The largest deployments have reached the multi-petabyte range, creating a fundamental problem: traditional business intelligence tools simply cannot handle semiconductor-scale data with millions of columns and rows.

Public comments from three semiconductor executives sum up the challenge. “As a result of the increased complexity of advanced packaging, the amount of manufacturing and test data that semiconductor companies need to analyze has increased sixfold since 2022,” recently commented Mike Campbell, Qualcomm’s chief supply chain officer.

At the same event, Aziz Safa, corporate VP and GM of Intel Foundry Automation, had this to say: “We have 600 petabytes of data across Intel. The challenge that we have is to be able to run algorithms in the areas where we need that data to solve problems.”

And John Kibarian, CEO of PDF Solutions, mirrored those remarks. In many cases, he said, no more than 5% of the collected semiconductor manufacturing data is used in analytics. Yet more than ever, access to timely analytics is critical to quickly ramp the yield of new advanced process nodes or ensure the quality of complex packages. In this context, it’s critical to find new innovative ways to scale the ability to analyze semiconductor data.

One comprehensive strategy includes a plan to enhance the capability of a data platform, already widely used across the semiconductor industry, to address this challenge by combining scalable analytics infrastructure with advanced AI capabilities, including large language models (LLMs) and autonomous agents.

This approach represents a fundamental rethinking of how semiconductor manufacturers can extract actionable insights from massive, complex datasets.

The scalability problem

Traditional business intelligence (BI) tools face critical limitations in semiconductor manufacturing environments. They rely on local memory, which severely restricts analysis and machine-learning capabilities, and they lack the computational and organizational scalability demanded by semiconductor data, which may contain hundreds of thousands or even millions of parameters to analyze.

Think of a table with a million columns and hundreds of thousands of rows. Visualizing a dataset of this type is beyond the limits of a traditional data analytics or BI tool, and that approach will not address the future needs of an industry where data size and complexity keep increasing.

Typically, engineers develop bespoke scripts based on summary statistics disconnected from the original data sources, and these scripts are usually deployed without the infrastructure for robust sharing across the organization.

One answer is a new parallel and distributed data architecture with dynamic partitioning. Rather than bringing raw data to the client for analysis, the system keeps data in the server layer and delivers only the visualizations needed by users. This thin-client approach enables the system to scale dynamically based on current needs by caching in the data layer for faster access and pre-configured analytics running continuously across all available data.

The results are striking. Benchmark testing shows approximately 25-fold performance improvements on typical large test programs with the ability to work with one million test items and beyond, a scale of analysis previously impossible.

The system achieves this through parallelizable performance across both rows (individual die) and columns (test parameters), combining static compute nodes with burst cloud computing for cost-effective scaling to extremely large datasets.
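As a rough illustration of the thin-client idea described above (hypothetical function names, not the vendor’s actual implementation), the sketch below computes per-partition aggregates and merges them, so only small summaries ever leave the data layer:

```python
# Rough illustration only: hypothetical helpers, not the vendor's implementation.
# Raw rows stay in their partitions; only small aggregate dicts are merged,
# mirroring the server-side, thin-client analytics idea described above.

def summarize_partition(rows, col):
    """Per-partition aggregate for one test item: count and sum."""
    vals = [r[col] for r in rows if col in r]
    return {"n": len(vals), "sum": sum(vals)}

def merge_summaries(parts):
    """Combine partition aggregates into a global mean without moving raw rows."""
    n = sum(p["n"] for p in parts)
    total = sum(p["sum"] for p in parts)
    return total / n if n else float("nan")

# Two row partitions (e.g., die from different wafers), one test parameter:
p1 = summarize_partition([{"vth": 0.42}, {"vth": 0.44}], "vth")
p2 = summarize_partition([{"vth": 0.46}], "vth")
print(round(merge_summaries([p1, p2]), 3))  # 0.44
```

The same count/sum merge generalizes to variance and percentiles with sketch structures, which is what makes the “bring the computation to the data” pattern scale across rows and columns.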

Deploying AI models at scale across enterprise

Deploying AI in semiconductor manufacturing requires more than just training models; it demands a complete operational infrastructure. The infrastructure’s architectural strategy addresses three major operational challenges: deployment bottlenecks caused by manual handoffs and brittle integrations; data friction from building custom pipelines instead of leveraging existing systems; and governance risks from poor lineage between production models and training parameters.

One tool gaining market traction takes data scientists from code to production on semiconductor data, with a focus on deploying models at the edge. Add-on capabilities include the ability for engineers to add their own models.

An enterprise-grade model registry will enable model lifecycle governance, tracking, and sharing, with full data traceability ensuring that any model’s training inputs are always known.

Breaking down data silos

One of the most significant challenges in semiconductor manufacturing is the fragmentation of critical data across isolated systems. Yield data sits in one place, design diagnosis information in another, and equipment telemetry in yet another. This fragmentation blocks the correlation of volume yield data with physical layout features and prevents engineers from connecting specific process excursions with final yield outcomes.

One solution is extensive data integration efforts via a platform extending beyond traditional manufacturing analytics supported by a semiconductor-specific end-to-end data model.

Central to this effort is the development of a semiconductor-specific semantic data layer that maps the complex relationships between yield, design, process, and tool data. This allows alignment and linking data across domains and sources in the data platform. It also allows LLMs to interpret disparate data types as a unified whole rather than struggling with disconnected information sources.

Workflows as the foundation

A key architectural decision in the platform is to treat workflows as the internal language of the system. Every analytic operation—whether rules, machine learning pipelines, or batch analytics—is expressed as a workflow.

This provides several critical benefits. Workflows serve as the long-term memory of the system, capturing not just results, but the complete methodology used to achieve them. They can be created from learn mode, through LLMs, manually, or programmatically, and can be embedded within larger workflows for maximum reusability. Engineers may never need to directly interact with a workflow, but the capability is there when needed.

Critically, workflows act as semiconductor-specific content and context, encoding best practices as reusable playbooks. They provide transparency into how results are achieved and serve as guardrails for AI reasoning, helping prevent the hallucinations that can occur when LLMs operate without domain constraints.

The agentic LLM platform

The goal is to enable engineers to interact with manufacturing data at a higher level of abstraction. Rather than requiring deep technical knowledge of query languages and data structures, the result is a system where engineers can ask natural language questions and receive actionable insights.

Achieving this vision requires a “Semantic, Agentic, and Secure” infrastructure. The semantic layer is built on domain expertise, creating semiconductor-native knowledge graphs that encode the fundamental data hierarchy of manufacturing. This anchors LLM reasoning in the structural reality of manufacturing data, eliminating ambiguity and providing the ground truth context needed to prevent hallucinations.

For example, the system understands that CV refers to Characterization Vehicle, that yield represents the results of die binning, and that the data hierarchy flows from lot to wafer to die to package. It knows that common analytical tasks include yield trending, bin Pareto analysis, and univariate screening. This enables engineers to ask questions like “Show me the yield trend over the last week” or “What is the root cause of low yield in lot XX?” and receive meaningful, accurate responses.

The platform integrates a model context protocol for a truly agentic system. Rather than just summarizing text or answering questions, the system can autonomously plan and execute complete workflows from raw data ingestion through complex plot generation.

To ensure reliability and transparency, any agentic tasks are executed using scalable analytics workflows. They can be viewed, saved, and modified by engineers at any time to ensure total transparency of LLM actions.

Given the sensitivity of semiconductor manufacturing data, a fully air-gapped, on-premises LLM infrastructure option, designed for intellectual property sovereignty, can be added. This ensures that sensitive yield data and proprietary models never leave secure firewalls, eliminating reliance on third-party cloud providers.

The path forward

A platform like this requires thorough research and development on technology selection, validation and tuning, engaging a large group of architects, developers, quality assurance specialists, designers, and product managers.

This type of platform addresses the critical industry challenge: de-risking AI adoption by securely scaling execution and maximizing return on investment from legacy data, while simultaneously future-proofing infrastructure for the rapidly emerging age of LLMs and autonomous agents.

By combining massive-scale data processing, enterprise-grade operational infrastructure, intelligent data integration, and agentic LLM capabilities, all grounded in deep semiconductor domain expertise, the industry can be transformed. The platform demonstrates how value can be extracted from the exponentially growing volumes of manufacturing data.

The approach suggests a future where engineers spend less time wrestling with data infrastructure and more time solving the complex yield and quality challenges that define success in semiconductor manufacturing.

Peter L. Kostka is a Vancouver-based technology entrepreneur with a track record of scaling complex deep-tech concepts into successful commercial outcomes. Currently, he serves as the director of product management for AI at PDF Solutions, where he spearheads the AI technology roadmap and leads rapid prototyping for semiconductor and battery manufacturing sectors.

Editor’s Note

Presentations by Qualcomm’s Mike Campbell (“AI-Driven Innovation in the Semiconductor Industry”) and Intel’s Aziz Safa (“Enabling AI/ML strategy using the PDF Suite”) were given at the 2025 PDF Solutions Users Conference.

John Kibarian’s “Revolutionizing Semiconductor Collaboration: The Emergence of AI-Driven Industry Platforms” keynote was presented at SEMICON West 2025.

Related Content

The post A real-world approach for AI-driven semiconductor manufacturing appeared first on EDN.

Probing a USB analog audio adapter

Mon, 02/23/2026 - 21:14

How do engineers squeeze all the necessary circuitry (and what is it?) into one of these devices, and do so this inexpensively?

With the demise of analog audio line out, headphone (output-only), and headset (adding mic-in) jacks in modern electronics devices—computers, smartphones, tablets, and the like—alternative methods of connecting analog audio sources and destinations are becoming increasingly common. Bluetooth-based wireless mating is certainly one option:

but the audio peripheral must also be battery-powered (and therefore potentially charge-drained when you try to use it) in this case. And quality can also be hit-and-miss depending on the lossy codec options supported (and selected) at both ends of the connection, not to mention degradation resulting from other spectrum-overlapping broadcasters.

Diminutive wired adapters

The other common option involves instead leveraging the digital audio (plus power, along with other functions) connections that are still present in these devices. Admittedly, the Earstudio ES100 MK2 shown above can alternatively operate this way, too:

but that’s not the prevalent use case for this particular peripheral, which, anyway, is also no longer seemingly available for sale (I’ve got its successor queued up to discuss in the future). Plus, it was bulky and priced at $99; the Apple Lightning-to-3.5mm Headphone Adapter, shown below as usual (as well as with photos that follow) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

was only $9 when Apple was selling it (when I caught wind of the pending closeout, I bought up not only the one shown above but also a few others before inventory was depleted), not to mention being self-powered over Lightning and delivering remarkably solid audio performance (and squeezing in not only the ADC and DAC but also the necessary MFi certification circuitry).

Now that Apple has transitioned its devices to USB-C, both it and Google, along with others, offer(ed, in Google’s case) diminutive, cost-effective, and performant USB-C-based successors:

I found a two-pack of them on sale for $2.09 the other day, believe it or not:

And my wife even bought me a balanced headphones-supportive USB-C adapter for Christmas!

Size-simplified dissection

That said, with iFixit’s “rough” teardown results as a guide (after seeing how challenging a community member’s experience was, iFixit staff stuck with x-ray analysis for their own coverage), I was loath to tackle the dissection of one of these diminutive devices myself. Instead, today I’ll be showcasing something a “bit” bigger, albeit presumably based on the same fundamental building blocks: Sabrent’s USB to 3.5mm Jack Audio Adapter, which claims to support up to 24-bit/96-kHz high-res audio and cost me only $6.98 on Amazon last summer:

As the above stock photo shows, and unlike one of the earlier adapters that merges both headphone and microphone functions on a common connector, this one (akin to a computer sound card, which is its target use case) splits them into two jacks; a stereo one for audio out (96 dB SNR claimed) and a separate one for the mono audio input (90 dB). Plus, the manufacturer conveniently provided a preparatory conceptual cross-section diagram, too:

From past similar experience, however, I’ve learned that such graphics don’t necessarily match reality, so I’m still going to dig inside to satisfy my curiosity. Some box shots to start:

Open sesame:

Inside is the adapter, safely ensconced by rubberized foam padding:

along with a few snippets of literature:

The one on the left is just the usual legal gobbledygook, in multiple languages:

Here’s our patient, first the body:

Now both ends:

See, two connectors!

Don’t overcomplicate the disassembly

The body is a mix of plastic and aluminum…I didn’t realize at first:

that the latter went all the way around the outside:

No, Brian, there’s no screw holding the chassis pieces together; it’s a single-piece assembly from the start:

Duh:

That’s much easier:

With the front panel now popped off:

the PCB now pushes right out the front, following right behind it. Connectors on top:

And…whaddya know…for a pleasant change, the C-Media CM3271 USB audio controller shown in the earlier conceptual diagram actually matches what’s on the PCB underside!

It’s no longer listed on the supplier’s website, but I still found a datasheet (PDF).

I still don’t know how other USB audio adapter manufacturers squeeze all the necessary electronics into their even more diminutive devices, but I’m also still not confident that I would have gotten the answer to that question if I’d tried (versus simply obliterating the product in the process). I’m happy with this alternative approach and end result, and I hope you are too. Agree or disagree, let me know what you think in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Probing a USB analog audio adapter appeared first on EDN.

SAW filters made simple: A quick front-end primer

Mon, 02/23/2026 - 03:06

Surface acoustic wave (SAW) filters may sound exotic, but they are everyday workhorses in wireless front-ends. Compact, cost-effective, and reliable, they shape signals with precision while keeping designs simple.

This quick primer walks through the basics—what they do, why they matter, and how they fit into modern communication systems.

SAW filter fundamentals

SAW filters exploit the piezoelectric effect to convert electrical signals into acoustic waves and back again. At their core, they consist of two interdigital transducers (IDTs) patterned on a piezoelectric substrate. The input IDT launches acoustic waves from the incoming electrical signal, while the output IDT reconverts those waves into an electrical signal.

Together, they form a bidirectional transversal filter. Absorbers are placed at the ends of the substrate to suppress unwanted reflections, ensuring clean signal transmission and stable filter response.

Figure 1 Drawing illustrates the basic architecture of a SAW filter, with input/output IDTs transducing signals across a piezoelectric substrate, while absorbers suppress reflections. Source: Author

Note that each transducer launches acoustic waves in both directions, so only half of the signal power reaches the output. That accounts for a 3-dB loss per transducer; the combined insertion loss of the input and output transducers amounts to 6 dB.
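The 3-dB-per-transducer bookkeeping is simple power arithmetic, as this short sketch shows:

```python
import math

# Each bidirectional IDT radiates only half of its power in the useful
# direction, a factor of 0.5 that corresponds to ~3 dB; two transducers
# therefore account for ~6 dB of insertion loss.

def power_ratio_to_db(ratio):
    return -10.0 * math.log10(ratio)

per_transducer_db = power_ratio_to_db(0.5)
total_db = 2 * per_transducer_db
print(f"{per_transducer_db:.2f} dB per IDT, {total_db:.2f} dB total")
# 3.01 dB per IDT, 6.02 dB total
```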

Each transducer consists of periodic interdigital electrodes connected to two busbars, which link to the electrical source or load. The electrode length governs amplitude, electrode position sets phase, and electrode wavelength defines the operating frequency of the SAW filter.
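Since the electrode wavelength sets the operating frequency via f0 = v/λ, a quick sanity check is easy. The SAW velocity below is an assumed textbook value for ST-cut quartz, not a datasheet figure:

```python
# f0 = v / wavelength, where v is the substrate's SAW velocity. The velocity
# used here (~3158 m/s for ST-cut quartz) is an assumed textbook value.

def saw_center_frequency(velocity_m_per_s, wavelength_m):
    return velocity_m_per_s / wavelength_m

v = 3158.0       # m/s, assumed ST-quartz SAW velocity
lam = 7.28e-6    # m, electrode period (one acoustic wavelength)
print(f"{saw_center_frequency(v, lam) / 1e6:.0f} MHz")  # 434 MHz
```

In practice the designer picks the electrode pitch from the target frequency, so the same relation is used in reverse: λ = v/f0, here roughly 7.3 µm for the 433.92-MHz ISM band.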

On a historic note, surface acoustic waves were first described by Lord Rayleigh in 1885 and are therefore often called Rayleigh waves. In his classic paper, Rayleigh predicted their propagation properties, noting that SAWs contain both longitudinal and vertical shear components that couple with the medium at the surface.

Their energy is confined to the substrate surface. Because SAWs are accompanied by electrostatic fields, electroacoustic conversion can be achieved through IDTs. Shaped like crossed fingers, these electrodes launch and receive the waves, forming the basis of modern SAW devices.

SAW filter applications

Due to their excellent selectivity, low insertion loss, and compact size, SAW filters have become indispensable across modern RF systems. In mobile communication devices such as smartphones, base stations, and repeaters, they suppress interference and maintain clean signal channels.

Wireless LAN and Bluetooth modules rely on them to preserve frequency integrity and reduce crosstalk, while GPS receivers use SAW filters for precise frequency selection that enhances location accuracy. In broadcasting and television tuners, they improve signal quality and selectivity.

Beyond consumer electronics, SAW filters are widely adopted in IoT devices, automotive electronics, and satellite communication systems, where their reliability and small footprint make them a cornerstone of high-performance RF design.

As a familiar practical example, I remember 38.9 MHz SAW filters were a staple in television receivers, serving as intermediate‑frequency (IF) filters in tuner modules. They provided sharp selectivity for separating video and audio signals, ensuring clear picture and sound quality. In fact, paired designs often used a 38.9 MHz SAW filter for the video IF and a companion filter around 33.4 MHz for the audio IF, enabling precise audio separation in PAL/SECAM systems.

Beyond TVs, the same frequency was also used in audio IF stages of broadcast receivers and set‑top boxes, where the compact size and stable response of SAW filters made them a reliable choice for consumer electronics.

The figure below shows a niche and potentially legacy 38.9-MHz SAW filter used in PAL/SECAM television receivers as the video IF filter. In these systems, the filter provides sharp selectivity to isolate the video carrier, while a companion SAW filter at 33.4 MHz is employed for the audio channel.

Figure 2 A 38.9-MHz SAW filter shows its pinout and package design for television receiver applications. Source: Author

Together, this pair enabled precise separation of picture and sound in analog TV tuners, with the compact package and stable frequency response making SAW filters the standard choice in consumer television receivers.

As a quick aside, dual-output SAW filters were also in use at that time, designed to handle both picture and sound carriers simultaneously. The picture IF carrier was set at 38.90 MHz, while the sound IF carrier was offset at 33.4 MHz, reflecting the 5.5 MHz spacing defined in PAL/SECAM systems.
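That 5.5-MHz figure is just the difference of the two IF carriers:

```python
# The quoted PAL/SECAM picture/sound spacing is the carrier difference:
picture_if_mhz = 38.90
sound_if_mhz = 33.40
print(f"{picture_if_mhz - sound_if_mhz:.1f} MHz")  # 5.5 MHz
```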

SAW filter practice pointers

This section offers some practical pointers on working with SAW filters, based on their established role in communication and signal-processing systems.

Recall that SAW filters operate on the principle of the piezoelectric effect: an applied voltage induces a mechanical wave on a crystal, while mechanical pressure conversely produces a change in potential difference. When an RF voltage is applied to the input transducers, it generates an acoustic surface wave that travels across the crystal to the output transducer, where it’s reconverted into an electrical signal.

By carefully designing the electrodes—typically comb-shaped with interlocking fingers—engineers can tailor frequency transmission characteristics through precise control of finger size, number, and spacing.

Compared with conventional filters that rely on coils and capacitors, SAW filters are smaller, more affordable, and offer superior long-term stability. They require no tuning and deliver significantly better performance, which explains their widespread adoption in color television sets and video recorders worldwide.

Beyond these, SAW components are also integral to satellite receivers, cordless phones, mobile devices, automotive keyless entry systems, garage door openers, and numerous other applications.

Next, a SAW resonator is a key component in low-cost 433 MHz RF modules. It’s used in the transmitter module as a precise, fixed-frequency oscillator to ensure stable operation at 433.92 MHz within the unlicensed ISM band.

Figure 3 SAW resonator enables a compact, low-cost architecture for 433-MHz RF transmission. Source: Author

Getting into the criteria for choosing a SAW filter: many specifications must be carefully evaluated. Key parameters include the center frequency, bandwidth, insertion loss, and out-of-band rejection, since these directly determine how well the filter isolates the desired signal from interference. Group delay and passband flatness are also critical for maintaining signal integrity, especially in communication systems where timing accuracy affects bit error rates.

Designers must further consider package size, environmental stability, and repeatability, ensuring the filter performs reliably under temperature variations and mechanical stress. Finally, cost, availability, and compliance with regulatory standards often guide the final choice, balancing performance with practical constraints.
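One way to apply these criteria is a mechanical spec check. The part limits and candidate values below are hypothetical, purely to illustrate the screening step, not taken from any real datasheet:

```python
# Hypothetical screening helper: the requirement limits and candidate specs
# below are illustrative only, not values from a real SAW filter datasheet.

REQUIREMENTS = {
    "center_mhz": 480.0,            # target center frequency
    "max_insertion_loss_db": 4.0,   # worst acceptable insertion loss
    "min_rejection_db": 30.0,       # minimum out-of-band rejection
}

def meets_requirements(spec):
    return (abs(spec["center_mhz"] - REQUIREMENTS["center_mhz"]) < 0.5
            and spec["insertion_loss_db"] <= REQUIREMENTS["max_insertion_loss_db"]
            and spec["rejection_db"] >= REQUIREMENTS["min_rejection_db"])

candidate = {"center_mhz": 480.0, "insertion_loss_db": 3.5, "rejection_db": 35.0}
print(meets_requirements(candidate))  # True
```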

Figure 4 A sample datasheet snip highlights the operating conditions and electrical characteristics of a randomly picked 480-MHz SAW filter. Source: ECS Inc.

Side note: The ECS-D480A 480 MHz SAW filter is now obsolete, yet it remains a useful reference for understanding how compact SAW devices were once applied in RF systems. At this frequency, such filters were typically deployed in satellite receiver intermediate-frequency stages, where sharp band-pass selectivity was critical after down-conversion.

They also found roles in wireless communication front-ends and certain measurement instruments, valued for their ability to provide narrowband filtering and suppress adjacent channel interference. Do not panic about this obsolescence—SAW filters are still widely available today from multiple vendors, offered in both thru-hole and, more commonly, SMD form for modern RF and wireless applications.

And, integrated SAW filters enable multi-channel usage within a single radio front-end, allowing several selective paths to be consolidated into one compact device. This integration reduces board space, simplifies design, and supports efficient handling of multiple frequency bands in modern receivers.

There are voltage-controlled SAW oscillators (VCSOs) as well, which add electrical tunability to the otherwise fixed-frequency concept. By applying a control voltage, their oscillation frequency can be shifted, making them valuable in agile radios, test instruments, and wireless platforms that demand dynamic channel agility and adaptive interference suppression.

Moreover, SAW filters operate along the surface of the substrate, making them well-suited for mid-band frequencies and compact designs. Around the early 2000s, bulk acoustic wave (BAW) filters were introduced, driving acoustic waves through the bulk of the material to reach higher operating frequencies and stronger power handling.

In practice, SAW devices remained the mainstay for intermediate-frequency stages and mid-band wireless, while BAW devices gradually took hold in high-frequency front-ends such as LTE, 5G, and Wi-Fi.

Next steps

SAW filters carry a distinctive experimental appeal in ham radio, where their sharp selectivity and compact footprint make them ideal for signal-chain exploration—even though their primary role has long been in commercial systems.

Anyway, they are not a casual undertaking for hobbyists: working at these frequencies demands care, proper instrumentation, and patience. Still, salvaged parts from old TV boards and consumer gear can provide a practical gateway into serious tinkering.

While this serves as a quick wrap-up—with more to explore another time—it’s clear that engineers are naturally drawn to SAW filters for their importance in frequency-domain design and their resonance with ham radio practice. Yet curious builders should not hesitate—experiment, learn, and share. The community thrives on grassroots exploration, and your work could well spark the next wave of practical insights.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post SAW filters made simple: A quick front-end primer appeared first on EDN.

Burning power lines

Fri, 02/20/2026 - 15:00

There was one heck of a scary item in the news recently. The following screenshots were taken from a video that was recorded in the teeth of recent inclement weather. Overhead power lines had actually caught fire.

Figure 1 Overhead power lines that caught fire during inclement weather in Brooklyn, NY.

It looks to me like the photographer captured, in the center image, the exact moment when high winds produced points of simultaneous ignition of what I suspect was flammable insulation material surrounding a copper center conductor. I further suspect that decades of weathering had caused that insulation material to deteriorate, so that when high winds brought wires into contact, the resulting sparking ignited the insulation.

At one time, I read about overhead power line fires being a threat as a result of monk parakeets making nests up there. I’ve seen those birds, and I’ve seen some of their enormous nests as well, but this situation clearly had nothing to do with them. This situation was strictly man-made.

This incident took place in Brooklyn, NY, but it seems likely that danger of this sort is widespread around the nation and around the world.

John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content

The post Burning power lines appeared first on EDN.

Logarithmic amplifiers: A quick tour of theory and practice

Fri, 02/20/2026 - 11:08

In this post, we will take a gentle dive into logarithmic amplifiers—commonly known as log amps—those quietly powerful circuits that work behind the scenes to decode exponential signals and tame wide dynamic ranges.

Log amps: Basics and building blocks

To set the stage, a logarithmic amplifier is an electronic circuit that produces an output voltage proportional to the logarithm of its input signal, whether voltage or current. By using the exponential electrical behavior of semiconductor junctions—typically diodes or bipolar junction transistors—log amps offer an elegant way to compress signals that span a wide dynamic range, such as those from photodiodes, radio frequency detectors, or audio sensors, into a more manageable scale.

Turning to log amp architecture, these specialized circuits produce an output voltage proportional to the logarithm of the input signal amplitude. Three fundamental architectures are commonly employed to realize log amps: the basic diode log amp, the successive detection log amp, and the true log amp, which is implemented using cascaded semi-limiting amplifiers.

In the simplest form, the diode log amp exploits the exponential current–voltage relationship of a silicon diode. Since the voltage across a diode is proportional to the logarithm of the current flowing through it, placing the diode in the feedback path of an inverting operational amplifier allows the circuit to generate an output voltage proportional to the logarithm of the input current.

Figure 1 Circuit diagram illustrates the basic setup of an op-amp-based logarithmic amplifier with a diode. Source: Author
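The idealized transfer of the Figure 1 circuit is Vout = −VT·ln(Iin/Is). The sketch below uses assumed textbook values for the thermal voltage VT and the diode saturation current Is (not measured parameters) to show the classic ~60-mV-per-decade slope:

```python
import math

# Idealized transfer of the diode-feedback stage: Vout = -VT * ln(Iin / Is).
# VT (~25.85 mV near 300 K) and the saturation current Is are assumed
# textbook values, not measured device parameters.

VT = 0.02585   # thermal voltage, volts
I_S = 1e-12    # assumed diode saturation current, amps

def log_amp_vout(i_in_amps):
    return -VT * math.log(i_in_amps / I_S)

# A one-decade step in input current shifts the output by VT*ln(10):
delta_v = log_amp_vout(1e-6) - log_amp_vout(1e-5)
print(f"{delta_v * 1000:.1f} mV per decade")  # 59.5 mV per decade
```

That fixed millivolts-per-decade slope is exactly the compression property that lets a log amp map many decades of input current onto a volt-scale output.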

However, this basic configuration suffers from limited dynamic range and strong temperature dependence. These issues are commonly addressed by using diode-connected transistors (see figure below) or matched transistor pairs with temperature-compensation techniques, which extend the usable range and stabilize the logarithmic response.

Figure 2 Circuit diagram depicts the basic setup of an op-amp-based logarithmic amplifier with a diode-connected transistor. Source: Author

Here, note that the base of the transistor is grounded, effectively matching the virtual ground at the op-amp’s inverting input.

Successive detection log amps improve performance by using a chain of detectors that progressively measure signal levels, providing better accuracy and wider dynamic range.

True log amps, on the other hand, employ cascaded semi-limiting amplifiers to approximate the logarithmic response more faithfully across a broad frequency spectrum, making them particularly useful in RF and instrumentation applications.

Beyond their circuit topologies, log amps are distinguished by performance factors such as dynamic range, accuracy, bandwidth, and temperature stability. Simple diode-based designs are attractive for their ease of implementation, but they quickly run into limits of precision and thermal drift.

Integrated log amp ICs and true log architectures, by contrast, deliver superior linearity, wider operating ranges, and better stability across frequency and temperature. These strengths make log amps indispensable in real-world applications: compressing optical signals from photodiodes, measuring RF power levels in communication systems, shaping audio dynamics in compressors and level meters, and handling biomedical signals that span several orders of magnitude.

In each case, the ability to tame wide-ranging inputs into a manageable scale is what makes the logarithmic amplifier such a versatile tool.

When it comes to practical design, selecting the right log amp architecture depends on the signal environment and accuracy requirements. For low-frequency or moderate dynamic-range applications, a diode-connected transistor stage may suffice, if temperature compensation is included.

In RF systems, successive detection log amps are often favored for their speed and wide bandwidth, while true log amps excel when precise linearity across many decades of input is critical. Designers must also weigh trade-offs in noise performance, offset errors, and calibration complexity, as these factors directly influence measurement fidelity. Ultimately, the choice of implementation reflects a balance between simplicity, precision, and the demands of the target application.

Log amps in practice

Having explored the basics, let us now take a brief walk on the practical side. Logarithmic amplifiers are found not only in professional instrumentation but are also accessible to hobbyists and makers who enjoy experimenting with signal compression. For engineers, log amp ICs and modules provide reliable building blocks for RF measurement, optical detection, or audio dynamics.

For makers, evaluation boards and simple circuits using diode-connected transistors offer approachable ways to see logarithmic behavior firsthand without complex design overhead. While these options are not exhaustive, they illustrate how log amps move from textbook principles into real hardware, serving both the precision needs of engineers and the curiosity of hobbyists.

As a quick recall, logarithmic amplifiers can be grouped into diode-based designs that rely on the exponential I–V characteristic of diodes, transistor-based circuits that exploit the exponential base-emitter relationship in BJTs for greater precision, and multi-stage demodulating log amps that cascade gain and detector stages to achieve very wide dynamic ranges in RF and IF measurement.

Another group relates to the specialized DC/baseband-demodulating log amps that extend operation all the way down to DC, making them particularly useful for envelope detection, accurate power measurement, and wideband or baseband signal analysis.

Back to the lineup of popular log amp ICs, the trend is clear: newer designs lean heavily on high-speed, precision CMOS and BiCMOS technology, while many classic bipolar parts are being retired. The AD606 and TL441 devices now sit in the legacy category; TI lists the TL441 as active for existing designs but not recommended for fresh projects, and AD606 has largely been replaced by newer RF-focused families.

On the other hand, TI’s LOG114, LOG200, and the high-speed LOG300 remain in full production, serving demanding optical and medical sensing applications with wide dynamic range. Analog Devices also continues to back the AD8307 and AD8310 devices, which have become go-to choices for RF power measurement, thanks to their stability, accuracy, and broad availability.

Log-amp modules built around AD606 can still be found from a few niche suppliers, but they are increasingly rare and best suited for maintaining older RF projects. For newcomers or experimenters, modules based on the AD8307 and AD8310 are far more practical picks.

They are widely available, inexpensive, and offer excellent stability across frequency and temperature, making them ideal for getting your feet wet with RF power measurement, signal monitoring, or even DIY spectrum-related builds. Their straightforward interfaces and robust documentation also make them a smart starting point for hobby labs and quick prototypes.

Figure 3 Readily available modules like the AD8307 RF log detector simplify RF power measurement for engineers and hobbyists alike. Source: Author

Now recall that the classic diode/op-amp (or transistor/op-amp) log amplifier suffers from limited frequency response, particularly at low signal levels. For higher-frequency applications, designers turn instead to detector-based and true log architectures.

While these differ in detail, they share a common principle: rather than relying on a single amplifier with a logarithmic transfer characteristic, they employ a cascade of similar linear stages, each with well-defined large-signal behavior, to achieve accurate logarithmic response.

Closing line

Let me say this plainly: after experimenting with discrete log-amp circuits, the most straightforward integrated step for hobbyists is the classic DC log-amp application—measuring light intensity. Optical logging setups are easily built by placing a photodiode at the input of the log amp, and a device such as MAX4206 makes a practical choice in this case.

This post focused on logarithmic amplifiers; I have not covered antilog amplifiers here, leaving that exploration to readers who wish to dive deeper. If you have worked with log amps—or even experimented with photodiode setups—share your experiences, design tips, or favorite chips to help fellow engineers and hobbyists refine their own signal-logging projects.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Logarithmic amplifiers: A quick tour of theory and practice appeared first on EDN.

Automotive high-side driver withstands cold crank

Thu, 02/19/2026 - 18:27

ST’s VNQ9050LAJ 4-channel high-side driver controls 12-V automotive ground-connected loads via a 3-V and 5-V CMOS-compatible interface. Operating from 4-V to 28-V with typical RDS(on) as low as 50 mΩ per channel, the device remains active during cold-crank events until the supply falls to the 2.7-V (max) undervoltage shutdown threshold. This performance supports compliance with LV124 (Rev. 2013) requirements for low-voltage operation and automotive transients.

Based on ST’s VIPower-M09 technology, the driver protects resistive, capacitive, and inductive loads. Integrated current sensing uses an on-chip current mirror with a sense FET that tracks the main power FET, enabling accurate load monitoring. The sensed current is available at an external pin, where a resistor converts it to a proportional voltage for continuous diagnostics and fault detection.

The VNQ9050LAJ offers robust protection and diagnostics for 12‑V automotive loads. It features integrated current sensing for overload, short-circuit, and open-load detection. The driver also includes overvoltage clamping, thermal-transient limiting, and configurable latch-off for overtemperature or power limitation, with a dedicated fault-reset pin. Additional protections—such as electrostatic discharge, loss-of-ground, loss-of-VCC, and reverse-battery—ensure reliable operation under extreme conditions.

The VNQ9050LAJ is in production in a thermally enhanced Power-SSO16 package, priced from $1.09 each for 1000-piece orders.

VNQ9050LAJ product page 

STMicroelectronics

The post Automotive high-side driver withstands cold crank appeared first on EDN.

Embedded capacitors improve AI/HPC power delivery

Thu, 02/19/2026 - 18:27

Empower has launched three embedded silicon capacitors (ECAPs) for AI and high-performance computing (HPC) processors. The portfolio includes the EC2005P (9.34 μF in a 2×2-mm package), EC2025P (18.68 μF in a 4×2-mm package), and EC2006P (36.8 μF in a 4×4-mm package). These components are designed for integration into processor substrates to support elevated current density and fast transient load demands.

As AI and HPC workloads increase, conventional board-mounted capacitors struggle to maintain low impedance and fast response. These ECAP devices provide high capacitance density with ultralow equivalent series inductance (ESL) and resistance (ESR), improving power delivery network (PDN) performance when embedded close to the die. Tight dimensional tolerances ensure compatibility with advanced packaging flows.

The ECAP portfolio also supports vertical power delivery architectures, including Empower’s Crescendo platform, to reduce loop inductance and system footprint. The devices provide a scalable approach for integrating silicon capacitance directly within processor packages.

The EC2005P, EC2025P, and EC2006P ECAPs are now in mass production. Learn more about the ECAP portfolio here.

Empower Semiconductor 

The post Embedded capacitors improve AI/HPC power delivery appeared first on EDN.

Samsung leads with HBM4 DRAM performance

Thu, 02/19/2026 - 18:27

Samsung has begun mass production and commercial shipments of its HBM4 DRAM, marking what it describes as an industry first. Built on Samsung’s 6th-generation 10-nm-class DRAM process with a 4-nm logic base die, this high-bandwidth memory is optimized for performance, reliability, and energy efficiency in AI, HPC, and datacenter applications.

Samsung’s HBM4 delivers a consistent transfer speed of 11.7 Gbps — roughly 46% faster than the 8-Gbps industry standard and a 1.22× improvement over the 9.6-Gbps maximum of HBM3E. Memory bandwidth per single stack reaches up to 3.3 TB/s, a 2.7× increase over HBM3E. Current 12-layer stacking enables capacities from 24 GB to 36 GB, with future 16-layer stacks projected to expand offerings up to 48 GB.

To handle the doubled data I/Os from 1024 to 2048 pins, advanced low-power techniques were applied to the core die. Samsung’s HBM4 improves power efficiency by 40% via low-voltage TSVs and optimized power distribution, offers 10% better thermal resistance, and increases heat dissipation by 30% over HBM3E, ensuring reliable high-performance operation.

For more details on this announcement, see Samsung’s press release. Explore the broader HBM portfolio here.

Samsung Semiconductor 

The post Samsung leads with HBM4 DRAM performance appeared first on EDN.

Software accelerates 3D interconnect design

Thu, 02/19/2026 - 18:27

The Keysight Chiplet 3D Interconnect Designer automates the design of 3D interconnects for chiplet and 3DIC advanced packages. By removing time-consuming manual steps, the tool streamlines the optimization of complex interconnect structures—including vias, transmission lines, solder balls, and micro-bumps—while ensuring signal and power integrity in densely packed systems.

Part of Keysight’s EDA portfolio, the software provides a pre-layout workflow for advanced multi-die integration, UCIe compliance, automated routing, and robust simulation capabilities. It handles complex geometries—including hatched or waffled ground planes—that are critical for addressing manufacturing and fabrication constraints, particularly in silicon interposers and bridges.

The software can operate independently or alongside Keysight’s other EDA tools, enabling teams to seamlessly incorporate 3D interconnect workflows into their existing design environments.

To learn more about the Keysight Chiplet 3D Interconnect Designer (W3510E) and request a quote, visit the product page linked below.

W3510E product page 

Keysight Technologies 

The post Software accelerates 3D interconnect design appeared first on EDN.

Navitas tightens SiC losses with refined TAP

Thu, 02/19/2026 - 18:27

Navitas Semiconductor has announced its 5th-generation GeneSiC platform featuring high-voltage trench-assisted planar (TAP) SiC MOSFETs, describing it as a significant advancement over previous generations. The new 1200-V MOSFET line complements Navitas’ ultra-high-voltage 2.3-kV and 3.3-kV devices based on its 4th-generation GeneSiC technology.

The latest generation incorporates the company’s most compact TAP architecture to date, combining planar-gate ruggedness with trench-enabled performance gains to improve efficiency and long-term reliability. It targets high-voltage applications including AI data centers, grid and energy infrastructure, and industrial electrification.

Compared with the prior 1200-V devices, the new generation delivers a 35% improvement in RDS(on) × QGD figure of merit, reducing switching losses and enabling cooler, higher-frequency operation. About a 25% improvement in QGD/QGS ratio, together with a stable high threshold voltage (VGS,TH ≥ 3 V), strengthens switching robustness and improves immunity to parasitic turn-on in high-noise environments.

Navitas expects to introduce products based on its 5th-generation technology in the coming months. For additional information, contact a Navitas representative or email info@navitassemi.com.

Navitas Semiconductor

The post Navitas tightens SiC losses with refined TAP appeared first on EDN.

Using integration and differentiation in an oscilloscope

Thu, 02/19/2026 - 15:00

Modern digital oscilloscopes offer a host of analysis capabilities since they digitize and store input waveforms for analysis. Most oscilloscopes offer basic math operations such as addition, subtraction, multiplication, division, ratio, and the fast Fourier transform (FFT). Mid- and high-end oscilloscopes offer advanced math functions such as differentiation and integration. These tools let you solve differential equations that you probably hated in your days as an engineering student. They are used the same way today in your oscilloscope measurements. Here are a few examples of oscilloscope measurements that require differentiation and integration.

Measuring current through a capacitor based on the voltage across it

The current through a capacitor can be calculated from the voltage across it using this equation:

Ic(t) = C × dVc(t)/dt

The current through a capacitor is proportional to the rate of change, or derivative, of the voltage across it.  The constant of proportionality is the capacitance.  A simple circuit can be used to show how this works (Figure 1).

Figure 1 A signal generator supplies a sine wave as Vin(t).  The oscilloscope measures the voltage across the capacitor. Source: Art Pini

In this simple series circuit, the current can be measured by dividing the voltage across the resistor by its value.  The oscilloscope monitors the voltage across the capacitor, Vc(t), and the voltage Vin(t). Taking the difference of these voltages yields the voltage across the resistor. The current through the resistor is calculated by rescaling the difference by multiplying by the reciprocal of the resistance. The voltage across the capacitor is acquired and differentiated. The rescale function multiplies the derivative by the capacitance to obtain the current through the capacitor (Figure 2).

Figure 2 Computing the current in the series circuit using two different measurements. Source: Art Pini

Vin(t) is the top trace in the figure; it is measured as 477.8 mV RMS by measurement parameter P1, and it has a frequency of 1 MHz. Below it is Vc(t), the voltage across the capacitor, with a value of 380.2 mV RMS, as read in parameter P2. The third trace from the top, math trace F1, is the current based on the voltage drop across the resistor, which is measured as 5.718 mA RMS in parameter P3. The bottom trace, F2, shows the capacitor current, Ic(t), at 5.762 mA.   

Parameter P6 reads the phase difference between the capacitor current and voltage traces F2 and M2, respectively. The phase is 89.79°, which is very close to the theoretically expected 90°.

Parameters P7 through P9 use parameter math to calculate the percentage difference between the currents measured by the two different measurements. It is 0.7%, which is respectable for the component tolerances used. Comparing the two current waveforms, we can see the differences (Figure 3).

Figure 3 Comparing the current waveforms from the two different measurement processes. Source: Art Pini

The two current measurement processes are very similar.  Differentiating the capacitor voltage is somewhat noisier. This is commonly observed when using the derivative math function.  The derivative is calculated by dividing the difference between adjacent sample values by the sample time interval. The difference operation tends to emphasize noise, especially when the rate of change of the signal is low, as on the peaks of the sine wave. The noise spikes at the peaks of the derivative signal are obvious.  Maximizing the signal-to-noise ratio of differentiated waveforms is good practice. This can be done by filtering the signal before the math operation using the noise filters in the input channel.
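The differentiate-and-rescale chain can be sketched numerically in a few lines. The values below (1 nF, a 1-MHz sine sampled at 100 MS/s) are illustrative assumptions, not the measured circuit from the figures.

```python
import numpy as np

# Emulate the oscilloscope math: Ic(t) = C * dVc(t)/dt, from samples.
# All values are illustrative assumptions.
C  = 1e-9                        # capacitance, F
fs = 100e6                       # sample rate, S/s
f  = 1e6                         # sine frequency, Hz
t  = np.arange(0, 10e-6, 1/fs)   # ten cycles of samples
vc = 0.5 * np.sin(2*np.pi*f*t)   # capacitor voltage, V

ic = C * np.gradient(vc, 1/fs)   # numerical derivative scaled by C

# For a sine, the current peak should be C * 2*pi*f * Vpeak
ic_peak_expected = C * 2*np.pi*f * 0.5
```

As noted above, band-limiting vc before the gradient step, for example with the scope's input noise filters, markedly improves the signal-to-noise ratio of the computed derivative.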

Measuring current through an inductor based on the voltage across it.

A related mathematical operation, integration, can be used to determine the current through an inductor from the integral of the inductor’s voltage:

IL(t) = (1/L) × ∫ VL(t) dt

Another series circuit, this time with an inductor, illustrates the mathematical operations performed on the oscilloscope (Figure 4).

Figure 4 A signal generator supplies a sine wave as Vin(t).  The oscilloscope measures the voltage across the inductor, VL(t). Source: Art Pini

The oscilloscope is configured to integrate the voltage across the inductor, VL(t), and rescale the integral by the reciprocal of the inductance. Changing the units to Amperes completes the process (Figure 5).

Figure 5 Calculating the current in the series circuit using Ohm’s law with the resistor and integrating the inductor voltage. Source: Art Pini

This process also produces similar results.  The series current calculated from the resistor voltage drop is 6.625 mA, while the current calculated by integrating the inductor voltage is 6.682 mA, a difference of 0.057 mA. The phase difference between the inductor current and voltage is -89.69°.

The integration setup requires adding a constant of integration, thereby imposing an initial condition on the current. Since integration is a cumulative process, any offset will generate a ramp function. The constant in the integration setup must be adjusted to produce a level response if the integration produces a waveform that slopes up or down.
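The ramp effect described above is easy to demonstrate numerically. In the sketch below (100 µH, a 1-MHz sine carrying a deliberate 2-mV offset, sampled at 100 MS/s, all assumed values), removing the offset before integrating keeps the computed current bounded instead of drifting.

```python
import numpy as np

# IL(t) = (1/L) * integral of VL(t) dt, with and without offset removal.
# All values are illustrative assumptions.
L  = 100e-6                            # inductance, H
fs = 100e6                             # sample rate, S/s
f  = 1e6                               # sine frequency, Hz
dt = 1/fs
t  = np.arange(0, 10e-6, dt)           # ten full cycles
vl = 0.5*np.sin(2*np.pi*f*t) + 0.002   # inductor voltage with a 2-mV offset

i_raw   = np.cumsum(vl) * dt / L              # the offset integrates into a ramp
i_fixed = np.cumsum(vl - vl.mean()) * dt / L  # offset removed: stays bounded

drift = i_raw[-1]                      # accumulated ramp over the record
```

Subtracting the mean here plays the same role as adjusting the constant of integration on the oscilloscope until the integrated trace stops sloping.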

Magnetic measurements hysteresis plots

The magnetic properties of inductors and transformers can be calculated from the voltage across and the current through the inductor. The circuit in Figure 4, with appropriate input and resistance settings, can be used. Based on these inputs, the inductor’s magnetic field strength, usually represented by the symbol H, can be calculated from the measured current:

H = (n × IL) / l

Where: H is the magnetic field strength in Amperes per meter (A/m)

IL is the current through the inductor in Amperes

n is the number of turns of wire about the inductor core

l is the magnetic path length in meters        

The oscilloscope calculates the magnetic field strength by rescaling the measured inductor current. 

The magnetic flux density, denoted B, is computed from the voltage across the inductor:

B = (1/(n × A)) × ∫ VL(t) dt

Where B is the magnetic flux density in Teslas

VL is the voltage across the inductor

n is the number of turns of wire about the inductor core

A is the cross-sectional area of the magnetic core in square meters (m²)

The flux density is proportional to the integral of the inductor’s voltage. The constant or proportionality is the reciprocal of the product of the number of turns and the magnetic cross-sectional area. These calculations are built into most oscilloscope power analysis software packages, which use them to display the magnetic hysteresis plot of an inductor (Figure 6).

Figure 6 A power analysis software package calculates B and H from the inductor voltage and current and the geometry of the inductor. Source: Art Pini

The analysis software prompts the user for the inductor geometry, including n, A, and l. It integrates the inductor voltage (top trace) and scales the integral using the constants to obtain the flux density B (second trace from the top). The current (third trace from the top) is rescaled to obtain the magnetic field strength (bottom trace). The flux density (B) is plotted against the field strength (H) to obtain the hysteresis diagram.
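The two rescaling steps can be mirrored in a few lines of code. The geometry below (n = 50 turns, l = 0.05 m, A = 2 × 10⁻⁵ m²) and the waveforms are assumed for illustration only, not taken from the figure.

```python
import numpy as np

# H from the rescaled inductor current, B from the integrated voltage.
# Geometry and waveform values are illustrative assumptions.
n, l, A = 50, 0.05, 2e-5            # turns, path length (m), core area (m^2)
fs, f = 100e6, 1e6                  # sample rate (S/s), frequency (Hz)
dt = 1/fs
t = np.arange(0, 2e-6, dt)          # two cycles
i_l = 0.006 * np.sin(2*np.pi*f*t)   # inductor current, A
v_l = 0.5   * np.cos(2*np.pi*f*t)   # inductor voltage, V (leads current by 90 deg)

H = n * i_l / l                                 # field strength, A/m
B = np.cumsum(v_l - v_l.mean()) * dt / (n * A)  # flux density, T

# Plotting B against H sample-by-sample traces the (ideal, lossless) B-H curve
H_peak = H.max()
```

A real core would show an opening between the up-going and down-going branches of the B-H trace; the area of that opening is the per-cycle core loss discussed next.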

Area within an X-Y plot

Many applications involving cyclic phenomena result in the need to determine the area enclosed by an X-Y plot. The magnetic hysteresis plot is an example. The area inside a hysteresis plot represents the energy loss per cycle per unit volume in a magnetic core. The area within an X-Y plot can be calculated based on the X and Y signals.  The oscilloscope acquires both traces as a function of time, t. The variables can be changed in the integral to calculate the area based on the acquired traces:

Area = ∮ Y dX = ∫ Y(t) × (dX/dt) dt, evaluated over one cycle

Note that both integration and differentiation are involved in this calculation. To implement this on an oscilloscope, we need to differentiate one trace, multiply it by the other, and integrate the result.  The integral, evaluated over one cycle of the periodic waveform, equals the area contained within the X-Y plot. Here is an example using an XY plot that is easy to check (Figure 7).

Figure 7 Using a triangular voltage waveform and a square wave current waveform, the X-Y plot is a rectangle. Source: Art Pini

The area enclosed by a rectangular X-Y plot is easy to calculate based on the cursor readouts, which measure the X and Y ranges. The relative cursors are positioned at diagonally opposed corners, and the amplitude readouts for the cursors for each signal appear in the respective dialog boxes. The X displacement, the rectangle’s base, is 320.31 mV, and the Y displacement, the rectangle’s height, is 297.63 mA.  The area enclosed within the rectangle is the product of the base times the height, or 95.33 mW.

Taking the derivative of the voltage signal on channel 1 yields a square wave. Multiplying it by the current waveform in channel 2 and integrating the product yields a decaying ramp (Figure 8).

Figure 8 The integrated product is measured over one input waveform cycle to obtain the area within the X-Y plot. Source: Art Pini

The area of the X-Y plot is read as the difference in the amplitudes at the cursor locations. This is displayed in the dialog box for the math trace F2, where the integral was calculated. The difference is 95.28 mW, which is almost identical to the product of the base and height. The advantage of this method is that it works regardless of the shape of the X-Y plot. 
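The whole procedure is easy to verify in code. The sketch below builds the same kind of triangle/square pair, computes ∮ Y dX numerically, and checks it against base × height; the amplitudes are chosen to mimic the figure but are purely illustrative.

```python
import numpy as np

# Area inside an X-Y plot: differentiate X, multiply by Y, integrate over
# one cycle. For a triangle-wave X and a square-wave Y the plot is a
# rectangle, so the result must equal base * height. Values are assumed.
N  = 1000
dt = 1.0 / N                             # one cycle normalized to 1 s
ph = np.arange(N) * dt                   # phase, 0..1

x = 0.160 * (1 - 4*np.abs(ph - 0.5))     # triangle, peak-to-peak 0.320 V
y = np.where(ph < 0.5, 0.1488, -0.1488)  # square, peak-to-peak 0.2976 A

dx = np.diff(np.append(x, x[0]))         # cyclic first difference of X
area = abs(np.sum(y * dx))               # loop integral of Y dX over one cycle

base_times_height = 0.320 * 0.2976       # rectangle area for comparison, W
```

The loop-integral result matches the rectangle's base × height, and, as the article notes, the same computation works for an X-Y plot of any shape.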

Practical examples

These are just a few practical examples of applying an oscilloscope’s integration and differentiation math to common electrical measurements that yield insights into a circuit’s behavior that are not directly measurable.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

Related Content

The post Using integration and differentiation in an oscilloscope appeared first on EDN.

Preemptive utilities shutdown oversight: Too much, too little, or just right?

Thu, 02/19/2026 - 15:00

Preventing fires and other calamities by proactively shutting off power in advance of inclement weather is dependent on forecast precision; customers’ needs should also be a considered factor.

Following up on my prior blog post, wherein I detailed my “interesting” mid-November, I’ll now share that mid-December was “interesting” as well, albeit for a different reason.

I’ve mentioned before that my residence in the Rocky Mountain foothills west of Denver, CO, is no stranger to inclement weather. Mid-year monsoon storms are a regular presence, for example, such as a September 2024 example that, like 2014 and 2015 predecessors, zapped various electronic devices, leaving them useful only as teardown patients going forward.

Everyone knows it’s Windy

More generally, it tends to be “breezy” here, both on a sustained and (especially) gusty basis. See for example the multi-day screenshots I snagged as I was preparing to work on this writeup:

That said, mid-December 2025 was especially crazy. On Monday, December 15, Xcel Energy began warning of a potential preparatory outage beginning that same Wednesday, initially affecting approximately a half-million customers; the estimate was downgraded a day later to roughly 50,000 (including us). Additional outages remained possible if conditions warranted, and they did materialize as a result of high-wind damage (the affected total that day ended up topping 100,000). Indeed, we lost power in a controlled shutoff beginning late Wednesday morning on the 17th, and we subsequently experienced extremely high winds at our location.

Here’s a screenshot I grabbed right at the initially forecasted 73 mph gust peak that evening:

and another, a couple of hours later, once the intensity had begun to dissipate, indicating that the actual peak gust at my location had been 85 mph:

Thursday the 18th was comparatively calm, and our residence power was briefly restored starting at 5:30 that evening. Early the next morning, however, the electricity went down again due to another Xcel Energy-initiated controlled shutoff, in advance of another extremely high-winds day. We got our power back to stay on Saturday evening the 20th at around 5 pm. That said, Xcel’s service to the affected region wasn’t fully restored until well into the following week.

Legal and fiscal precedent

Here’s some historical background on why Xcel Energy might have made this preparatory shutoff decision, and to this degree. On December 30, 2021, a grass fire in Boulder County, Colorado (north of me), later referred to as the Marshall Fire, started and was subsequently fueled by 115 mph peak wind gusts:

The fire caused the evacuation of 37,500 people, killed two people, and destroyed more than 1,000 structures to become the most destructive fire in Colorado history.

Xcel Energy was later assigned responsibility for one of the fire’s two root causes, although Wikipedia’s entry points out that it “was neither caused by criminal negligence nor arson.”

In June 2023, Boulder County Sheriff Curtis Johnson announced that the fire’s causes had been found. He said that the fire was caused by two separate occurrences: “week-old embers on Twelve Tribes property and a sparking Xcel Energy power line.”

Wikipedia goes on to note that “Xcel Energy has faced more than 200 lawsuits filed by victims of the fire.” Those lawsuits were settled en masse two-plus years later, and less than three months ahead of the subsequent wind-related incident I’m documenting today:

On September 24, 2025, just ahead of trial, the parties reached a settlement. Pursuant to the agreement, Xcel will pay $640 million without admitting liability for the Marshall Fire. The settlement, which also includes Qwest Corp. and Teleport Communications America, resolves claims brought by individual plaintiffs, insurance companies, and public entities impacted by the fire. The resolution avoids what was anticipated to be a lengthy trial. No additional details regarding the settlement have been disclosed at this time.

A providential outcome (for us, at least)

The prolonged outage, I’m thankful to say, only modestly affected my wife and me. We stuck it out at the house through Wednesday night, but given that the high winds precluded us from using our fireplaces as an alternative heat source (high winds also precluded the use of solar cell banks to recharge our various EcoFlow portable power setups, a topic which I’ll explore in detail in a subsequent post), we ended up dropping off our dog at a nearby kennel and heading “down the hill” the next (Thursday) morning to a still-powered hotel room for a few days:

Thanks in no small part to the few-hour electric power restoration overnight on Thursday, plus the cold outside temperatures, we ended up only needing to toss the contents of our kitchen refrigerator. The food in its freezer, plus that in both the refrigerator/freezer and standalone chest freezer in the garage, all survived. And both the house and its contents more generally made it through the multiple days of high winds largely unscathed.

Likely unsurprising to you, the public outcry at Xcel Energy’s shutoff decision, including but not limited to its extent and duration, has been heated. Some of it—demanding that the utility immediately bury all of its power lines, and at no cost to customers—is IMHO fundamentally, albeit understandably (we can’t all be power grid engineers, after all) ignorant. See, for example, my recent photograph of a few of the numerous high voltage lines spanning the hills above Golden:

There’s also grousing about the supposed inflated salaries of various Xcel Energy executives, for example, along with the as-usual broader complaints about Xcel Energy and other regulated monopolies.

That all said, I realize that other residents had it much worse off than us; they weren’t able to, and/or couldn’t afford to, relocate to a warm, electrified hotel room as we did, for example. Their outage(s) may have lasted longer than ours. They might have needed to throw out and replace more (and maybe all) of their refrigerated and frozen food (the same goes for grocery stores, restaurants, and the like). And their homes, businesses, and other possessions might have been damaged and/or destroyed by the high winds as well. All of it fueling frustration.

Results-rationalized actions?

But that all said, at the end of the day I haven’t heard of any fires resulting from the mid-December high winds, or for that matter the more recent ones primarily in the eastern half of the state and elsewhere (the two screenshots I shared at the beginning of this writeup showed the more modest effects at my particular location) that prompted another proactive shutdown. And of course, weather forecasting is an inexact science at best, so Xcel Energy’s conservative potential over-estimation of how large a region to shut down and for how long may be at least somewhat understandable, particularly in light of the recent sizeable settlement it just paid out.

In closing, I’m curious to hear what you think. Was Xcel Energy too pessimistic with its decisions and actions? Or maybe too optimistic? And is there anything it could do better and/or more to both in-advance predict and in-the-moment react to conditions in the air and on the ground, as well as to repair and revive service afterwards?

To wit, while I intended the word “oversight” in this write-up’s title to reference the following definition option:

  • Supervision; watchful care.

I realized in looking up the word that two other definition options are also ironically listed:

  • An omission or error due to carelessness.
  • Unintentional failure to notice or consider; lack of proper attention.

Which one(s) apply in this case? Let me know your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Preemptive utilities shutdown oversight: Too much, too little, or just right? appeared first on EDN.

AI is stress-testing processor architectures and RISC-V fits the moment

Thu, 02/19/2026 - 09:17

Every major computing era has been defined not by technology, but by a dominant workload—and by how well processor architectures adapted to it.

The personal computer era rewarded general-purpose flexibility, allowing x86 to thrive by doing many things well enough. The mobile era prioritized energy efficiency above all else, enabling Arm to dominate platforms where energy, not raw throughput, was the limiting factor.

AI is forcing a different kind of transition. It’s not a single workload. It’s a fast-moving target. Model scale continues to expand through sparse and mixture-of-experts techniques that stress memory bandwidth and data movement as much as arithmetic throughput. Model architectures have shifted from convolutional networks to recurrent models to transformers and continue evolving toward hybrid and emerging sequence-based approaches.

Deployment environments span battery-constrained edge devices, embedded infrastructure, safety-critical automotive platforms, and hyperscale data centers. Processing is spread across a combination of GPUs, CPUs, and NPUs where compute heterogeneity is a given.

The timing problem

Modern AI workloads demand new operators, execution patterns, precision formats, and data-movement behaviors. Supporting them requires coordinated changes across instruction sets, microarchitectures, compilers, runtimes, and developer tooling. Those layers rarely move in lockstep.

Precision formats illustrate the challenge. The industry has moved from FP32 to FP16, BF16, INT8, and now FP8 variants. Incumbent architectures continue to evolve—Arm through SVE and SVE2, x86 through AVX-512 and AMX—adding vector and matrix capabilities.
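As a rough illustration of the trade-off these formats embody, the sketch below round-trips an FP32 value through FP16 and through bfloat16 (the latter by keeping the top 16 bits of the FP32 bit pattern, rounding to nearest even). The helper names are mine, not from any vendor toolchain, and this is only a numeric sketch, not how production hardware performs the conversion.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip an FP32 value through IEEE-754 half precision
    (10 mantissa bits, 5 exponent bits) via struct's 'e' format."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

def to_bf16(x: float) -> float:
    """Round-trip an FP32 value through bfloat16: keep the top 16 bits
    of the FP32 pattern (7 mantissa bits, FP32's full 8-bit exponent),
    rounding to nearest, ties to even."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits += 0x7FFF + ((bits >> 16) & 1)  # round to nearest, ties to even
    return struct.unpack("<f", struct.pack("<I", (bits >> 16) << 16))[0]

# FP16 keeps more precision near 1.0; BF16 keeps FP32's dynamic range.
print(to_fp16(0.1))   # 0.0999755859375  (error ~2.4e-5)
print(to_bf16(0.1))   # 0.10009765625    (error ~9.8e-5)
print(to_bf16(1e38))  # still finite, whereas 1e38 overflows FP16 entirely
```

The pattern is the general one behind the format progression: BF16 gives up mantissa precision to retain FP32's exponent range, while FP16 and the newer FP8 variants push the same precision-versus-range trade-off further.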

But architectural definition is only the first step. Each new capability must propagate through toolchains, be validated across ecosystems, and ship in production silicon. Even when specifications advance quickly, ecosystem-wide availability unfolds over multiple product generations.

The same propagation dynamic applies to sparsity support, custom memory-access primitives, and heterogeneous orchestration. When workloads shift annually—or faster—the friction lies both in defining new processor capabilities and in aligning the full stack around them.

Figure 1 AI imposes multi-axis stress on processor architectures.

Traditional ISA evolution cycles—often measured in years from specification to broad silicon availability—were acceptable when workloads evolved at similar timescales. But they are structurally misaligned with AI’s rate of change. The problem is that architectural models optimized for long-term stability are now being asked to track the fast-paced and relentless reinvention of workloads.

The core issue is not performance. It’s timing.

Differentiate first, standardize later

Historically, major processor architectures have standardized first and deployed later, assuming hardware abstractions can be fully understood before being locked in. AI reverses that sequence. Many of the most important lessons about precision trade-offs, data movement, and execution behavior emerge in the development phase, while the models are still evolving.

Meta’s MTIA accelerator (MTIA ISCA23/MTIA ISCA25) makes use of custom instructions within its RISC-V–based processors to support recommendation workloads. That disclosure reflects a broader reality in AI systems: workload-specific behaviors are often discovered during product development rather than anticipated years in advance.

Figure 2 MTIA 2i architecture comprises an 8×8 array of processing elements (PEs) connected via a custom network-on-chip.

Figure 3 Each PE comprises two RISC-V processor cores and their associated peripherals (on the left) and a set of fixed-function units specialized for specific computations or data movements (on the right).

The MTIA papers further describe a model-hardware co-design process in which architectural features, model characteristics, and system constraints evolved together through successive iterations. In such environments, the ability to introduce targeted architectural capabilities early—and refine them during development—becomes an engineering requirement rather than a roadmap preference.

In centrally governed compute architectures, extension priorities are necessarily coordinated across the commercial interests of the stewarding entity and its licensees. That coordination has ensured coherence, backward compatibility, and ecosystem stability across decades.

It also means the pace and priority of architectural change reflect considerations that extend beyond any single vendor’s system needs and accumulate costs associated with broader needs, legacy, and compatibility.

The question is whether a tightly coupled generational cadence—and a centrally coordinated roadmap—remains viable when architectural optimization across a vast array of use cases must occur within the product development cycle rather than between them.

RISC-V decouples differentiation from standardization. A small, stable base ISA provides software continuity. Modular extensions and customizations allow domain-specific capabilities within product cycles. This enables companies and teams to innovate and differentiate before requiring broad consensus.

In other words, RISC-V changes the economics of managing architectural risk. Differentiation at the architecture level can occur without destabilizing the broader software base, while long-term portability is preserved through eventual convergence.

Matrix-oriented capabilities illustrate this dynamic. Multiple vendors independently explored matrix acceleration techniques tailored to their specific requirements. Rather than fragmenting permanently, those approaches are informing convergence through RISC-V International’s Integrated Matrix Extensions (IME), Vector Matrix Extensions (VME), and Attached Matrix Extensions (AME) working groups.

The result is a path toward standardized matrix capabilities shaped by multiple deployment experiences rather than centralized generational events that need consensus ahead of time.

Standardization profiles such as RVA23 extend this approach, defining compatible collections of processor extensions while preserving flexibility beneath the surface.

In practical product terms, this structural difference shows up in development cadence. In many established architectural models, product teams anchor around a stable processor core generation and address new workload demands by attaching increasingly specialized accelerators.

Meaningful architectural evolution often aligns with major roadmap events, requiring coordinated changes across hardware resources, scheduling models, and software layers. By contrast, RISC-V’s base-and-extension model allows domain-specific capabilities to be introduced incrementally on top of a stable ISA foundation.

Extensions can be validated and supported in software without requiring a synchronized generational reset. The distinction is not about capability; it’s about where, when, and how innovation occurs in the product cycle.

From inference silicon to automotive

This difference becomes apparent in modern inference silicon.

Architectural requirements—tightly coupled memory hierarchies, custom data-movement patterns, mixed-precision execution, and accelerator-heavy fabrics—are often refined during silicon development.

Take the case of D-Matrix, which has selected a RISC-V CPU for vector compute and orchestration, memory, and workload distribution management for its 3DIMC in-memory compute inference architecture. In architectures where data movement and orchestration dominate energy and latency budgets, the control plane must adapt alongside the accelerator. Architectural flexibility in the control layer reduces development iteration friction during early product cycles.

The tension between architectural stability and workload evolution is especially visible in automotive.

ISO 26262 functional safety qualification can take years, and vehicle lifecycles span a decade or more. Yet advanced driver assistance systems (ADAS) depend on perception models that are continuously evolving with improved object detection, sensor fusion, and self-driving capabilities. As a result, the automotive industry faces a structural tension: freeze the architecture and risk falling behind or update continuously and requalify repeatedly.

A stable, safety-certified RISC-V foundation paired with controlled extensions offers one way to balance those forces—architectural continuity where validation demands it, and differentiation where workloads require it.

This approach has industry backing. Bosch, NXP, Qualcomm, Infineon, and STMicroelectronics have formed Quintauris specifically to standardize RISC-V profiles for automotive, targeting exactly this combination of long-term architectural stability with application-layer adaptability.

The fact that this represents hardware suppliers, microcontroller vendors, and system integrators simultaneously reflects how broadly the industry has recognized the problem and the approach.

A moment defined by engineering reality

RISC-V’s expanding role in AI is not a rejection of incumbent architectures, which continue to deliver performance and compatibility across a wide range of systems. It reflects a shift in engineering constraints highlighted by AI’s pace.

When workloads evolve faster than architectural generations, adaptability becomes an economic variable. The architecture that prevails is not necessarily the one that runs today’s models fastest. It’s the one that can adjust when those models change.

Legacy processor architectures provide broad stability across generations. RISC-V adds a structural advantage in adaptation velocity—the ability to accommodate differentiation within the product cycle, absorb lessons from deployment, and converge toward standardization—without forcing system architects to wait for generational events. It can adapt to tomorrow’s workloads and course-correct without breaking yesterday’s software.

Marc Evans is director of business development and marketing at Andes Technology USA, a founding premier member of RISC-V International. He is also the organizer of RISC-V Now! (www.riscv-now.com) to be held in Silicon Valley on April 20-21, 2026, a conference focused on the practical lessons of deploying RISC-V at commercial scale across AI, automotive, and data centers.

Special Section: AI Design

The post AI is stress-testing processor architectures and RISC-V fits the moment appeared first on EDN.

Silly simple supply sequencing

Wed, 02/18/2026 - 15:00

Frequent contributor R. Jayapal recently shared an interesting Design Idea (DI) for power supply control and sequencing in MCU-based applications that combine analog and digital circuitry: “Short push, long push for sequential operation of multiple power supplies.”

The application becomes challenging when there’s a requirement to have the digital side powered up and stable for a programmable interval (typically a second or two) before the analog comes online.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Since Jayapal had already published a fine digital solution to the problem, I’ve taken the liberty of attempting an (almost painfully) simple analog version using an SPDT switch for ON/OFF control and RC time constants, and Schmitt triggers for sequencing.  Figure 1 shows how it works.

Figure 1 Simple analog supply sequencing accomplished using an SPDT switch for ON/OFF control and RC time constants, and Schmitt triggers for sequencing. 

Switching action begins with S1 in the OFF position and both C1 and C2 timing caps discharged.  This holds U1 pin 1 at 15 V and pin 3 at 0 V.  The latter holds enhancement-mode PFET Q1’s gate at 15 V, so both the transistor and the 15-Vout rail are OFF.  Meanwhile, the former holds NFET Q2’s gate at zero and therefore Q2 and the 5-Vout rail are likewise OFF.  No power flows to the connected loads.

Figure 2 shows what happens when S1 is flipped to ON.

Figure 2 Power sequence timing when S1 is flipped to ON, connecting C2 near ground through R3.

Moving S1 from OFF to ON connects C2 near ground through R3, charging it to the Schmitt trigger low-going threshold in about R3C2 = 1 ms.  This reverses U1 pin 2 to 15 V, placing a net forward bias of 10 V on NFET Q2, turning on Q2, the 5-Vout rail, and connected loads. Thus they will remain ON as long as S1 stays ON.

Meanwhile, back at the ranch, the reset of C1 has been released, allowing it to begin charging through R1. Nothing much else happens until it reaches U1’s ~10-V threshold, which requires roughly T1 = ln(3)R1C1 = 2.2 seconds for the component values shown. Of course, almost any desired interval can be chosen with different values. When R1C1 times out, U1 pin 4 snaps low, PFET Q1 turns ON, and 15-Vout goes live. Turn ON sequencing is therefore complete. 
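The ln(3) factor falls out of the standard RC charging equation: a node charging from 0 V toward 15 V reaches the ~10-V (two-thirds) threshold at t = -RC·ln(1 - 2/3) = RC·ln(3) ≈ 1.1RC. A quick sketch, using hypothetical R1/C1 values (the schematic's actual values aren't reproduced in this text) chosen to match the quoted 2.2-s delay:

```python
import math

def rc_threshold_delay(r_ohms, c_farads, v_supply=15.0, v_threshold=10.0):
    """Time for an RC node charging from 0 V toward v_supply to reach
    v_threshold: t = -R*C*ln(1 - Vth/Vsup)."""
    return -r_ohms * c_farads * math.log(1.0 - v_threshold / v_supply)

# Hypothetical example values: R1 = 2 MOhm, C1 = 1 uF
# gives ln(3)*R1*C1 ~= 2.2 s.
t1 = rc_threshold_delay(2.0e6, 1.0e-6)
print(round(t1, 2))  # 2.2
```

Any other interval scales linearly with the R1C1 product, which is why the article notes that almost any desired delay can be chosen.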

The right side of Figure 2 shows what happens when S1 is flipped to OFF.

Firstly, C1 is promptly discharged through R3, turning off Q1 and 15-Vout, putting it and whatever it powers to sleep.  Then C2 begins ramping from near zero to 15 V, taking T2 = ln(3)R2C2 = 2.2 seconds to get to U1’s threshold.  When it completes the trip, pin 2 goes low, turning Q2 and 5-Vout OFF.  Turn OFF sequencing is therefore complete. 

Marginal details of the design include the two 1N4148 diodes, whose purpose is to make the sequencer’s response to losing and regaining the input rail voltage orderly, regardless of whether S1 is ON or OFF when/if that happens.  The MOSFETs should be chosen for adequate current-handling capacity; note that since Q1 has 15 V of gate-source drive and Q2 gets 10 V, neither needs to be a sensitive logic-level device.

Figure 3 shows some alternative implementation possibilities for U1’s triggers in case using a hex device with four sections unused seems inconvenient or wasteful.

 Figure 3 Alternative Schmitt trigger possibilities.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Silly simple supply sequencing appeared first on EDN.
