EDN Network

Voice of the Engineer

Electronic design with mechanical manufacturing in mind

Fri, 12/19/2025 - 17:02

Electronics design engineers spend substantial effort on schematics, simulation, and layout. Yet, a component’s long-term success also depends on how well its physical form aligns with downstream mechanical manufacturing processes.

When mechanical design for manufacturing (DFM) is treated as an afterthought, teams can face tooling changes, line stoppages, and field failures that consume the budget and schedule. Building mechanical constraints into design decisions from the outset helps ensure that a concept can transition smoothly from prototype to production without surprises.

The evolving electronic prototyping landscape

Traditional rigid breadboards and perfboards still have value, but they often fall short when a device must conform to curved housings or wearable form factors. Engineers who prototype only on flat, rigid platforms may validate electrical behavior while missing mechanical interactions such as strain, connector access, and housing interface.

Researchers are responding with prototyping approaches that behave more like the eventual product. For example, MIT researchers developed a flexible breadboard called FlexBoard and tested it by bending it 1,000 times; the board remained fully functional even after repeated deformation.

This bidirectional flexibility allowed the platform to wrap around curved surfaces. It also gave designers a more realistic way to evaluate electronics for wearables, robotics and embedded sensing, where hardware rarely follows a simple planar shape. As these flexible platforms mature, they encourage engineers to think of mechanical behavior not as a late-stage limitation but as a design parameter from the very first version.

Integrating mechanical processes in design

Once a prototype proves the concept, the conversation quickly shifts toward how each part will be manufactured at scale. At this stage, the schematic on paper must reconcile with press stroke limits, tool access, wall thickness, and fixturing. Designing components with specific processes in mind reduces the risk of discovering later that geometry cannot be produced within the budget or timeline.

Precision metal stamping

Metal stamping remains a core process for electrical contacts, terminals, EMI shields, and mini brackets. It excels when parts repeat across high volumes and require consistent form and dimensional control.

A key example is progressive stamping, in which a coil of metal advances through a die set, where multiple stations perform operations in rapid sequence. It strings steps together, so finished features emerge with high repeatability and narrow dimensional spread, making the process suitable for high-volume component manufacturing.

Early collaboration with stamping specialists is beneficial. Material thickness, bend radii, burr direction, and grain orientation all influence tool design and reliability. Features such as stress-relief notches or coined contact areas can often be integrated into the strip layout with little marginal cost once they are considered before the tool is built.

CNC machining

CNC machining often becomes the preferred option where only a few pieces are necessary or shapes are more complicated. It supports complex 3D forms, small production runs, and late-stage changes with fewer up-front tooling costs compared to stamping.

Machined aluminum or copper heatsinks, custom connector housings, and precision mounting blocks are common examples. Designers who plan for machining will benefit from consistent wall thicknesses, accessible tool paths, and tolerances that fit the machine’s capability.

Advanced materials for component durability

The manufacturing method is only part of the process. The base material choice can determine whether a design survives thermal cycles, vibrations, and electrostatic exposure over years of service. Recent work in advanced and responsive materials provides design teams with additional tools to manage these threats. Self-healing polymers and composites are notable examples.

Some of these materials incorporate conductive fillers that redirect electrostatic charge. By steering current away from a single microscopic region, the structure avoids excessive local stress and preserves its functionality for a longer period. For applications such as wearables and portable electronics, this behavior can support longer service intervals and a greater perceived quality.

Engineers are also evaluating high-temperature polymers, filled elastomers, and nanoengineered coatings for use in flexible and stretchable electronics. Each material brings trade-offs in cost, process compatibility, recyclability, and performance. Considering those alongside mechanical processes and board layout helps establish a coherent path from prototype through volume production.

The next generation of electronic products demands a perspective that merges circuit behavior with how parts will be formed, assembled, and protected in real-world environments. Flexible prototyping platforms, process-aware designs for stamping and machining, and careful selection of advanced materials all contribute to this mindset.

When mechanical manufacturing is considered from the get-go, design teams position their work to run reliably on production lines and in the hands of end users.

Ellie Gabel is a freelance writer and associate editor at Revolutionized.


The DiaBolical dB

Fri, 12/19/2025 - 15:00

Engineers and technicians who work with oscilloscopes are used to seeing waveforms that plot a voltage versus time. Almost all oscilloscopes these days include the Fast Fourier Transform (FFT) to view the acquired waveform in the frequency domain, similar to a spectrum analyzer.

In the frequency domain, the waveforms plot amplitude versus frequency. This view of the signal uses a different scaling. The default vertical scaling of the frequency domain is dBm, or decibels relative to one milliwatt, as shown in Figure 1.

Figure 1 An oscilloscope’s spectrum display (lower grid) uses default vertical units of dBm to display power versus frequency. (Source: Art Pini)

The FFT displays the signal’s frequency spectrum as either power or voltage versus frequency. The default dBm scale measures signal power; alternative units include voltage-based magnitude. In its various forms, the decibel has long confused well-trained technical professionals accustomed to the time domain.  If dB is a mystery to you, this article covers the basics you need to know.

The dB was originally a measure of relative power in telephone systems. The unit of measure was named the Bel after Alexander Graham Bell. The decibel (dB) is one-tenth of a Bel and is more commonly used in practice. For electrical applications, the definition of the decibel is:

dB = 10 log10 (P2/P1)

Where P1 and P2 are the two power levels being compared.

There are a few key points to note. The first is that the dB is a relative measurement; it measures the ratio of two power levels, P1 and P2, in this example. The second thing is that the dB scale is logarithmic.  The log scale is non-linear, emphasizing low-amplitude signals and compressing higher-amplitude signals. This scaling is particularly useful in the frequency domain, where signals tend to exhibit large dynamic ranges.
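As a quick numerical check of the definition, here is a minimal Python sketch of the power-ratio form. It is generic, not tied to any instrument:

import math

def power_db(p2, p1):
    """Decibel value of the power ratio P2/P1."""
    return 10 * math.log10(p2 / p1)

# A 2:1 power ratio is about 3 dB; a 1:100 ratio is -20 dB.
print(round(power_db(2, 1), 1))    # 3.0
print(round(power_db(1, 100), 1))  # -20.0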

Based on this definition, some common power ratios and their equivalent dB values are shown in Table 1.

P2/P1     dB
2:1        3
4:1        6
10:1      10
100:1     20
1:2       -3
1:4       -6
1:10     -10
1:100    -20

Table 1 Common power ratios and the equivalent decibel values. (Source: Art Pini)

The decibel can also compare root-power quantities, such as voltage. The definition of the decibel for voltage ratios, derived from the definition for power ratios, is:

dB = 10 log10 [(V2²/R)/(V1²/R)]
= 10 log10 (V2/V1)²
= 20 log10 (V2/V1)

Where V1 and V2 are the two voltage levels being compared, and R is the terminating resistance.

This derivation uses the fact that an exponent inside a logarithm becomes a multiplier outside it. The variable R, the terminating resistance (usually 50 Ω), cancels in the math but can still affect decibel measurements when different resistance values are involved.
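The factor of 20 is easy to verify numerically. A minimal sketch of the voltage form, assuming equal terminating resistances:

import math

def voltage_db(v2, v1):
    """Decibel value of the voltage ratio V2/V1 (equal terminations assumed)."""
    return 20 * math.log10(v2 / v1)

# Doubling a voltage adds ~6 dB, versus ~3 dB for doubling power.
print(round(voltage_db(2, 1), 1))   # 6.0
print(round(voltage_db(10, 1), 1))  # 20.0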

The voltage-based definition of dB yields the following dB values for these voltage ratios, as shown in Table 2. 

V2/V1     dB
2:1        6
4:1       12
10:1      20
100:1     40
1:2       -6
1:4      -12
1:10     -20
1:100    -40

Table 2 Common voltage ratios and their equivalent decibel values. (Source: Art Pini)

Relative and absolute measurements

As we have seen, the decibel is a relative measure that compares two power or voltage levels. As such, it is perfect for characterizing transmission gain or loss and is used extensively in scattering (S) parameter measurements.

An absolute measurement can be made by referencing the measurement to a known quantity. The standard reference values in electronic applications are the milliwatt (dBm), the microvolt (dBµV), and the volt (dBV).

The decibel is used in various other applications, such as acoustics. The sound pressure level in acoustic applications is also measured in dB, and the standard reference is 20 microPascals (μPa).

Using dBm

Based on the definition of dB for power ratios and using 1 mW (0.001 Watt) as the reference, dBm is calculated as:

 dBm = 10 log10 (P2/0.001)

Where P2 is the power of the signal being measured.

 Converting from measured power in dBm to power in watts uses the same equation in reverse.

P2 = 0.001 × 10^(dBm/10)

For example, consider the power level in watts (W) for the highest spectral peak, given by the first measurement table entry in Figure 1: -5.8 dBm at 5 MHz. The power, in watts, is calculated as follows:

P2 = 0.001 × 10^(-5.8/10)
P2 = 2.63×10⁻⁴ W = 0.263 mW
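Both conversions are one-liners in practice. A minimal Python sketch, reproducing the example above:

import math

def dbm_to_watts(dbm):
    """Convert a power level in dBm to watts (1-mW reference)."""
    return 0.001 * 10 ** (dbm / 10)

def watts_to_dbm(watts):
    """Convert a power level in watts to dBm."""
    return 10 * math.log10(watts / 0.001)

# The -5.8 dBm spectral peak from Figure 1:
print(dbm_to_watts(-5.8))             # ~2.63e-4 W, i.e., 0.263 mW
print(round(watts_to_dbm(0.001), 1))  # 1 mW -> 0.0 dBm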

Common power levels and their equivalent dBm values are shown in Table 3.

Power Level    dBm
1 mW             0
2 mW             3
0.5 mW          -3
10 mW           10
0.1 mW         -10
100 mW          20
0.01 mW        -20
1 W             30
10 W            40
100 W           50
1000 W          60

Table 3 Common power levels and their equivalent dBm values. (Source: Art Pini)

The calculation of absolute voltage values for voltage-based decibel measurements is similar. To calculate the voltage level for a decibel value in dBV, the equation is:

V2 = 1 × 10^(dBV/20)

For a measured value of 0.3 dBV, the equivalent voltage level is:

V2 = 1 × 10^(0.3/20)
V2 = 1.035 volts

Converting from dBV to dBµV is a scaling, or multiplication, operation. So, if you remember the characteristics of logarithms, multiplication within the logarithm becomes addition, and division becomes subtraction. The conversion requires a simple additive constant, as derived below:

dBµV = 20 log10(V2/1×10⁻⁶)
dBµV = 20 log10(V2/1) – 20 log10(10⁻⁶)

But:

dBV = 20 log10 (V2/1)
dBµV = dBV + 120

 A little basic algebra and the reverse operation is:

dBV = dBµV – 120
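A short sketch of the voltage-referenced conversions, reproducing the 0.3-dBV example and the +120-dB offset:

def dbv_to_volts(dbv):
    """Convert dBV (1-V reference) to volts."""
    return 10 ** (dbv / 20)

def dbv_to_dbuv(dbv):
    """Convert dBV to dBµV (1-µV reference): add 120 dB."""
    return dbv + 120

print(round(dbv_to_volts(0.3), 3))  # ~1.035 V
print(dbv_to_dbuv(0.3))             # 120.3 dBµV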

What if the source impedance isn’t 50 Ω?

Typically, RF work utilizes cables and terminations with a characteristic impedance of 50 Ω. In video, the standard impedance is 75 Ω; in audio, it is 600 Ω. Reading dBm correctly on a 50-Ω input oscilloscope when the source is calibrated for a different impedance requires adjustments.

First, it is standard practice to terminate sources with their characteristic impedances. A 75-Ω or 600-Ω system signal source requires an appropriate impedance-matching device to connect to a 50-Ω measuring instrument.  The most common is the simple resistive impedance-matching pad (Figure 2).

Figure 2 This schematic of a typical 600 to 50 Ω impedance matching pad reflects a 600 Ω load to the source and provides a 50 Ω source impedance for the measuring instrument. (Source: Art Pini)

The matching pad presents a 600-Ω load to the signal source, while the instrument sees a 50-Ω source, so both devices present the expected impedances. This decreases signal losses by minimizing reflections. The impedance pad is a voltage divider with an insertion loss of 16.63 dB, which must be compensated for in the measurement instrument.

The next step is where the terminating resistances come into play. If the source and load impedances differ, this difference must be considered, as it affects the decibel readings. Going back to the basic definition of decibel:

dB = 10 log10 [(V2²/R2)/(V1²/R1)]

Consider how the impedance affects the voltage level equivalent to the one-milliwatt power reference level. The reference voltages equivalent to one milliwatt differ between the 600-Ω and 50-Ω sides of the measurement:

Pref = 0.001 W = Vref600²/600 = Vref50²/50
dBm600 = 10 log10 [V2²/Vref600²]
= 10 log10 [V2²/(Vref50² × 600/50)]
= 10 log10 [(50/600)(V2²/Vref50²)]
= 10 log10 [V2²/Vref50²] + 10 log10 (50/600)
dBm600 = dBm50 – 10.8

The dBm reading on the 50-Ω instrument is 10.8 dB higher than that on the 600-Ω source because the reference power level is different for the two load impedances.  

The oscilloscope’s rescale operation can scale the spectrum display to dBm referenced to 600 Ω. Assuming a 600-Ω to 50-Ω impedance matching pad with an insertion loss of 16.63 dB is used, and the above-mentioned -10.8-dB correction factor is applied, a net scaling factor of 5.83 dB must be added to the FFT spectrum, as shown in Figure 3.

Figure 3 Using the rescale function of the oscilloscope to recalibrate the instrument to read spectrum levels in dBm relative to a 600-Ω source. (Source: Art Pini)

The 600-Ω source is set to output a zero-dBm signal level. A 600-Ω to 50-Ω impedance matching pad with an insertion loss of 16.63 dB properly terminates the signal source into the oscilloscope’s 50-Ω input termination.  The oscilloscope’s rescale function is applied to the FFT of the acquired signal, adding 5.83 dB to the signal’s spectrum display.  This yields a near-zero dBm reading at 5 MHz.
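The arithmetic behind that rescale factor is compact enough to check in a few lines of Python; the numbers below come straight from the text:

import math

pad_insertion_loss = 16.63                        # dB, 600-ohm-to-50-ohm pad
impedance_correction = 10 * math.log10(50 / 600)  # ~ -10.79 dB

# Net factor added to the FFT so it reads dBm referenced to 600 ohms.
net_scale = pad_insertion_loss + impedance_correction
print(round(impedance_correction, 2))  # -10.79
print(round(net_scale, 2))             # 5.84 (~5.83 with the text's 10.8)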

The measurement parameter P1 measures the RMS input to the oscilloscope, showing the attenuation of the external impedance matching pad. The peak-to-peak (P2) and peak voltage (P3) readings are also measured. The peak level of the 5-MHz signal spectrum (P4) is near zero dBm (22 milli-dB). The uncorrected peak spectrum level (P5) is -5.8 dBm.

The vertical scale of the spectrum display is now calibrated to match the 600-Ω source. Note that the signal at 5 MHz reads 0 dBm, which matches the signal source setting of 0 dBm (0.774 Vrms) into the expected 600-Ω load.

The decibel

Because its logarithmic scale compresses large dynamic ranges, the decibel is a useful unit of measure and appears in various applications, mainly in the frequency domain. Converting between linear and logarithmic scaling takes some getting used to and possibly a lot of math.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


Semiconductor technology trends and predictions for 2026

Fri, 12/19/2025 - 11:00

As we look ahead to 2026, we see intelligence increasingly being embedded within physical products and everyday interactions. This shift will be powered by rapid adoption of digital identity technologies such as near-field communication (NFC) alongside AI and agentic AI tools that automate workflows, improve efficiency, and accelerate innovation across the product lifecycle.

The sharp rise in NFC adoption—with 92% of brands already using or planning to use it in products in the next year—signals appetite to unlock the true value of the connected world. Enabling intelligence in new places gives brands the opportunity to bridge physical and digital experiences for positive social, commercial, and environmental outcomes.

Regulatory milestones, such as the phased rollout of the EU Digital Product Passport, along with sustainability pressures and the need to ensure transparency to drive trust will be key catalysts for edge and item-level AI.

In the year ahead, companies will unlock significant benefits in customer experience, sustainability, compliance, and supply chain efficiency by embedding intelligence from the edge to individual items and devices.

Let’s dig deeper into the technology trends shaping 2026.

  1. Edge AI is the fastest-growing frontier in semiconductors

Driven by the shift from pure inference to on-device training and continuous, adaptive learning, 2026 will see strong growth in edge AI demand. Specialized chips such as low-power machine learning accelerators, sensor-integrated chips, and memory-optimized chips will be used in consumer electronics, smart cities, and industrial IoT.

Next, new packaging approaches will become the proving ground for performance, cost efficiency, and miniaturization in intelligent edge devices.

  2. Item-level intelligence is accelerating digital transformation

Intelligence will not stop at the device. Over the next 12 months, low-cost sensing, NFC, and edge AI will push computation down to individual items.

The capability to gather real-time data at item level in a move away from batch data, combined with AI, will enable personalized experiences, automation, and predictive analytics across smart packaging, healthcare and wellness products, retail, and logistics. Applications include real-time tracking, AI-driven personalization, automated supply chain optimization, predictive maintenance, and dynamic authentication.

This marks a fundamental shift as every item becomes a data node and source of intelligence.

  3. Connected consumer experiences are driving breakthrough NFC adoption

NFC adoption is accelerating alongside the explosion of connected consumer experiences—from wearables and hearables to smart packaging, digital keys and wellness applications. NFC will become a central enabler of trust, personalization, and seamless connectivity.

Figure 1 NFC has become a key enabler in personalization-centric connectivity. Source: Pragmatic Semiconductor

As consumers increasingly expect intelligent product interaction, for example, to track provenance or engage with wellness apps to build a personalized profile and derive usable insights, the opportunity for NFC is clear. Brands will favor ultra-low-cost and thin NFC solutions—where flexible and ultra-thin semiconductors excel—to deliver frictionless, high-quality consumer experiences.

  4. Heterogeneous integration will unlock design innovation

Heterogeneous integration through chiplets, interposers, and die stacking will become the preferred approach for achieving higher density and improved yields. This is a key enabler for miniaturization and differentiated form factors in facilitating customization for edge AI.

At the same time, the rise of agentic AI-driven EDA tools will lower design barriers and fuel cost-effective innovation through natural language tools. This will ignite startup growth and increase demand for agile, cost-effective foundry design services.

  5. Compliance shifts from cost to competitive advantage

New regulatory frameworks such as Digital Product Passports, circularity, and Extended Producer Responsibility (EPR) will require authentication, traceability, and lifecycle visibility. Rather than a burden, this presents a strategic opportunity for competitive advantage and market expansion.

Embedded digital IDs with NFC capability allow businesses to secure product authentication, meet compliance and governance expectations, and unlock new value in consumer engagement. As compliance moves from paper systems to embedded intelligence, the opportunity will expand across consumer goods, industrial components, and supply chains.

  6. Energy constraints are driving efficiencies in semiconductor manufacturing

As semiconductor manufacturing scales to serve AI demand, growing energy consumption in data centers is forcing industry to focus on power-efficient architectures. This is accelerating a shift away from centralized compute toward fully distributed sensing and intelligence at the edge. Edge AI architectures are designed to process data locally rather than transmit it upstream and will be essential to sustaining AI growth without compounding energy constraints.

Figure 2 Semiconductor manufacturing will increasingly adopt circular design principles such as reuse, recycling, and recoverability. Source: Pragmatic Semiconductor

The capability to establish and scale domestic manufacturing will also play a critical role in cutting embedded emissions and enabling more sustainable and efficient supply chains. Semiconductor manufacturing facilities, known as foundries, will be evaluated on their energy and material efficiency, supported by circular design principles such as reuse, recycling, and recoverability.

Companies that can demonstrate strong environmental commitments will gain long-term competitive advantage, attracting customers, partners, and skilled talent.

Intelligence right to the edge

These trends point toward a definitive shift as intelligence moves dynamically into the physical world. Compute will become increasingly distributed and identity embedded, unlocking efficiencies and delivering real-time insights into the fabric of products, infrastructure, and supply chains.

Semiconductor manufacturing will sit at the heart of the next phase of digital transformation. Flexible and ultra-thin chip technologies will enable new classes of innovations, from emerging form factors such as wearables and hearables to higher functional density in constrained spaces, alongside more carbon-efficient manufacturing models.

The implications for businesses are clear. Companies can accelerate innovation, deepen consumer engagement, and turn compliance into a source of competitive advantage. Those that embed connected technologies into their 2026 strategy will be those that are best positioned to take advantage of the digital transformation opportunities ahead.

Richard Price is co-founder and chief technology officer of Pragmatic Semiconductor.

An off-the-shelf digital twin for software-defined vehicles

Thu, 12/18/2025 - 16:01

The complexity of vehicle hardware and software is rising at an unprecedented rate, so traditional development methodologies are no longer sufficient to manage system-level interdependencies among advanced driver assistance systems (ADAS), autonomous driving (AD), and in-vehicle infotainment (IVI) functions.

That calls for a new approach, one that enables automotive OEMs and tier 1s to speed the development of software-defined vehicles (SDVs) with early full-system virtual integration that mirrors real-world vehicle hardware. That will accelerate both application and low-level software development for ADAS, AD, and IVI and remove the need for design engineers to build their own digital twins before testing software.

It will also reduce time-to-market for critical applications from months to days. Siemens EDA has unveiled what it calls a virtual blueprint for digital twin development: PAVE360 Automotive, digital twin software that is pre-integrated as an off-the-shelf offering to address the escalating complexity of automotive hardware and software integration.

While system-level digital twins for SDVs using existing technologies can be complex and time-consuming to create and validate, PAVE360 Automotive aims to deliver a fully integrated, system-level digital twin that can be deployed on day one. That reduces the time, effort, and cost required to build such environments from scratch.

Figure 1 PAVE360 Automotive is a cloud-based digital twin that accelerates system-level development for ADAS, autonomous driving, and infotainment. Source: Siemens EDA

“The automotive industry is at the forefront of the software-defined everything revolution, and Siemens is delivering the digital twin technologies needed to move beyond incremental innovation and embrace a holistic, software-defined approach to product development,” said Tony Hemmelgarn, president and CEO, Siemens Digital Industries Software.

Siemens EDA’s digital twin—a cloud-based off-the-shelf offering—allows design engineers to jumpstart vehicle systems development from the earliest phases with customizable virtual reference designs for ADAS, autonomous driving, and infotainment. Moreover, the cloud-based collaboration unifies development with a single digital twin for all design teams.

The Arm connection

Earlier, Siemens EDA joined hands with Arm to accelerate virtual environments for Arm Cortex-A720AE in 2024 and Arm Zena Compute Subsystems (CSS) in 2025. Now Siemens EDA is integrating Arm Zena CSS with PAVE360 Automotive to enable design engineers to start building on Arm-based designs faster and more seamlessly.

Figure 2 Here is how PAVE360’s digital twin works alongside the Arm Zena CSS platform for AI-defined vehicles. Source: Siemens EDA

Meanwhile, access to Arm Zena CSS in a digital twin environment such as PAVE360 Automotive can accelerate software development by up to two years. “With Arm Zena CSS available inside Siemens’ pre-integrated PAVE360 Automotive environment, partners can not only customize their solutions leveraging the unique flexibility of the Arm architecture but also validate and iterate much earlier in the development cycle,” said Suraj Gajendra, VP of products and solutions for Physical AI Business Unit at Arm.

PAVE360 Automotive, now made available to key customers, is planned for general availability in February 2026. It will be demonstrated live at CES 2026 in the Auto Hall on 6–9 January 2026.


Wide-range tunable RC Schmitt trigger oscillator

Thu, 12/18/2025 - 15:00

In this Design Idea (DI), the classic Schmitt-trigger-based RC oscillator is “hacked” and analyzed using the simulation software QSPICE. You might reasonably ask why one would do this, given that countless such circuits reliably and unobtrusively clock away all over the world, even in space.

Well, problems arise when you want the RC oscillator to be tunable, i.e., when you replace the resistor with a potentiometer. Unfortunately, the frequency is inversely proportional to the RC time constant, resulting in a hyperbolic response curve.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Another drawback is the limited tuning range. For high frequencies, R can become so small that the Schmitt trigger’s output voltage sags unacceptably.

The oscillator’s current consumption also increases as the potentiometer resistance decreases. In practice, an additional resistor ≥1 kΩ must always be placed in series with the potentiometer.

The potentiometer’s maximum value determines the minimum frequency. For values >100 kΩ, jitter problems can occur due to hum interference when operating the potentiometer, unless a shielded enclosure is used.
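To put numbers on the hyperbolic tuning law, here is a minimal Python sketch. It assumes 555-style thresholds of 1/3 and 2/3 of the supply, for which the period of the basic one-resistor oscillator works out to T = 2·ln(2)·RC; the capacitor value is illustrative:

import math

C = 1e-9  # 1-nF timing capacitor (assumed value)

def freq(r_ohms):
    """Frequency of a Schmitt RC oscillator with 1/3 and 2/3 VDD thresholds."""
    return 1.0 / (2 * math.log(2) * r_ohms * C)

# Sweeping the potentiometer linearly gives a hyperbolic frequency curve:
for r in (1e3, 10e3, 50e3, 100e3):
    print(f"R = {r/1e3:5.0f} kohm -> f = {freq(r)/1e3:8.2f} kHz")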

RC oscillator

Figure 1 shows an RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger. It is parameterized as a TLC555 (CMOS 555) regarding switching thresholds and load behavior.

Figure 1 An RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger, parameterized as a TLC555 (CMOS 555).

Figure 2 displays the typical triangle wave input and the square wave output. At R1=1 kΩ, the output voltage sag is already noticeable, and the average power dissipation of R1 is around 6 mW, roughly an order of magnitude higher than the dissipation of a low-power CMOS 555.

Figure 2 The typical triangle wave input and the square wave output, where the average power dissipation of R1 is around 6 mW.

Frequency response versus potentiometer resistance

Next, we examine the oscillator’s frequency response as a function of the potentiometer resistance. R1 is simulated in 100 steps from 1 kΩ to 100 kΩ using the .step param command.

The simulation time must be long enough to capture at least one full period even at the lowest frequency; otherwise, the period duration cannot be measured with the .meas command.

However, with a 3-decade tuning range, far too many periods would be simulated at high frequencies, making the simulation run for a very long time.

Fortunately, QSPICE has a new feature that allows a running simulation to be aborted, after which the new simulation for the next parameter step is executed. The abort criterion is a behavioral voltage source called AbortSim(). It’s not the most elegant or intuitive feature, but it works.

Schmitt trigger oscillator

Figure 3 shows our Schmitt trigger oscillator, but this time with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim(). My idea was to build a counter clocked by the oscillator. After a small number of clock pulses—enough for one period measurement—the simulation is aborted.

Figure 3 Schmitt trigger oscillator, this time, with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim().

I first tried a 3-stage ripple counter with behavioral D-flops. This worked but wasn’t optimal in terms of computation time.

The step voltage generator in the box in Figure 3 is faster and easier to adjust. A 10-ns monostable is triggered by V(out) of the oscillator and sends short current pulses via the voltage-controlled current source to capacitor C3. The voltage across C3 triggers AbortSim() at or above 0.5 V.

The constant current and C3 are selected so that the 0.5 V threshold is reached after 3 clock cycles of the oscillator, thus starting the next measurement.
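The dimensioning is simple charge arithmetic. A sketch with assumed values (the DI doesn’t spell out C3 or the pulse current):

pulses = 3          # abort after 3 oscillator periods
v_thresh = 0.5      # volts across C3 that fire AbortSim()
t_pulse = 10e-9     # 10-ns monostable output
c3 = 100e-12        # assumed 100-pF capacitor

# Each pulse deposits Q = I * t_pulse; after `pulses` pulses, V = pulses*Q/c3.
i_pulse = v_thresh * c3 / (pulses * t_pulse)
print(f"required pulse current: {i_pulse*1e3:.2f} mA")  # ~1.67 mA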

Note that the simulation time in the .tran command is set to 5 s, which is never reached due to AbortSim().

The entire QSPICE simulation of the frequency response takes the author’s PC a spectacular 1.5 s, whereas previously with LTSPICE (without the abort criterion) it took many minutes.

Figure 4 shows the frequency (FREQ) versus potentiometer resistance (RPOT) curve in a log plot, interpolated over 100 measurement points.

Figure 4 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points.

Final circuit hack

Now that we have the simulation tools for fast frequency measurement, we finally get to the circuit hack in Figure 5. We expand the circuit in Figure 1 with a resistor R2=RPOT in series with C1.

Figure 5 Hacked Schmitt trigger oscillator with an expanded Figure 1 circuit that includes R2=RPOT in series with C1.

Figure 6 illustrates what happens: for R2=0 (blue trace), we see the familiar triangle wave. When R2 is increased (magenta trace), a voltage divider:

V(out)/V(R2) = (R1+R2)/R2

is created if we momentarily ignore V(C1). V(R2) is thus a scaled-down V(out) square wave signal, to which the V(C1) triangle wave voltage is now added.

Figure 6 The typical triangle wave input with the output now reaching very high frequencies without excessively loading V(OUT).

Because the upper and lower switching thresholds of the Schmitt trigger are constant, V(C1) reaches these thresholds faster as V(R2) increases. The more V(R2) approaches the Schmitt trigger hysteresis VHYST, the smaller the V(C1) triangle wave becomes, and the frequency increases.

At V(R2)=VHYST, the frequency would theoretically become infinite. This condition in the original circuit in Figure 1 would mean R1=0, leading to infinitely high I(out). The circuit hack thus allows very high frequencies without excessively loading V(OUT)!

The problem of the steep frequency rise towards infinity at the “end” of the potentiometer still remains. To fix this, we would need a potentiometer that changes its value significantly at the beginning of its range and only slightly at the end. This is easily achieved by wiring a much smaller resistor in parallel with the potentiometer.
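The effect of the parallel resistor on a linear pot’s taper is easy to check numerically. A minimal sketch with illustrative values, not those from the schematic:

def effective_r(alpha, r_pot=100e3, r_par=10e3):
    """Resistance of a linear pot at wiper position alpha (0..1),
    with a much smaller fixed resistor wired in parallel."""
    r = alpha * r_pot
    return r * r_par / (r + r_par) if r > 0 else 0.0

# Most of the change happens early in the pot's travel:
for alpha in (0.1, 0.3, 0.5, 0.7, 1.0):
    print(f"alpha = {alpha:.1f} -> R_eff = {effective_r(alpha)/1e3:6.2f} kohm")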

Fixing steep frequency rise

In Figure 7, we see a second hack: R1 has been given a very large value.

Figure 7 Giving R1 a large value keeps the circuit’s current consumption low, allowing RPOT to be dimensioned independently of R1.

This keeps the circuit’s current consumption low, especially at high frequencies. The square wave voltage at RPOT is now taken directly from V(OUT) via a separate voltage divider. This allows RPOT to be dimensioned independently of R1.

In the example, I used a common 100 kΩ potentiometer. The remaining resistors are effectively in parallel with the potentiometer regarding AC signals and set the desired characteristic curve.

Despite all measures, the frequency increase is still quite steep at the end of the range, so a 1 kΩ trimmer is recommended for practical application to conveniently set the maximum frequency.

Figure 8 shows the frequency curve of the final circuit. Compared to the curve of the original circuit in Figure 4, a significantly flatter curve profile is evident, along with a larger tuning range.

Figure 8 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points, showing a flatter curve profile and larger tuning range.

Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.


Enabling a variable output regulator to produce 0 volts? Caveat, designer!

Wed, 12/17/2025 - 15:00

For some time now, many of EDN’s Design Ideas (DIs) have dealt with ground-referenced, single-power-supplied voltage regulators whose outputs can be configured to produce zero or near-zero volts [1][2].

In this mode of operation, regulation in response to an AC signal is problematic. This is because the regulator output voltage can’t be more negative than zero. For the many regulators with totem pole outputs, at zero volts, we could hope for the ground-side MOSFET to be indefinitely enabled, and the high side disabled. But that’s not a regulator, it’s a switch.

Wow the engineering world with your unique design: Design Ideas Submission Guide

There might be some devices that act this way when asked to produce 0 volts, but in general, the best that could be hoped for is that the output is simply disabled. In such a case, a load that is solely an energy sink would pull the voltage to ground (woe unto any that are energy sources!).

But is it lollipops and butterflies all the way down to and including zero volts? I decided to test one regulator to see how it behaves.

Testing the regulator

A TPS54821EVM-049 evaluation module employs a TPS54821 buck regulator. I’ve configured its PCB for 6.3-V out and connected it to an 8-Ω load. I’ve also connected a function generator through a 22 kΩ resistor to the regulator’s V_SNS (feedback) pin.

The generator is set to produce a 360 mVp-p square wave across the load. It also provides a variable offset voltage, which is used to set the minimum voltage Vmin of the regulator output’s square-wave. Figure 1 contains several screenshots of regulator operation while it’s configured for various values of Vmin.

Figure 1 Oscilloscope screenshots with Vmin set to (a) 400 mV, (b) 300 mV, (c) 200 mV, (d) 100 mV, (e) 30 mV, (f) 0 mV, (g) below 0 mV. See text for further discussion. The scales of each screenshot are 100 mV and 1 ms per large division. An exception is (g), whose timescale is 100 µs per large division.

As can be seen, the output is relatively clean when Vmin is 400 mV, but gets progressively noisier as Vmin is reduced in 100-mV steps down to 100 mV (Figures 1a-1d).

But the real problems start when Vmin is set to about 30 mV and some kind of AC signal replaces what would preferably be a DC one; the regulator switches between open- and closed-loop operation (Figure 1e).

We really get into the swing of things when Vmin is set to 0 mV and intermittent signals of about 150 mVp-p arise and disappear (Figure 1f). As the generator continues to be changed in the direction that would drive the regulator output more negative if it were capable, the amplitude of the regulator’s ringing immediately following the waveform’s falling edge increases (Figure 1g). Additionally, the overshoot of its recovery increases.

Why isn’t it on the datasheet?

This behavior might or might not disturb you. But it exists. And there are no guarantees that things would not be worse with different lots of TPS54821 or other switcher or linear regulator types altogether. These could be operating with different loads, feedback networks, and input voltage supplies with varying DC levels and amounts of noise.

There might be a very good reason that typical datasheets don’t discuss operation with output voltages below their references—it might not be possible to specify an output voltage below which all is guaranteed to work as desired. Or maybe it is.

But if it is, then why aren’t such capabilities mentioned? Where is there an IC manufacturer’s datasheet whose first page does not promise to kiss you and offer you a chocolate before you go to bed? (That is, list every possible feature of a product to induce you to buy it.)

Finding the lowest guaranteed output level

Consider a design whose intent is to allow a regulator to produce a voltage near or at zero. Absent any help from the regulator’s datasheet, I’m not sure I’d know how to go about finding a guaranteed output level below which bad things couldn’t happen.

But suppose this could be done. The “Gold-Plated” [1] DI was updated under this assumption. It provides a link to a spreadsheet that accepts the regulator reference voltage and its tolerance, a minimum allowed output voltage, a desired maximum one, and the tolerance of the resistors to be used in the circuit.

It calculates standard E96 resistor values of a specified precision along with the limits of both the maximum and the minimum output voltage ranges [3].  

“Standard” regulator results

A similar spreadsheet has been created for the more general “standard” regulator circuit in Figure 2. The latter can be found at [4].

Figure 2 The “standard” regulator in which a reference voltage Vext, independent of the regulator, is used in conjunction with Rg2 to drive the regulator output to voltages below its reference voltage. For linear regulators, L1 is replaced with a short.

The spreadsheet [4] was run with the requirements shown in Figure 3.

Figure 3 Sample input requirements for the spreadsheet to calculate the resistor values and minimum and maximum output voltage range limits for a Standard regulator design.

The spreadsheet’s calculated voltage limits are shown in Figure 4.

Figure 4 Spreadsheet calculations of the minimum and maximum output voltage range limits for the requirements of Figure 3.

A Monte Carlo simulation was run 10,000 times. The limits were confirmed to be close to and within the calculated ones (Figure 5).

Figure 5 Monte Carlo simulation results confirming the limits were consistent with the calculated ones.

A visual of the Monte Carlo results is helpful (Figure 6).

Figure 6 A graph of the Monte Carlo minimum output voltage range and the maximum one for the standard regulator. See text.

The minimum range is larger than the maximum range. This is because two large signals with tolerances are being subtracted to produce relatively small ones. The signals’ nominal values interfere destructively as intended. Unfortunately, the variations due to the tolerances of the two references do not:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 ) – Vext · PWM · Rf/Rg2

“Gold-Plated” regulator results

When I released the “Gold-Plated” DI whose basic concept is seen in Figure 7, I did so as a lark. But after applying the aforementioned “standard” regulator’s design criteria to the Gold-Plated design’s spreadsheet [3], it became apparent that the Gold-Plated design has a real value—its ability to more greatly constrain the limits of the minimum output voltage range.

Figure 7 The basic concept of the Gold-Plated regulator. K = 1 + R3/R4.

The input to the Gold-Plated spreadsheet is shown in Figure 8.

Figure 8 The inputs to the Gold-Plated spreadsheet.

Its calculations of the minimum and maximum output voltage range limits are shown in Figure 9.

Figure 9 The results for the “Gold-Plated” spreadsheet showing maximum and minimum voltage range limits when PWM inputs are at minimum and maximum duty cycles.

The limits resulting from its 10,000-run Monte Carlo simulation were again confirmed to be close to and within those calculated by the spreadsheet:

Figure 10 Monte Carlo simulation results of the Gold-Plated spreadsheet, confirming the limits were consistent with the calculated ones.

Again, a visual is helpful, with the Gold-Plated results on the left and the Standard on the right.

 Figure 11 Graphs of the Monte Carlo simulation results of the Gold-Plated (left) and Standard (right) designs. The minimum voltage range of the Gold-Plated design is far smaller than that of the Standard.

The Standard regulator’s minimum range magnitude is 161 mV, while that of the Gold-Plated version is only 33 mV. The Gold-Plated’s advantage will increase as the desired Vmin approaches 0 volts. Its benefits are due to the fact that only a single reference is involved in the subtraction of terms:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 · PWM · ( 1 – K ) )
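To see why the single-reference topology tightens the minimum-voltage spread, here is a rough Monte Carlo sketch of the two output equations. All component values, reference voltages, and tolerances below are illustrative stand-ins, not the spreadsheet inputs from Figure 3 or Figure 8:

import random

def tol(nom, pct):
    """Random value within a flat +/-pct tolerance band."""
    return nom * (1 + random.uniform(-pct, pct) / 100)

def spread(formula, runs=10000):
    """Peak-to-peak spread of a formula's output over many random draws."""
    samples = [formula() for _ in range(runs)]
    return max(samples) - min(samples)

# Illustrative values only; PWM at maximum duty cycle gives Vmin.
RF, RG1, RG2, K, PWM = 10e3, 2e3, 4e3, 1.2, 1.0

def standard():
    vref, vext = tol(0.6, 1.0), tol(0.6, 1.0)  # two independent references
    rf, rg1, rg2 = tol(RF, 0.1), tol(RG1, 0.1), tol(RG2, 0.1)
    return vref * (1 + rf/rg1 + rf/rg2) - vext * PWM * rf/rg2

def gold_plated():
    vref = tol(0.6, 1.0)                       # a single reference
    rf, rg1, rg2 = tol(RF, 0.1), tol(RG1, 0.1), tol(RG2, 0.1)
    return vref * (1 + rf/rg1 + rf/rg2 * PWM * (1 - K))

print("standard spread:   ", round(spread(standard), 4))
print("gold-plated spread:", round(spread(gold_plated), 4))

With these stand-in numbers, the standard topology’s spread comes out roughly twice that of the Gold-Plated one, echoing the 161-mV versus 33-mV comparison above.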

Belatedly, another advantage of the Gold-Plated was discovered: When a load is applied to any regulator, its output voltage falls by a small amount, causing a reduction of ΔV at the Vref feedback pin.

In the Gold-Plated, there is an even larger reduction at the output of its op-amp because of its gain. The result is a reduced drop across Rg2. This acts to increase the output voltage, improving load regulation.

In contrast, while the Standard regulator also sees a ΔV drop at the feedback pin, the external regulator voltage remains steady. The result is an increase in the drop across Rg2, further reducing the output voltage and degrading load regulation.

Summing up

The benefits of the Gold-Plated design are clear, but it’s not a panacea. Whether a Gold-Plated or Standard design is used, designers still must address the question: How low should you go? Caveat, designer!

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content/References

  1. Gold-Plated PWM-control of linear and switching regulators
  2. Accuracy loss from PWM sub-Vsense regulator programming
  3. Gold-Plated DI Github
  4. Enabling a variable output regulator to produce 0 volts DI Github

The post Enabling a variable output regulator to produce 0 volts? Caveat, designer! appeared first on EDN.

Why memory swizzling is a hidden tax on AI compute

Wed, 12/17/2025 - 13:47

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.

What matters most is how efficiently data moves. And for most of today’s AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles, that is, memory swizzling.

Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.

This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.

The problem nobody talks about: Data isn’t stored the way hardware needs it

In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.

The hardware doesn’t see the world this way.

Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect specific shapes of data: tiles of fixed size, strict alignment boundaries, sequences with predictable stride patterns, and arranged in ways that map into memory banks without conflicts.

Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.

You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.

Why hierarchical memory forces us to swizzle

Virtually, every accelerator today uses a hierarchical memory stack whose layers, from the top-down, encompass registers; shared or scratchpad memory; L1 cache, L2 cache, sometimes even L3 cache, high-bandwidth memory (HBM), and, at the bottom of the stack, the external dynamic random-access memory (DRAM).

Each level has different size, latency, bandwidth, access energy consumption, and, importantly, alignment constraints. This is a legacy of CPU-style architecture where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 See the capacity and bandwidth attributes of a typical hierarchical memory stack in all current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in all current hardware processors are shown here. Source: VSORA

GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.

TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.

NPU-based accelerators, from smartphone chips to automotive systems, face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives “misaligned” for the compute engine, and performance nosedives.

In all these cases, swizzling is not an optimization—it’s a survival mechanism.

The hidden costs of swizzling

Swizzling takes time, sometimes a lot

In real workloads, swizzling often consumes 20–60% of the total runtime. That’s not a typo. In a convolutional neural network, half the time may be spent doing NHWC ↔ NCHW conversions; that is, converting between two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted on reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.
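To make the layout shuffle concrete, here’s a small NumPy sketch of an NHWC-to-NCHW conversion; the forced copy at the end is the swizzling cost in miniature:

import numpy as np

# A batch of 8 images, 224x224 pixels, 3 channels, in NHWC layout.
x_nhwc = np.random.rand(8, 224, 224, 3).astype(np.float32)

# Reordering to NCHW is at first a metadata-only view...
x_nchw = x_nhwc.transpose(0, 3, 1, 2)

# ...until a kernel needs contiguous memory, which forces a full copy:
x_packed = np.ascontiguousarray(x_nchw)
print(x_nchw.flags["C_CONTIGUOUS"], x_packed.flags["C_CONTIGUOUS"])  # False True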

Swizzling burns energy and energy is the real limiter

A single multiply-accumulate (MAC) operation consumes roughly a quarter of a picojoule, while fetching a value from DRAM can cost around 500 picojoules, roughly three orders of magnitude more energy than the arithmetic itself.

Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.

Swizzling inflates memory usage

Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.

Swizzling makes software harder and less portable

Ask a CUDA engineer what keeps him up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes an NPU kernel for mobile why they dread channel permutations.

It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.

The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.

How major architectures became dependent on swizzling

  1. Nvidia GPUs

Tensor cores require specific tile-major layouts. Shared memory is banked, avoiding conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.

  2. Google TPUs

TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.

  3. AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI engine

Every one of these architectures performs swizzling: Morton tiling, interleaving, channel stacking, and so on. It’s a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.

A different philosophy: Eliminating swizzling at the root

Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.

What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?

This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.

That means:

  • No caches to warm up or miss
  • No warps to schedule
  • No bank conflicts to avoid
  • No tile sizes to match
  • No tensor layouts to respect
  • No sensitivity to shapes or strides, and therefore no swizzling at all

In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.

The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.

The future of AI: Why a register-centric architecture matters

As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets, from cars to edge devices.

In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. The future workloads are simply not shaped the way the hardware wants them to be.

A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.

It doesn’t stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.

This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.

Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.

A register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.

As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.

Swizzling was a necessary patch for the last era of hardware. It should not define the next one.

Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.

Ignoring the regulator’s reference

Tue, 12/16/2025 - 15:00

DAC control (via PWM or other source) of regulators is a popular design topic here in editor Aalyia Shaukat’s Design Ideas (DIs) corner. There’s even a special aspect of this subject that frequently provokes enthusiastic and controversial (even contentious) exchanges of opinion.

It’s the many and varied possibilities for integrating the regulator’s internal voltage reference into the design. The discussion tends to be extra energetic (and the resulting circuitry complex) when the design includes generating output voltages lower than the regulator’s internal reference.

Wow the engineering world with your unique design: Design Ideas Submission Guide

What can be done to make the discussion less complicated (and heated)?

An old rule of thumb suggests that when one facet of a problem makes the solution complex, sometimes a simple (and better) solution can be found by just ignoring that facet. So, I decided, just for fun, to give it a try with the regulator reference problem. Figure 1 shows the result.

Figure 1 DAC control of a regulator while ignoring its internal voltage reference where Vo = Vomax*(Vdac/Vdacmax). *±0.1%

Figure 1’s simple theory of operation revolves around the A1 differential amplifier.

Vo = Vomax(Vdac/Vdacmax)
If Vdacmax >= Vomax then R1a = R5/((Vdacmax/Vomax) – 1), omit R1b
If Vomax >= Vdacmax then R1b = R2/((Vomax/Vdacmax) – 1), omit R1a
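The resistor selection above reduces to a couple of conditionals. A minimal Python sketch (R2 and R5 values are whatever the chosen design uses):

def select_r1(vdacmax, vomax, r2, r5):
    """Pick R1a or R1b per the rules above; returns (name, ohms)."""
    if vdacmax == vomax:
        return None  # degenerate 1:1 case; both formulas blow up
    if vdacmax > vomax:
        return "R1a", r5 / (vdacmax / vomax - 1)  # omit R1b
    return "R1b", r2 / (vomax / vdacmax - 1)      # omit R1a

# Example: 5-V full-scale DAC, 3.3-V maximum output, 10-kohm R2 and R5.
print(select_r1(5.0, 3.3, 10e3, 10e3))  # ('R1a', ~19.4 kohm)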

A1 subtracts suitably attenuated versions of the control input signal (Vdac) from U1’s output voltage (Vo) and integrates the difference via the R3C3 feedback pair. The resulting negative feedback supplied to U1’s Vsense pin is independent of the Vsense voltage and is therefore independent of U1’s internal reference.

With the contribution of accuracy (and inaccuracy) from U1’s internal reference thus removed, the problem of integrating it into the design is therefore likewise removed. 

It turns out that the potential for really good precision actually improves by ignoring the regulator reference, because internal references are seldom better than 1% anyway.

With the Figure 1 circuit, accuracy is ultimately limited only by the DAC, and very high precision DACs can be assembled at reasonable cost. For an example, see “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”

Another nice feature is that Figure 1 leaks no pesky bias current into the feedback network. This bias is typically scores of microamps and can prevent the output from getting any closer than tens of millivolts to a true zero when the output load is light. No such problem exists here, unless picoamps count (hint: they don’t).

And did I mention it’s simple? 

Oh yeah. About R6: depending on the voltage supplied to A1’s pin 8 and the abs-max rating of U1’s Vsense pin, the possibility of an overvoltage might exist. If so, adjust the R4R6 ratio to prevent it. Otherwise, omit R6.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Expanding power delivery in systems with USB PD 3.1

Tue, 12/16/2025 - 15:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors carrying power, data, and video. This capability is known as Extended Power Range (EPR), part of USB Power Delivery Specification 3.1 (USB PD 3.1), introduced by the USB Implementers Forum. EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.
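
The EPR arithmetic is easy to verify (a trivial sketch):

```python
# USB PD 3.1 EPR fixed voltages at the 5-A cable limit
for volts in (28, 36, 48):
    print(f"{volts} V x 5 A = {volts * 5} W")
# -> 140 W, 180 W, 240 W
```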

USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility by meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions including legacy at 15 W (5 V/3 A) and the standard power range mode of below 100 W (20 V/5 A).

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.

The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A USB breakout board), together with the GUI, allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC running Microsoft Windows 7 through 11 with a USB 2.0 port. The GUI displays parameters, board status, and faults, and enables user configuration.

DCP board components

With its two ports, the board contains two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled, four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)

Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip or USB hub. The MCP22301 is an integrated PD device with the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+ with an added MCP22350 PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding from faults but also protecting the integrity of the charging process and the data transfer (Figure 4).

Traditionally, a USB Type-C connector embeds the D+/D– data lines (USB 2.0), Rx/Tx lines for USB 3.x or USB4, configuration channel (CC) lines for charge-mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator, providing a 5-V rail, and an MCP1825 LDO linear regulator, providing a 3.3-V auxiliary rail.

Board operation

The MCP19061 DCP board shows how the MCP19061 operates in a four-switch buck-boost topology to supply USB loads and charge them at their required voltage within a permitted range, regardless of the input voltage. It is configured to regulate output voltage and current independently for each USB channel (each channel’s individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to USB-C Port 1 (on the right side of the PCB, the upper of the two ports), USB communication starts and the GUI displays the charging profiles under the Port 1 window.

If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The MCP19061 enables efficient power conversion, operating in buck (step-down), boost (step-up), and buck-boost modes to produce output voltages lower than, higher than, or equal to the input voltage. It provides excellent precision and efficiency in power conversion for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
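
For intuition, the ideal steady-state relations behind those operating regions can be sketched in a few lines of Python (textbook, lossless continuous-conduction formulas only; the example input and output voltages are illustrative, and this is not the MCP19061’s actual control law):

```python
def ideal_duty(v_in, v_out):
    """Ideal steady-state duty cycle for a four-switch buck-boost converter.
    Textbook relations only; a real controller adds compensation and
    smooth transitions between regions."""
    if v_out < v_in:
        return "buck", v_out / v_in          # D = Vout / Vin
    if v_out > v_in:
        return "boost", 1.0 - v_in / v_out   # D = 1 - Vin / Vout
    return "pass-through", 1.0               # Vin == Vout

for v_out in (5.0, 12.0, 20.0):              # common USB PD fixed voltages
    mode, duty = ideal_duty(12.0, v_out)
    print(f"Vin = 12 V, Vout = {v_out:g} V: {mode}, D = {duty:.2f}")
```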

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.


Troubleshooting often involves conflicting symptoms and scenarios

Tue, 12/16/2025 - 11:54

I’ve always regarded debugging and troubleshooting as the most challenging of all hands-on engineering skills. They’re not formally taught; they’re usually learned through hands-on experience (often the hard way), and almost every case is different. And there’s a long list of reasons why debugging and troubleshooting are often so difficult.

In some cases, there’s the “aha” moment when the problem is clearly identified and knocked down, but in many other cases, you are “pretty sure” you’ve got the problem but not completely so.

Note that I distinguish between debugging and troubleshooting. The former is when you are working on a breadboard or prototype that is not working and perhaps has never fully worked; it’s in the design phase. The latter is when a tested, solid product with some track record and field exposure misbehaves or fails in use. Each has its own starting points and constraints, but the terms are used interchangeably by many people.

Every engineer or technician has his or her own horror story of an especially challenging situation. It’s particularly frustrating when there is no direct, consistent one-to-one link between observed symptoms and root cause(s). There are multiple cause/effect scenarios:

  • Clarity: The single-problem, single-effect situation—generally, the easiest to deal with.
  • Causality: A chain of problems, where one problem (often not directly visible) triggers a second, more visible one.
  • Correlation: Two apparent problems with one common cause—or maybe the observed symptoms are unrelated? It’s also easy to assume that correlation implies causality, but that is often not the case.
  • Coincidence: Two apparent problems that appear linked but really have no link at all.
  • Confusion: A problem with contradictory explanations, where each explanation addresses one aspect but does not explain the others.
  • Inconsistency: The problem is intermittent, with no consistent set of circumstances that cause it to occur.

My recent dilemma

Whatever the cause(s) of faults, the most frustrating situation for engineers is when the problem is presumably fixed, but no clear cause (or causes) is found. This happened to me recently with my home heating system, which heats water for domestic use and for radiator heating. It has one small pump sending heated water to a storage tank and a second small pump sending it to the radiators; the two pumps never run at the same time.

One morning, I saw that we lost heat and hot water, so I checked the system (just four years old) and saw that the service-panel circuit breaker with a dedicated line had tripped.

A tripped breaker is generally bad news. My first thought was that perhaps there had been some AC-line glitch during the night, but all other sensitive systems in the house—PCs, network interfaces, and plug-in digital clocks—were fine. Perhaps some solar flare or cosmic particles had targeted just this one AC feed? Very unlikely. I reset the breaker and the system ran for about an hour, then the breaker tripped again.

I called the service team that had installed the system; they came over and were also mystified. The small diagnostic panel display on the system said all was fine. They noted that my thermostat was a 50-year-old mechanical unit, similar to the classic 1953 round Honeywell unit, designed by Henry Dreyfuss and now on permanent display at the Cooper Hewitt, Smithsonian Design Museum in New York (Figure 1). These two-wire units, with their bimetallic strip and glass-enclosed mercury-wetted switch, are extremely reliable; millions are still in use after many decades.

 

Figure 1 You have to start somewhere: The first step was to take out a possible but unlikely source of the problem. So, the mercury-wetted bimetallic-strip thermostat (above), similar to the classic Honeywell unit, was replaced with a simple PRO1 T701 electronic equivalent (below). Sources: Cooper Hewitt Museum

While failure of these units is rare, technicians suggested replacing it “just in case.” I said, sure, “why not?” and replaced it with a simple, non-programmable, non-connected electronic unit that emulates the functions of the mechanical/mercury one.

But we knew that was very unlikely to be the actual problem, and the repair technicians could not envision any scenario where a thermostat—on a 24-V AC loop with a contact closure that energizes a mechanical or solid-state relay to call for heat—could induce a circuit breaker to trip. Maybe the original thermostat’s contacts were “chattering” excessively, inducing the motor to cycle on and off rapidly? Even so, that shouldn’t trip a breaker.

Once again, the system ran for about an hour and then the breaker tripped. The techs spent some time adjusting the system’s hot-water and heating-water pumps; each has a small knob that selects among various operating modes.

Long story short: the “fixed” system has been working fine for several weeks. But…and it’s a big “but”…they never did actually find a reason for the circuit-breaker tripping. Even if the pumps were not at their optimum settings, that should not cause an AC-line breaker to trip. And why would the system run for several years without a problem?

What does it all mean?

From an engineering perspective, that’s the most frustrating outcome. Now, even though the system is running, it still has me in that “somewhat worried” mental zone. A problem that should not have occurred did occur several times, but now it has gone away for no confirmed reason.

There’s not much that can be done to deal with non-reproducible problems such as this one. Do I need an AC-line monitor, as perhaps that’s the root cause? What sort of other long-term monitoring instrumentation is available for this heating system? How long would you have it “baby-sit” the system?

Perhaps there was an intermittent short circuit in the system’s internal AC wiring that caused the breaker to trip, and the act of opening the system enclosure and moving things around made the intermittent go away? We can only speculate.

Right now, I’m trying to put this frustrating dilemma out of my mind, but it’s not easy. Online troubleshooting guides are useless, as they have generic flowcharts asking, “Is the power on?” “Are the cables and connectors plugged in and solid?”

Perhaps I’ll instead re-read the excellent book “Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems” by David J. Agans (Figure 2), although my ability and desire to poke, probe, and swap parts of a home heating system are close to zero.

Figure 2 This book on systematic debugging of electronic designs and products (and software) has many structured and relevant tactics for both beginners and experienced professionals. Source: Digital Library—Association for Computing Machinery

Or perhaps the system just wanted some personal, hands-on attention after four years of faithful service alone in the basement.

Have you ever had a frustrating failure where you poked, pushed, checked, measured, swapped parts, and did more, with the problem eventually going away—yet you really have no idea what the problem was? How did you handle it? Did you accept it and move on or pursue the mystery further?


The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit

Mon, 12/15/2025 - 15:00

Sometimes, when an audio component dies, the root cause is something electrical. Other times, the issue instead ends up being fundamentally mechanical.

Delta-sigma modulation-based digital-to-analog conversion (DAC) circuitry dominates the modern high-volume audio market by virtue of its ability (among other factors) to harness the high sample rate potential of modern fast-switching and otherwise enhanced semiconductor processes. Quoting from Wikipedia’s introduction to the as-usual informative entry on the topic (which, as you’ll soon see, also encompasses analog-to-digital converters, i.e., ADCs):

Delta-sigma modulation is an oversampling method for encoding signals into low bit depth digital signals at a very high sample-frequency as part of the process of delta-sigma analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Delta-sigma modulation achieves high quality by utilizing a negative feedback loop during quantization to the lower bit depth that continuously corrects quantization errors and moves quantization noise to higher frequencies well above the original signal’s bandwidth. Subsequent low-pass filtering for demodulation easily removes this high frequency noise and time averages to achieve high accuracy in amplitude, which can be ultimately encoded as pulse-code modulation (PCM).

 Both ADCs and DACs can employ delta-sigma modulation. A delta-sigma ADC encodes an analog signal using high-frequency delta-sigma modulation and then applies a digital filter to demodulate it to a high-bit digital output at a lower sampling-frequency. A delta-sigma DAC encodes a high-resolution digital input signal into a lower-resolution but higher sample-frequency signal that may then be mapped to voltages and smoothed with an analog filter for demodulation. In both cases, the temporary use of a low bit depth signal at a higher sampling frequency simplifies circuit design and takes advantage of the efficiency and high accuracy in time of digital electronics.

Primarily because of its cost efficiency and reduced circuit complexity, this technique has found increasing use in modern electronic components such as DACs, ADCs, frequency synthesizers, switched-mode power supplies and motor controllers. The coarsely-quantized output of a delta-sigma ADC is occasionally used directly in signal processing or as a representation for signal storage (e.g., Super Audio CD stores the raw output of a 1-bit delta-sigma modulator).
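
To make the noise-shaping idea concrete, here is a toy first-order, one-bit delta-sigma modulator in Python. It is an illustrative sketch of the principle only, not a model of any particular converter; the sine input, oversampling ratio, and ±1 quantizer levels are arbitrary choices:

```python
import math

def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator: each input sample (in [-1, 1])
    becomes a single +/-1 bit. The integrator accumulates the quantization
    error, pushing its spectrum to high frequencies (noise shaping)."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback               # error vs. the 1-bit DAC output
        bit = 1.0 if integrator >= 0.0 else -1.0
        bits.append(bit)
        feedback = bit                           # feed the quantized value back
    return bits

# A slow sine, heavily oversampled: the bitstream's local average tracks it.
OSR, N = 64, 64                                  # oversampling ratio, output points
sig = [0.5 * math.sin(2 * math.pi * n / (OSR * N)) for n in range(OSR * N)]
bits = delta_sigma_1bit(sig)
# Crude "low-pass filter": average each block of OSR bits.
recon = [sum(bits[i:i + OSR]) / OSR for i in range(0, len(bits), OSR)]
print(max(abs(r - s) for r, s in zip(recon, sig[::OSR])))  # small residual error
```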

Oversampled interpolation vs quantization noise shaping

That said, plenty of audio purists object to the inherent interpolation involved in the delta-sigma oversampling process. Take, for example, this excerpt from the press release announcing Schiit’s $249 first-generation Modi Multibit DAC, today’s teardown patient, back in mid-2016:

Multibit DACs differ from the vast majority of DACs in that they use true 16-20 bit D/A converters [editor note: also known as resistor ladder, specifically R-2R, D/A converters] that can reproduce the exact level of every digital audio sample. Most DACs use inexpensive delta-sigma technology with a bit depth of only 1-5 bits to approximate the level of every digital audio sample, based on the values of the samples that precede and follow it.

Here’s more on the Modi Multibit 1 bill of materials, from the manufacturer:

Modi Multibit is built on Schiit’s proprietary multibit DAC architecture, featuring Schiit’s unique closed-form digital filter on an Analog Devices SHARC DSP processor. For D/A conversion, it uses a medical/military grade, true multibit converter specified down to 1/2LSB linearity, the Analog Devices AD5547CRUZ.

However, plenty of other audio purists object to the seemingly inferior lab-test results for multibit DACs versus delta-sigma alternatives (including those from Schiit itself), particularly given the notably higher prices of multibit offerings. Those same detractors, exemplifying one end of the “objectivist” vs “subjectivist” opinion spectrum, would likely find it appropriate that in the “Schiit stack” whose photo I first shared a few months ago (and which I’ll discuss in detail in another post to come shortly):

I coupled a first-generation Modi Multibit (bottom) with a Vali 2++ tube-based headphone amplifier (top), both Schiit devices delivering either “enhanced musicality” (if you’re a subjectivist) or “desultory distortion” (for objectivists). For what it’s worth, I don’t consistently align with either camp; I was just curious to audition the gear and compare the results against more traditional delta-sigma DAC and discrete- or op amp-based amplifier alternatives!

A sideways wiggle did the trick

That said, I almost didn’t succeed in getting the Modi Multibit into the setup at all. My wife had bought it for me off eBay as a Valentine’s Day gift in claimed gently-used condition back in late January; it took me a few more months to get around to pressing it into service. Cosmetically, it indeed looked nearly brand new. But when I flipped the backside power switch…nothing. This in spite of the fact that the AC transformer feeding the device still seemed to be functioning fine:

The Modi Multibit was beyond the return-for-refund point, and although the seller told me it had been working fine when originally shipped to us, I resigned myself to the seemingly likely reality that it’d end up being nothing more than a teardown candidate. But after subsequently disassembling it, I found nothing scorched or otherwise obviously fried inside. So, on a hunch and after snapping a bunch of dissection photos and then putting it back together and reaching out to the manufacturer to see if it was still out-of-warranty repairable (it was), I plugged it back into the AC transformer and wiggled the power switch back and forth sideways. Bingo; it fired right up! I’m now leaving the power switch in the permanently “on” position and managing AC control to it and other devices in the setup via a separately switchable multi-plug mini-strip:

What follows are the photos I’d snapped when I originally took the Modi Multibit apart, starting with some outside-chassis shots and as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front first; that round button, when pressed, transitions between the USB, optical S/PDIF, and RCA (“coaxial”) S/PDIF inputs you’ll see shortly, selectively illuminating the associated front panel LED-of-three to the right of the button at the same time:

Left side:

Back, left-to-right are the:

  • Unbalanced right and left channel analog audio outputs
  • RCA (“coaxial”) digital S/PDIF input
  • Optical digital S/PDIF input
  • USB digital input
  • The aforementioned flaky power switch, and
  • 16V AC input

Transformers vs voltage converters

Before continuing, a few words about the “wall wart”. It’s not, if you haven’t yet caught this nuance, a typical full AC-to-DC converter. Instead, it steps down the premises AC voltage to either 16V (which is actually, you may have noticed from the earlier multimeter photo, more like 20V unloaded) for the Modi Multibit or 24V for the Vali 2++, with the remainder of the power supply circuitry located within the audio component itself:

Fortunately, since the 16V and 24V transformer output plugs are dissimilar, there’s no chance you’ll inadvertently blow up the Modi Multibit by plugging the Vali 2++ wall wart into it!

Onward, right side:

Bottom:

And last but not least, the top, including the distinctive “Multibit” logo, perhaps obviously not also found on delta-sigma-implementing Schiit DACs:

Let’s start here, with those four screw heads you see, one in each corner:

With them removed, the aluminum top piece pops right off:

Next up, the two screw heads on the back panel:

And finally, the three at the steel bottom plate:

At this point, the PCB slides out, although you need to be a bit careful in doing so to prevent the steel frame’s top bracket from colliding with tall components along the PCB’s left edge:

Firmware fixes

Here’s a close-up of the PCB topside’s left half:

AC-to-DC conversion circuitry dominates the far left portion of the PCB. The large IC at the center is C-Media Electronics’ CM6631A (PDF) USB 2.0 high-speed true HD audio processor. Below it is the associated firmware chip, with an “MD218” sticker on top. The original firmware, absent the sticker, had a minor (and effectively inaudible, but you know how picky audiophiles can be) MSB zero-crossing glitch artifact that Schiit subsequently fixed, also sending replacement firmware chips to existing device owners (or alternatively updating them in-house for non-DIYers).

And here’s the PCB topside’s right half:

Now for the other side:

In the bottom left quadrant are two On Semiconductor MC74HC595A (PDF) 8-bit serial-input/serial or parallel-output shift registers with latched 3-state outputs. Above them is the aforementioned “resistor ladder DAC”, Analog Devices’ AD5547. Above it and to either side are a pair of Analog Devices AD8512A (PDF) dual precision JFET amplifiers. And above them is STMicroelectronics’ TL082ID dual op amp.

Shift your eyes to the right, and you’ll not be able to miss the largest IC on this side of the PCB. It’s the Analog Devices ADSP-21478 SHARC DSP, also called out previously in Schiit’s press release. Above it is an AKM Semiconductor AK4113 six-channel 24-bit stereo digital audio receiver chip for the Modi Multibit’s two S/PDIF inputs. And on the other side…

is an SST (now Microchip Technology) 39LF010 1-Mbit parallel flash memory, presumably housing the SHARC DSP firmware.

Wrapping up, here are some horizontal perspectives of the front, back, and left-and-right sides:

And that’s “all” I’ve got for you today! In the future, I plan to compare the first-generation Modi Multibit against its second-generation successor, which switches to a Schiit-developed USB interface (branded Unison and based on a Microchip Technology controller) and adds a NOS (non-oversampling) mode option, and to stack it up against several of Schiit’s delta-sigma DAC counterparts. Until then, let me know your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


Building automotive data logging with F-RAM flash combo

Mon, 12/15/2025 - 14:16

Advances in the automotive industry continue to make cars safer, more efficient, and more reliable than ever. As motor vehicles become more advanced, so do the silicon components that serve as the backbone of their advanced features. Case in point: data logging systems, which have become increasingly prevalent and increasingly mandated.

In particular, the event data recorder (EDR) and the data storage system for automated driving (DSSAD) have drawn significant attention due to recent worldwide legislation. While both systems provide safe and reliable storage of driving data, there are a few key distinctions (Table 1).

Table 1 Here is a comparison between EDR and DSSAD data loggers. Source: Infineon

As regulations governing automotive data logging evolve, so do the specifications for the associated memory that stores the data. For instance, in the United States, these storage requirements were recently revised to “extend the EDR recording period for timed data metrics from 5 seconds of pre-crash data at a frequency of 2 Hz to 20 seconds of pre-crash data at a frequency of 10 Hz”. These requirements take effect on September 1, 2027, for most manufacturers, with a few exceptions for altered vehicles and small-volume production lines.

Regulations such as these are not unique to the United States but rather echoed worldwide. Recently, the United Nations Economic Commission for Europe (UNECE) has sought to standardize automotive data logging requirements across its constituents with key pieces of regulation. These regulations provide guidelines for EDR in passenger vehicles, heavy-duty vehicles, and DSSAD as it pertains to Automated Lane Keeping Systems (ALKS). As these regulations grow and are adopted by constituents, the demand for hardware storage systems becomes paramount in the automotive industry.

Data storage requirements

The National Highway Traffic Safety Administration (NHTSA) in the United States gives insight into existing EDR solutions, describing them as having “a random-access memory (RAM) buffer the size of one EDR record to locally store data before the data is written to memory. The data is typically stored using electrically erasable programmable read-only memory (EEPROM) or data flash memory”.

This document also provides an overview of concerns and industry feedback regarding the requirement changes. The feedback indicated that “while the proposed 20 seconds of pre-crash data could be recorded by EDRs, some EDRs may require significant hardware and software changes to meet these demands”.

On the other hand, DSSAD requires storage of all events over a set period. While the previously referenced NHTSA document applies only to EDR, a similar solution could fulfill these requirements by buffering incoming signals before transferring them to long-term storage in a non-volatile memory.

Given the strain on current systems from growth in requirements, the quest for an optimized solution becomes pertinent. All things considered, the ideal system must provide power-loss robustness for buffered data and enough space for long-term storage. With these requirements in mind, this article will discuss how using F-RAM and NOR flash together meets the modern challenges of data logging.

Flash F-RAM combo

Ferroelectric random-access memory (F-RAM) stores information in a ferroelectric capacitor. The dipoles within this material—oriented based on the direction of applied charge—maintain their orientation after power is no longer applied. This type of memory is characterized by fast write speeds and high endurance (~10¹⁴ cycles).

These characteristics give F-RAM a unique advantage over other non-volatile memory technologies. However, densities for F-RAM are low, ranging from a few kilobits to tens of megabits, limiting its scope for high density applications.

NOR flash is another type of memory, which uses a MOSFET to store electric charge within a non-metallic region of the transistor’s gate. This memory is typically more complex in its operation than F-RAM—for instance, it requires an erase operation—and may offer additional features such as password protection or a one-time-programmable secure silicon region. NOR flash offers small-granular random access for reads but requires large-granular access for writes.

Multiple bits must be erased simultaneously and then programmed in order to “write” to the device. Thus, timing for device operations is generally slower than F-RAM’s, and endurance is comparatively limited (~10⁶ cycles). However, NOR flash holds the advantage of larger storage sizes, ranging up to a few gigabits.

This article will showcase how F-RAM + NOR flash compares to RAM + NOR flash as a solution for EDR and DSSAD. For this analysis, data was logged continuously in a ring buffer within the front-end device. During event triggers for EDR, or when the buffer filled for DSSAD, the information was transferred to the back-end device, where the data was held in long-term storage (Figure 1).

Figure 1 The block diagram shows front-end and back-end storage in a logging operation. Source: Infineon

To evaluate performance, Table 2 below uses requirements from both EDR and DSSAD for the comparison. These specifications for EDR and DSSAD were based on the UN regulations for heavy duty vehicles and for ALKS DSSAD requirements, respectively. To discuss how these systems work, it’s important to review the specifications from these documents and highlight the assumptions made by the comparison.

Table 2 The performance comparison between EDR and DSSAD is conducted across multiple technologies. Source: Infineon

For logging systems that use EDR, data elements are required to be logged continuously and transferred to long term storage solely in the case of an event, such as a car accident. A ring buffer in the front-end device accomplishes this effectively. To determine the size of this buffer, the expected EDR data rate is needed alongside the storage time requirements.

From the legislation, required parameters were used as part of the calculation. The relevant time interval (commonly 20 seconds of pre-crash data and 10 seconds of post-crash data) and logging frequency (4 Hz, 10 Hz, or single instance) were also utilized. One important assumption was a fixed EDR data packet size of 12 bytes.

The UNECE document has no requirement for using a set number of bytes for storage or for storing any information outside of the parameter data. This comparison assumes an 8-byte time stamp of metadata will be included alongside an estimated 3 bytes for parameter data and 1 byte for parameter identification. Using the previous calculations and assumptions, the expected buffer size was calculated to be 790 Kbits.

In the case of the event, the entire buffer would be transferred to the back-end storage. It is required that “the EDR non-volatile memory buffer shall accommodate the data related to at least five different events”. Thus, five events worth of storage were allocated, resulting in a back-end EDR buffer size of 3.95 Mbits.

On the other hand, DSSAD requires all data elements to be stored rather than a set amount of data within an event window. Therefore, a relatively small buffer can be used on the front end and migrated to the back-end device once filled. It was assumed that the buffer must be large enough to store all events arriving while a sector erase is in progress, since the erase must complete before data can be transferred to the NOR flash.

For this analysis, the maximum DSSAD rate is assumed to be 10 events/second. Furthermore, each packet was estimated to be a fixed size of 25 bytes. This would include the time and date stamp (estimated as 8 bytes) and parameter data (estimated as 1 byte) which are required in the regulation, alongside the GPS coordinates (estimated as 16 bytes), which are not required but are included as metadata in this analysis. This results in a front-end DSSAD buffer size of 5.3 Kbits.

Meanwhile, for the back-end device, an assumption of 6 months of DSSAD data storage was implemented. The size of the back-end buffer is determined by the average expected DSSAD rate multiplied by the 6-month period. Using an estimated DSSAD rate of 4 events/minute and the fixed 25-byte packet size, the back-end buffer was calculated to be 210 Mbits.
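
Those figures are straightforward to reproduce. The sketch below recomputes them from the stated assumptions; the 2.68-s figure is the 256-Mb SEMPER NOR sector-erase time cited in the power-loss discussion below:

```python
BITS_PER_BYTE = 8

# DSSAD packet: 8-byte timestamp + 1-byte parameter + 16-byte GPS metadata
pkt_bits = 25 * BITS_PER_BYTE

# Front-end buffer: must absorb peak-rate events for as long as a NOR
# sector erase can block the hand-off to long-term storage.
peak_rate_eps = 10                      # assumed maximum DSSAD events/second
erase_s = 2.68                          # 256-Mb SEMPER NOR sector-erase time
front_end_bits = peak_rate_eps * erase_s * pkt_bits
print(f"DSSAD front end: {front_end_bits / 1e3:.1f} kbit")   # ~5.4 kbit (article: ~5.3 Kbits, rounding)

# Back-end buffer: six months of storage at the average event rate.
avg_rate_eps = 4 / 60                   # 4 events/minute
six_months_s = 182.5 * 24 * 3600
back_end_bits = avg_rate_eps * six_months_s * pkt_bits
print(f"DSSAD back end: {back_end_bits / 1e6:.0f} Mbit")     # 210 Mbit

# EDR back end: the regulation requires room for at least five events.
edr_event_bits = 790e3                  # front-end EDR buffer from the text
print(f"EDR back end: {5 * edr_event_bits / 1e6:.2f} Mbit")  # 3.95 Mbit
```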

Memory endurance characteristics

The sum of the buffer sizes determined the required densities for each of the components in Table 2. For this analysis, the F-RAM and NOR flash endurance characteristics were demonstrated by Infineon’s SEMPER NOR flash and EXCELON F-RAM devices. Endurance was assumed to be infinite for the front-end SRAM device, but data packets are considered lost during a power failure as this is a volatile memory.

This comparison model attempted to find the smallest density that could meet a life-expectancy endurance requirement of 20 years. The results of this comparison are shown in Table 2.

As shown in the table, the critical advantage of the F-RAM + NOR flash solution is in the case of a power loss. The worst-case scenario is a crash in which information buffered in volatile RAM, absent a back-up battery, is lost, taking significant vehicle data with it.

In this case, if an erase is required at the start of an event and power is lost just as the erase completes, the system could lose all 20 seconds of pre-crash data plus the data accumulated during the 2.68 seconds it takes to perform an erase on a 256-Mb SEMPER NOR device. The corresponding data loss is calculated from the assumed EDR and DSSAD data rates over this time.

Despite the high rate of cycling through the ring buffer, the EXCELON F-RAM was able to meet the endurance requirements and match the life expectancy of the RAM + SEMPER NOR flash solution.

As for other potential front-end solutions, note that using other non-volatile technologies, such as EEPROM or RRAM, would potentially require higher densities due to their lower endurance compared with F-RAM. Furthermore, the fast write time of EXCELON F-RAM provides proper storage for data packets sent to the front-end device in the moments immediately before a power loss.

Why memory matters in data logging

Given the growth of EDR and DSSAD and the legal implications associated with these systems, reliable data storage is paramount, and this is reflected in legislative requirements; for instance, the requirement of “adequate protection against manipulation like data erasure of stored data such as anti-tampering design”. While there are different ways to secure the logged data on a system, a simple and robust method involves hardware.

The future of autonomous driving depends on logging for legal records, safety, and cutting-edge features. As systems become more complex, memory technologies are frequently challenged for performance, requiring creative solutions to satisfy the requirements.

Kyle Holub is an applications engineer at Infineon Technologies.


Building high-performance robotic vision with GMSL

Fri, 12/12/2025 - 15:00

Robotic systems depend on advanced machine vision to perceive, navigate, and interact with their environment. As both the number and resolution of cameras grow, the demand for high-speed, low-latency links capable of transmitting and aggregating real-time video data has never been greater.

Gigabit Multimedia Serial Link (GMSL), originally developed for automotive applications, is emerging as a powerful and efficient solution for robotic systems. GMSL transmits high-speed video data, bidirectional control signals, and power over a single cable, offering long cable reach and deterministic, microsecond-level latency with an extremely low bit error rate (BER). It simplifies the wiring harness and reduces the total solution footprint, making it ideal for vision-centric robots operating in dynamic and often harsh environments.

The following sections discuss where and how cameras are used in robotics, the data and connectivity challenges these applications face, and how GMSL can help system designers build scalable, reliable, and high-performance robotic platforms.

Where are cameras used in robotics?

Cameras are at the heart of modern robotic perception, enabling machines to understand and respond to their environment in real time. Whether it’s a warehouse robot navigating aisles, a robotic arm sorting packages, or a service robot interacting with people, vision systems are critical for autonomy, automation, and interaction.

These cameras are not only diverse in function but also in form—mounted on different parts of the robot depending on the task and tailored to the physical and operational constraints of the platform (see Figure 1).

Figure 1 An example of a multimodal robotic vision system enabled by GMSL. Source: Analog Devices

Autonomy

In autonomous robotics, cameras serve as the eyes of the machine, allowing it to perceive its surroundings, avoid obstacles, and localize itself within an environment.

For mobile robots—such as delivery robots, warehouse shuttles, or agricultural rovers—this often involves a combination of wide field-of-view cameras placed at the corners or edges of the robot. These surround-view systems provide 360° awareness, helping the robot navigate complex spaces without collisions.

Other autonomy-related applications use cameras facing downward or upward to read fiducial markers on floors, ceilings, or walls. These markers act as visual signposts, allowing robots to recalibrate their position or trigger specific actions as they move through structured environments like factories or hospitals.

In more advanced systems, stereo vision cameras or time of flight (ToF) cameras are placed on the front or sides of the robot to generate three-dimensional maps, estimate distances, and aid in simultaneous localization and mapping (SLAM).

The location of these cameras is often dictated by the robot’s size, mobility, and required field of view. On small sidewalk delivery robots, for example, cameras might be tucked into recessed panels on all four sides. On a drone, they’re typically forward-facing for navigation and downward-facing for landing or object tracking.

Automation

In industrial automation, vision systems help robots perform repetitive or precision tasks with speed and consistency. Here, the camera might be mounted on a robotic arm—right next to a gripper or end-effector—and the system can visually inspect, locate, and manipulate objects with high accuracy. This is especially important in pick-and-place operations, where identifying the exact position and orientation of a part or package is essential.

Other times, cameras are fixed above a work area—mounted on a gantry or overhead rail—to monitor items on a conveyor or to scan barcodes. In warehouse environments, mobile robots use forward-facing cameras to detect shelf labels, signage, or QR codes, enabling dynamic task assignments or routing changes.

Some inspection robots, especially those used in infrastructure, utilities, or heavy industry, carry zoom-capable cameras mounted on masts or articulated arms. These allow them to capture high-resolution imagery of weld seams, cable trays, or pipe joints—tasks that would be dangerous or time-consuming for humans to perform manually.

Human interaction

Cameras also play a central role in how robots engage with humans. In collaborative manufacturing, healthcare, or service industries, robots need to understand gestures, recognize faces, and maintain a sense of social presence. Vision systems make this possible.

Humanoid and service robots often have cameras embedded in their head or chest, mimicking the human line of sight to enable natural interaction. These cameras help the robot interpret facial expressions, maintain eye contact, or follow a person’s gaze. Some systems use depth cameras or fisheye lenses to track body movement or detect when a person enters a shared workspace.

In collaborative robot (cobot) scenarios, where humans and machines work side by side, machine vision is used to ensure safety and responsiveness. The robot may watch for approaching limbs or tools, adjusting its behavior to avoid collisions or pause work if someone gets too close.

Even in teleoperated or semi-autonomous systems, machine vision remains key. Front-mounted cameras stream live video to remote operators, enabling real-time control or inspection. Augmented reality overlays can be added to this video feed to assist with tasks like remote diagnosis or training.

Across all these domains, the camera’s placement—whether on a gripper, a gimbal, the base, or the head of the robot—is a design decision tied to the robot’s function, form factor, and environment. As robotic systems grow more capable and autonomous, the role of vision will only deepen, and camera integration will become even more sophisticated and essential.

Robotics vision challenges

As vision systems become the backbone of robotic intelligence, opportunity and complexity grow in parallel. High-performance cameras unlock powerful capabilities—enabling real-time perception, precise manipulation, and safer human interaction—but they also place growing demands on system architecture.

It’s no longer just about moving large volumes of video data quickly. Many of today’s robots must make split-second decisions based on multimodal sensor input, all while operating within tight mechanical envelopes, managing power constraints, avoiding electromagnetic interference (EMI), and maintaining strict functional safety in close proximity to people.

These challenges are compounded by the environments robots face. A warehouse robot may shuttle in and out of freezers, enduring sudden temperature swings and condensation. An agricultural rover may crawl across unpaved fields, absorbing constant vibration and mechanical shock. Service robots in hospitals or public spaces may encounter unfamiliar, visually complex settings, where they must quickly adapt to safely navigate around people and obstacles.

Solve the challenges with GMSL

GMSL is uniquely positioned to meet the demands of modern robotic systems. The combination of bandwidth, robustness, and integration flexibility makes it well-suited for sensor-rich platforms operating in dynamic, mission-critical environments. The following features highlight how GMSL addresses key vision-related challenges in robotics.

High data rate 

The GMSL2 and GMSL3 product families support forward-channel (video path) data rates of 3 Gbps, 6 Gbps, and 12 Gbps, covering a wide range of robotic vision use cases. These flexible link rates allow system designers to optimize for resolution, frame rate, sensor type, and processing requirements (Figure 2).

Figure 2 Sensor bandwidth ranges with GMSL capabilities. Source: Analog Devices

A 3 Gbps link is sufficient for most surround view cameras using 2 MP to 3 MP rolling shutter sensors at 60 frames per second (FPS). It also supports other common sensing modalities, such as ToF sensors and light detection and ranging (LIDAR) units with point-cloud outputs and radar sensors transmitting detection data or compressed image-like returns.

The 6 Gbps mode is typically used for the robot’s main forward-facing camera, where higher resolution sensors (usually 8 MP or more) are required for object detection, semantic understanding, or sign recognition. This data rate also supports ToF sensors with raw output, or stereo vision systems that either stream raw output from two image sensors or output a processed point cloud stream from an integrated image signal processor (ISP). Many commercially available stereo cameras today rely on this data rate for high frame-rate performance.

At the high end, 12 Gbps links enable support for 12 MP or higher resolution cameras used in specialized robotic applications that demand advanced object classification, scene segmentation, or long-range perception. Interestingly, even some low-resolution global shutter sensors require higher speed links to reduce readout time and avoid motion artifacts during fast capture cycles, which is critical in dynamic or high-speed environments.
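
A rough way to map a sensor onto one of these link rates is to count raw pixel bits (a simplified sketch: the resolutions, 12-bit depth, and frame rates below are illustrative assumptions, and a real link needs additional margin for MIPI protocol overhead and blanking):

```python
def raw_gbps(megapixels, bits_per_pixel, fps):
    """Uncompressed pixel bandwidth of one image sensor, in Gbps."""
    return megapixels * 1e6 * bits_per_pixel * fps / 1e9

GMSL_RATES = (3, 6, 12)  # forward-channel link rates, Gbps

for name, mp, bpp, fps in [("2-MP surround camera", 2, 12, 60),
                           ("8-MP front camera",    8, 12, 40),
                           ("12-MP long-range",    12, 12, 60)]:
    bw = raw_gbps(mp, bpp, fps)
    link = next(r for r in GMSL_RATES if r >= bw)  # smallest link that fits
    print(f"{name}: {bw:.2f} Gbps raw -> fits a {link}-Gbps link")
```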

Determinism and low latency

Because GMSL uses frequency-domain duplexing to separate the forward (video and control) and reverse (control) channels, it enables bidirectional communication with deterministic low latency, without the risk of data collisions.

Across all link rates, GMSL maintains impressively low latency: the added delay from the input of a GMSL serializer to the output of a deserializer typically falls in the lower tens of microseconds—negligible for most real-time robotic vision systems.

The deterministic reverse-channel latency enables precise hardware triggering from the host to the camera—critical for synchronized image capture across multiple sensors, as well as for time-sensitive, event-driven frame triggering in complex robotic workflows.

Achieving this level of timing precision with USB or Ethernet cameras typically requires the addition of a separate hardware trigger line, increasing system complexity and cabling overhead.

Small footprint and low power

One of the key value propositions of GMSL is its ability to reduce cable and connector infrastructure.

GMSL itself is a full-duplex link, and most GMSL cameras utilize the power-over-coax (PoC) feature, allowing video data, bidirectional control signals, and power to be transmitted over a single thin coaxial cable.

This significantly simplifies wiring, reduces the overall weight and bulk of cable harnesses, and eases mechanical routing in compact or articulated robotic platforms (Figure 3).

Figure 3 A typical GMSL camera architecture using the MAX96717. Source: Analog Devices

In addition, the GMSL serializer is a highly integrated device that combines the video interface (for example, MIPI-CSI) and the GMSL PHY into a single chip. The power consumption of the GMSL serializer, typically around 260 mW in 6 Gbps mode, is favorably low compared to alternative technologies with similar data throughput.

All these features will translate to smaller board areas, reduced thermal management requirements (often eliminating the need for bulky heatsinks), and greater overall system efficiency, particularly for battery-powered robots.

Sensor aggregation and video data routing

GMSL deserializers are available in multiple configurations, supporting one, two, or four input links, allowing flexible sensor aggregation architectures. This enables designers to connect multiple cameras or sensor modules to a single processing unit without additional switching or external muxing, which is especially useful in multicamera robotics systems.

In addition to the multiple inputs, GMSL SERDES also supports advanced features to manage and route data intelligently across the system. These include:

  • I2C and GPIO broadcasting for simultaneous sensor configuration and frame synchronization.
  • I2C address aliasing to avoid I2C address conflicts in passthrough mode.
  • Virtual channel reassignment, allowing multiple video streams to be mapped cleanly into the frame buffer inside the system on chip (SoC).
  • Video stream duplication and virtual channel filtering, enabling selected video data to be delivered to multiple SoCs—for example, to support both automation and interaction pipelines from the same camera feed or to support redundant processing paths for enhanced functional safety.

Safety and reliability

Originally developed for automotive advanced driver assistance systems (ADAS) applications, GMSL has been field-proven in environments where safety, reliability, and robustness are non-negotiable. Robotic systems, particularly those operating around people or performing mission-critical industrial tasks, can benefit from the same high standards.

| Feature/Criteria | GMSL (GMSL2/GMSL3) | USB (for example, USB 3.x) | Ethernet (for example, GigE Vision) |
|---|---|---|---|
| Cable Type | Single coax or STP (data + power + control) | Separate USB + power + general-purpose input/output (GPIO) | Separate Ethernet + power (PoE optional) + GPIO |
| Max Cable Length | 15+ meters with coax | 3 m reliably | 100 m with Cat5e/Cat6 |
| Power Delivery | Integrated (PoC) | Requires separate supply or USB-PD | Requires PoE infrastructure or separate cable |
| Latency (Typical) | Tens of microseconds (deterministic) | Millisecond-level, OS-dependent | Millisecond-level, buffered + OS/network stack |
| Data Rate | 3 Gbps/6 Gbps/12 Gbps (uncompressed, per link) | Up to 5 Gbps (USB 3.1 Gen 1) | 1 Gbps (GigE), 10 Gbps (10 GigE, uncommon in robotics) |
| Video Compression | Not required (raw or ISP output) | Often required for higher resolutions | Often required |
| Hardware Trigger Support | Built-in via reverse channel (no extra wire) | Requires extra GPIO or USB communications device class (CDC) interface | Requires extra GPIO or sync box |
| Sensor Aggregation | Native via multi-input deserializer | Typically point-to-point | Typically point-to-point |
| EMI Robustness | High (designed for automotive EMI standards) | Moderate | Moderate to high (depends on shielding, layout) |
| Environmental Suitability | Automotive-grade temp, ruggedized | Consumer-grade unless hardened | Varies (industrial options exist) |
| Software Stack | Direct MIPI-CSI integration with SoC | OS driver stack + USB video device class (UVC) or proprietary software development kit (SDK) | OS driver stack + GigE Vision/GenICam |
| Functional Safety Support | ASIL-B devices, data replication, deterministic sync | Minimal | Minimal |
| Deployment Ecosystem | Mature in ADAS, growing in robotics | Broad in consumer/PC, limited industrial options | Mature in industrial vision |
| Integration Complexity | Moderate (requires SERDES and routing config) | Low for development (plug and play); high for production | Moderate (needs switch/router config and sync wiring) |

Table 1 A comparison between GMSL, USB, and Ethernet in terms of trade-offs in robotic vision. Source: Analog Devices

Most GMSL serializers and deserializers are qualified to operate across a –40°C to +105°C temperature range, with built-in adaptive equalization that continuously monitors and adjusts transceiver settings in response to environmental changes.

This provides system architects with the flexibility to design robots that function reliably in extreme or fluctuating temperature conditions.

In addition, most GMSL devices are ASIL-B compliant and exhibit extremely low BERs. Under compliant link conditions, GMSL2 offers a typical BER of 10⁻¹⁵, while GMSL3, with its mandatory forward error correction (FEC), can reach a BER as low as 10⁻³⁰. This exceptional data integrity, combined with safety certification, significantly simplifies system-level functional safety integration.
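
To put those exponents in perspective, consider a fully loaded 12-Gbps link running around the clock (a simple back-of-the-envelope sketch):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
link_bps = 12e9                                   # fully loaded 12-Gbps link

for name, ber in [("GMSL2 (BER 1e-15)", 1e-15),
                  ("GMSL3 + FEC (BER 1e-30)", 1e-30)]:
    errors_per_year = link_bps * SECONDS_PER_YEAR * ber
    print(f"{name}: ~{errors_per_year:.1e} bit errors per year")
# GMSL2: ~3.8e+02 per year; GMSL3 with FEC: ~3.8e-13, i.e., effectively never.
```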

Ultimately, GMSL’s robustness leads to reduced downtime, lower maintenance costs, and greater confidence in long-term system reliability—critical advantages in both industrial and service robotics deployments.

Mature ecosystem

GMSL benefits from a mature and deployment-ready ecosystem, shaped by years of high-volume use in automotive systems and supported by a broad network of global ecosystem partners.

This includes a comprehensive portfolio of evaluation and production-ready cameras, compute boards, cables, connectors, and software/driver support—all tested and validated under stringent real-world conditions.

For robotics developers, this ecosystem translates to shorter development cycles, simplified integration, and a lower barrier to scale from prototype to production.

GMSL vs. legacy robotics connectivity

In recent years, GMSL has become increasingly accessible beyond the automotive industry, opening new possibilities for high-performance robotic systems.

As the demands on robotic vision grow with more cameras, higher resolution, tighter synchronization, and harsher environments, traditional interfaces like USB and Ethernet often fall short in terms of bandwidth, latency, and integration complexity.

GMSL is now emerging as a preferred upgrade path, offering a robust, scalable, and production-ready solution that is gradually replacing USB and Ethernet in many advanced robotics platforms. Table 1 compares the three technologies across key metrics relevant to robotic vision design.
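To make the bandwidth argument concrete, here is a rough tally of what a small multi-camera rig demands against the per-link rates in Table 1. The camera resolution, frame rate, and bit depth below are illustrative assumptions, not figures from the article:

```python
# Raw (uncompressed) video bandwidth for a hypothetical multi-camera rig.
# All camera parameters are illustrative assumptions.
width, height = 1920, 1080   # pixels
fps = 60                     # frames per second
bits_per_pixel = 16          # e.g., a RAW16 sensor format (assumption)
num_cameras = 4

per_camera_bps = width * height * fps * bits_per_pixel
total_bps = per_camera_bps * num_cameras

print(f"Per camera: {per_camera_bps / 1e9:.2f} Gbps")   # ~1.99 Gbps
print(f"Four cameras: {total_bps / 1e9:.2f} Gbps")      # ~7.96 Gbps
# Compare against Table 1: USB 3.1 Gen 1 tops out near 5 Gbps shared and
# GigE at 1 Gbps, while GMSL3 offers up to 12 Gbps per link, uncompressed.
```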

An evolution in robotics

As robotics moves into increasingly demanding environments and across diverse use cases, vision systems must evolve to support higher sensor counts, greater bandwidth, and deterministic performance.

While legacy connectivity solutions will remain important for development and certain deployment scenarios, they introduce trade-offs in latency, synchronization, and system integration that limit scalability.

GMSL, with its combination of high data rates, long cable reach, integrated power delivery, and bidirectional deterministic low latency, provides a proven foundation for building scalable robotic vision systems.

By adopting GMSL, designers can accelerate the transition from prototype to production, delivering smarter, more reliable robots ready to meet the challenges of a wide range of real-world applications.

Kainan Wang is a systems applications engineer in the Automotive Business Unit at Analog Devices in Wilmington, Massachusetts. He joined ADI in 2016 after receiving an M.S. in electrical engineering from Northeastern University in Boston, Massachusetts. Kainan has been working with 2D/3D imaging solutions, from hardware development and systems integration to application development. Most recently, his focus has been on expanding ADI automotive technologies into markets beyond automotive.

Related Content

The post Building high-performance robotic vision with GMSL appeared first on EDN.

Hardware security to bolster interconnect IPs for SoCs, chiplets

Fri, 12/12/2025 - 13:38

Hardware security vulnerabilities have greatly expanded the attack surface beyond traditional software exploits, making hardware security assurance crucial in modern system-on-chip (SoC) designs. Chip interconnect specialist Arteris’ acquisition of semiconductor cybersecurity assurance supplier Cycuity is the latest reminder of how hardware security is becoming an inflection point in SoC design.

Arteris delivers data-movement IP hardware and IP block integration software to connect on-chip components and chiplets. On the other hand, Cycuity ensures the security of these semiconductor design building blocks and their interactions. Charles Janac, president and CEO of Arteris, claims that Cycuity’s technology and expertise will add to Arteris’ product portfolio, enabling chip designers to better understand and improve data movement security in chiplets and SoCs.

Figure 1 A security solution, built around a coverage metric tailored for hardware designs, enables chip designers to precisely measure the effectiveness of security protocols. Source: Cycuity

Cycuity’s hardware security solutions help prevent vulnerabilities throughout chip development, from IP blocks to RTL design to full systems, through systematic, scalable, and repeatable security verification delivered in software. The San Jose, California-based firm specifies, integrates, and verifies security across a chip’s hardware development lifecycle.

Security is becoming critical to all types of chip designs because the attack potential has expanded to the hardware layer. As a result, silicon vulnerabilities can compromise electronic systems and expose unprotected information. The National Institute of Standards and Technology (NIST) has recently released data showing common vulnerabilities and exposures (CVEs) in hardware grew by more than 15 times over the last five years.

For Arteris’ network-on-chip (NoC) IPs, which provide the backbone for data movement across SoCs and chiplets, Cycuity’s offerings can help mitigate security vulnerabilities throughout the SoC hardware development cycle. They can uncover security weaknesses across firmware, IP blocks, chip subsystems, chiplets, and full SoCs.

Figure 2 This hardware security solution identifies secure design assets and ensures they are properly managed during secure boot. Source: Cycuity

Cycuity, which works closely with leading EDA toolmakers such as Cadence, Siemens EDA, and Synopsys, has its hardware security tools integrated with those vendors’ design environments. That allows chip designers to identify, verify, and resolve security risks before silicon implementation and production. For instance, they can safeguard against attacks exploiting microarchitectural side channels, logic bugs, third-party and open-source IP, unsecured interconnects, debug backdoors, and supply-chain gaps.

The acquisition deal, subject to regulatory approval, is expected to close in the first quarter of 2026.

Related Content

The post Hardware security to bolster interconnect IPs for SoCs, chiplets appeared first on EDN.

Low-power Wi-Fi 6 MCUs preserve IoT battery life

Thu, 12/11/2025 - 20:11

Renesas has announced the RA6W1 dual-band Wi-Fi 6 wireless MCU, to be followed by the RA6W2 Wi-Fi 6 and BLE combo MCU. Based on an Arm Cortex-M33 CPU running at 160 MHz, these low-power microcontrollers dynamically switch between 2.4-GHz and 5-GHz bands in real time, ensuring a stable, high-speed connection.

The RA6W1 and RA6W2 MCUs use Target Wake Time (TWT) to let IoT devices sleep longer, extending battery life and reducing network congestion. They draw 200 nA to 4 µA in deep sleep and under 50 µA while checking for data, enabling devices to stay connected for a year or more on a single battery. This makes them well-suited for applications requiring real-time control, remote diagnostics, and over-the-air updates, such as environmental sensors, smart home devices, and medical monitors.
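As a rough sanity check on that battery-life claim, the sketch below estimates lifetime from the quoted sleep and listen currents. The battery capacity, transmit current, and duty cycles are illustrative assumptions, not Renesas figures:

```python
# Rough battery-life estimate from a duty-cycled current profile.
# The 4-uA deep-sleep and 50-uA listen figures come from the article;
# battery capacity, TX current, and duty cycles are assumptions.
battery_mah = 2000                   # e.g., two AA cells (assumption)
sleep_ma = 0.004                     # deep sleep, worst case quoted
listen_ma = 0.050                    # checking for data
tx_ma = 200.0                        # active Wi-Fi transmit (assumption)
listen_frac, tx_frac = 0.05, 0.001   # assumed duty cycles

avg_ma = (sleep_ma * (1 - listen_frac - tx_frac)
          + listen_ma * listen_frac + tx_ma * tx_frac)
years = battery_mah / avg_ma / 8760
print(f"Average draw: {avg_ma:.3f} mA -> about {years:.1f} years")
```

With these assumptions the average draw is about 0.2 mA, dominated by the brief transmit bursts, which lands near the one-year mark; longer TWT sleep intervals push the figure higher.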

Alongside the RA6W1 and RA6W2 MCUs, Renesas launched two fully integrated modules designed to reduce development time and accelerate time to market. The Wi-Fi 6 (RRQ61001) and Wi-Fi 6/BLE combo (RRQ61051) modules feature built-in antennas, certified RF components, and wireless protocol stacks that comply with global network standards.

The RA6W1 MCU in WLCSP and FCQFN packages, as well as the RRQ61001 and RRQ61051 modules, are available now. The RA6W2 MCU in a BGA package is scheduled for release in Q1 2026.

Renesas Electronics 

The post Low-power Wi-Fi 6 MCUs preserve IoT battery life appeared first on EDN.

Automotive buck converter is I2C-tuned

Thu, 12/11/2025 - 20:11

Optimized for automotive point-of-load (POL) applications, Diodes’ AP61406Q 5.5-V, 4-A synchronous buck converter provides a versatile I2C programming interface. The I2C 3.0-compatible serial interface supports SCL clock rates up to 3.4 MHz and allows configuration of PFM/PWM modes, switching frequencies (1 MHz, 1.5 MHz, 2 MHz, or 2.5 MHz), and output-current limits of 1 A, 2 A, 3 A, or 4 A. The output voltage is adjustable in 20-mV increments.
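For a sense of what such configuration looks like in firmware, here is a minimal sketch. The device address, register map, and bit fields are hypothetical placeholders, not taken from the AP61406Q datasheet; only the 20-mV step size comes from the article:

```python
# Hypothetical I2C configuration sketch for an I2C-programmable buck
# converter. The device address, register addresses, and bit encodings
# are illustrative placeholders only; consult the AP61406Q datasheet.
from smbus2 import SMBus

DEV_ADDR = 0x40        # hypothetical 7-bit device address
REG_VOUT = 0x01        # hypothetical output-voltage register
REG_CTRL = 0x02        # hypothetical mode/frequency/current-limit register

def vout_code(volts, step_v=0.020):
    """Convert a target output voltage to a 20-mV-per-LSB register code."""
    return round(volts / step_v)

with SMBus(1) as bus:
    bus.write_byte_data(DEV_ADDR, REG_VOUT, vout_code(3.3))   # ~3.3 V out
    bus.write_byte_data(DEV_ADDR, REG_CTRL, 0b0101)           # placeholder bits
```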

The AP61406Q uses a proprietary gate-driver scheme to suppress switching-node ringing without slowing MOSFET transitions, helping reduce high-frequency radiated EMI. It operates from an input of 2.3 V to 5.5 V and integrates 75-mΩ high-side and 33-mΩ low-side MOSFETs for efficient step-down conversion. Constant on-time (COT) control further minimizes external components, eases loop stabilization, and delivers low output-voltage ripple.

Offered in a W-QFN1520-8/SWP (Type UX) package, the converter is AEC-Q100 qualified for operation from –40°C to +125°C. Its protection suite—including high-side and low-side current-sense protection, UVLO, VIN OVP, peak and valley current limiting, and thermal shutdown—enhances reliability.

AP61406Q product page 

Diodes

The post Automotive buck converter is I2C-tuned appeared first on EDN.

SiC power modules deliver up to 608 A

Thu, 12/11/2025 - 20:11

SemiQ continues to expand its Gen3 QSiC MOSFET portfolio with 1200-V power modules offering high current density and low thermal resistance. The new seven-device lineup includes high-current S3 half-bridge, B2T1 six-pack, and B3 full-bridge modules designed to meet the needs of EV chargers, energy storage systems, and industrial motor drives.

Two of the devices handle currents up to 608 A with a junction-to-case thermal resistance of just 0.07 °C/W in a 62‑mm S3 half-bridge format. The three six-pack modules integrate a three-phase power stage into a compact housing, offering on-resistance from 19.5 mΩ to 82 mΩ, an optimized layout, and minimal parasitic effects. The two full-bridge modules combine current handling up to 120 A with on-resistance as low as 8.6 mΩ and a thermal resistance of 0.28 °C/W.
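For context, a back-of-the-envelope junction-temperature estimate shows what a 0.07 °C/W junction-to-case figure buys at high current. The operating point and effective on-resistance below are illustrative assumptions, not SemiQ specifications:

```python
# Junction-temperature estimate: Tj = Tc + P * Rth(j-c).
# Rth(j-c) comes from the article; the per-switch current, effective
# on-resistance, and case temperature are illustrative assumptions.
r_ds_on = 0.0035      # ohms, assumed effective on-resistance
i_rms = 300.0         # A per switch, assumed operating point
rth_jc = 0.07         # degC/W, quoted junction-to-case thermal resistance
t_case = 80.0         # degC, assumed case temperature

p_cond = i_rms**2 * r_ds_on           # conduction loss only, no switching loss
t_junction = t_case + p_cond * rth_jc
print(f"Conduction loss: {p_cond:.0f} W -> Tj about {t_junction:.0f} degC")
```

Even with hundreds of watts of conduction loss, the low thermal resistance holds the junction rise to a couple of tens of degrees above the case, which is what makes the 608-A ratings practical.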

All parts undergo wafer-level gate-oxide burn-in and are breakdown-tested above 1350 V. Gen3 modules operate at lower gate voltages (+18 V/−4.5 V) and reduce both on-resistance and turn-off energy losses by up to 30% versus previous generations.

The power modules are available immediately. Explore SemiQ’s entire line of Gen3 MOSFET power modules here.

SemiQ

The post SiC power modules deliver up to 608 A appeared first on EDN.

Handheld analyzers cut through dense RF traffic

Thu, 12/11/2025 - 20:11

With 120-MHz gap-free IQ streaming, Keysight’s N99xxD-Series FieldFox analyzers ensure every signal event is captured. This capability lets users stream and replay complex RF activity to quickly pinpoint issues and verify system performance. The result is deeper analysis and greater confidence that key signal details are not overlooked in the field.
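A quick calculation suggests why gap-free capture at this bandwidth needs a 10-GbE pipe. The oversampling factor and sample format below are common-practice assumptions, not Keysight specifications:

```python
# Sustained data rate of gap-free IQ streaming.
# The 120-MHz capture bandwidth is from the article; the 1.25x
# sample-rate factor and 16-bit I/Q format are assumptions.
bandwidth_hz = 120e6
sample_rate = bandwidth_hz * 1.25          # complex samples/s (assumption)
bits_per_complex_sample = 2 * 16           # 16-bit I + 16-bit Q (assumption)

rate_gbps = sample_rate * bits_per_complex_sample / 1e9
print(f"Sustained IQ stream: {rate_gbps:.1f} Gbps")   # ~4.8 Gbps
```

Under these assumptions the stream runs near 4.8 Gbps sustained, comfortably inside a 10-GbE link but far beyond ordinary 1-GbE instrument ports.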

The N99xxD-Series includes 14 handheld models—combo or spectrum analyzers—covering frequencies from 14 GHz to 54 GHz. Each model supports more than 25 software-defined FieldFox applications, including vector network analysis, spectrum and real-time spectrum analysis, noise figure measurement, EMI analysis, pulse signal generation, and direction-finding.

Key capabilities of the N99xxD-Series include:

  • 120-MHz IQ streaming with SFP+ 10-GbE interfaces for uninterrupted data capture
  • Wideband signal analysis and playback for troubleshooting, spectrum monitoring, and interference detection
  • Field-to-lab workflow to recreate real-world signals for lab analysis
  • High RF performance with ±0.1 dB amplitude accuracy without warm-up

A technical overview of Keysight’s FieldFox handheld analyzers and D-Series information can be found here.

Keysight Technologies 

The post Handheld analyzers cut through dense RF traffic appeared first on EDN.

MOSFETs bring 750-V capability to TOLL package

Thu, 12/11/2025 - 20:10

Now in mass production, Rohm’s SCT40xxDLL series of SiC MOSFETs in the TOLL (TO-leadless) package delivers high power-handling capability in a compact, low-profile form factor. According to Rohm, the TOLL package provides roughly 39% better thermal performance than conventional TO-263-7L packages.

The SCT40xxDLL lineup consists of six devices, each rated for a 750-V maximum drain-source voltage, compared to the 650-V limit typical of standard TOLL packages. This higher voltage rating enables lower gate resistance and a larger safety margin for surge voltages, helping to further reduce switching losses.

In AI servers and compact PV inverters, rising power requirements coincide with pressure to reduce system size, increasing the need for higher-density MOSFETs. In slim totem-pole PFC designs with thickness limits near 4 mm, Rohm’s new devices cut footprint to 11.68×9.9 mm (about 26% smaller) and reduce package height to 2.3 mm, about half that of typical devices.

The 750-V SiC MOSFETs are available from distributors such as DigiKey, Mouser, and Farnell. For details and datasheets, click here.

Rohm Semiconductor 

The post MOSFETs bring 750-V capability to TOLL package appeared first on EDN.

Splitting voltage with purpose: A guide to precision voltage dividers

Thu, 12/11/2025 - 16:59

Voltage division is not just about ratios; it’s about control, clarity, and purpose. This little guide explores precision voltage dividers with quiet confidence, and sheds light on how they shape signal levels, reference points, and measurement accuracy.

A precision voltage divider produces a specific fraction of its input voltage using carefully matched resistive components. It’s designed for accurate, stable voltage scaling—often used to shape signal levels, generate reference voltages, or condition inputs for measurement. Built with low-tolerance resistors, these dividers ensure consistent performance across temperature and time, making them essential in analog design, instrumentation, and sensor interfacing (Figure 1).

Figure 1 Representation of an SOT23 precision resistor-divider illustrates two tightly matched resistors with accessible terminals at both ends and the midpoint. Source: Author

A side note: While the term precision voltage divider broadly refers to any resistor-based circuit that scales voltage, precision resistor-divider typically denotes a tightly matched resistor pair in a single package, for example, SOT23. These integrated devices offer superior ratio accuracy and thermal tracking, making them ideal for reference scaling and threshold setting in precision analog designs.

As an unbiased real-world example, the HVDP08 series from KOA is a thin-film resistor network designed for high-precision, high-voltage divider applications. It supports resistance values up to 51 MΩ, working voltages up to 1,000 V, and resistance ratios as high as 1000:1.

Figure 2 The HVDP08 high-precision, high-voltage divider achieves higher integration while reducing board space requirements and overall assembly overhead. Source: KOA
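As a quick worked example using the HVDP08 figures quoted above, the sketch below shows how a 1000:1 tap scales a 1,000-V input into ADC range while keeping dissipation low. Taking the full 51-MΩ value as the total string resistance is an assumption for illustration:

```python
# High-voltage divider: output voltage and dissipation at full rating.
# The 1000:1 ratio and 1,000-V working voltage come from the HVDP08
# figures above; using 51 MOhm as the total resistance is an assumption.
v_in = 1000.0            # volts
r_total = 51e6           # ohms, assumed total string resistance
ratio = 1000             # Vin:Vout

v_out = v_in / ratio
power = v_in**2 / r_total
print(f"Vout = {v_out:.1f} V, total dissipation = {power * 1000:.0f} mW")
```

The large total resistance is the point: at the full 1,000-V rating the entire network dissipates only about 20 mW, so self-heating barely disturbs the ratio.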

Similarly, precision decade voltage dividers—specifically engineered for use as input voltage dividers in multimeters and other range-switching instruments—are now widely available. Simply put, precision decade voltage dividers are resistor networks that provide accurate, selectable voltage ratios in powers of ten. One notable example is the EBG Series 1776-X, widely recognized for its precision and reliability.

Figure 3 EBG Series 1776-X precision decade resistors incorporate ceramic protection and laser-trimmed thin films to achieve ultra-tight tolerances. Source: Miba

Moreover, digitally programmable precision voltage dividers—such as the MAX5420 and MAX5421—are optimized for use in digitally controlled gain amplifier configurations. Programmable gain amplifiers (PGAs) allow precise, software-driven control of signal amplification, making them ideal for applications that require dynamic range adjustment, calibration, or sensor interfacing.

Poor man’s precision practice

Precision does not have to be pricey. In this section, we explore how resourceful design choices—clever resistor selection, thoughtful layout, and a dash of calibration—can yield surprisingly accurate voltage dividers without premium components. Whether you are prototyping on a budget or refining a DIY instrument, this hands-on approach proves that precision is within reach.

Achieving precision on a budget starts with clever resistor selection: Choosing resistors with tight tolerances, low temperature coefficients, and stable long-term behavior, even if they are not top-shelf brands. A thoughtful layout ensures minimal parasitic effects; short traces, good grounding, and avoiding thermal gradients all help preserve accuracy. Finally, a dash of calibration—whether through trimming, software correction, or referencing known voltages—can compensate for small mismatches and elevate a humble design into a reliable performer.

While selecting resistors, it’s important to distinguish between absolute and relative tolerance. Absolute tolerance refers to how closely each resistor matches its nominal value, say ±1% of 10 kΩ. Relative tolerance, on the other hand, describes how well matched a pair or group of resistors are to each other, regardless of their deviation from nominal. In voltage dividers, especially precision ones, relative tolerance often matters more. Even if both resistors drift slightly, as long as they drift together, the ratio—and thus the output voltage—remains stable.

As an aside, ratio tolerance refers to how closely a resistor pair maintains its intended resistance ratio, independent of their absolute values. In precision voltage dividers, this metric is key; even if both resistors drift slightly, a tight ratio tolerance ensures the output voltage remains stable. It’s a subtle but critical factor when accuracy depends more on matching than on nominal values.
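A small numeric comparison makes the point. The sketch below (resistor values and tolerance figures are illustrative) contrasts two independently toleranced ±1% resistors with a pair specified by ratio tolerance:

```python
# Worst-case divider ratio error: independent tolerances vs. ratio tolerance.
# Nominal 10k:10k divider (ratio 0.5); tolerance values are illustrative.
r_top_nom = r_bot_nom = 10_000.0

def ratio(r_top, r_bot):
    return r_bot / (r_top + r_bot)

nominal = ratio(r_top_nom, r_bot_nom)                  # 0.5
# Two independent +/-1% parts drifting in opposite directions:
worst = ratio(r_top_nom * 1.01, r_bot_nom * 0.99)
err = abs(worst - nominal) / nominal
print(f"Independent 1% parts: worst-case ratio error {err:.2%}")   # ~1.00%
# A matched pair with 0.1% ratio tolerance bounds the error by spec:
print("Matched pair (0.1% ratio tolerance): ratio error <= 0.10%")
```

Two perfectly respectable ±1% resistors can still produce a 1% ratio error in the worst case, while a modest matched pair holds the ratio ten times tighter, which is exactly the argument for specifying ratio tolerance rather than absolute tolerance.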

Having covered the essentials, we now turn to a hands-on example, one that puts theory into practice with accessible components and practical constraints.

Operational amplifier (op-amp) circuits are commonly used to scale the output voltage of digital-to-analog converters (DACs). Two popular configurations—the non-inverting amplifier and the inverting amplifier—can both amplify the signal and adjust its DC offset.

For applications requiring output scaling without offset, the goal is to expand the voltage range of the DAC’s output while maintaining its original polarity. This setup requires the op-amp’s positive supply rail to exceed the desired maximum output voltage.

Figure 4 This output-scaling circuit extends the DAC’s voltage range without altering its polarity. Source: Author

Output voltage formula: VOUT = VIN (1 + RF/RG)

Scaling in action

To scale a DAC output from 0–5 V to 0–10 V, a gain of 2.0 is required.

Using a 10-kΩ feedback resistor (RF) and a 10-kΩ gain resistor (RG), the gain becomes 2. This configuration doubles the DAC’s output voltage while preserving its zero-based reference.
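The same relationship generalizes to any target gain. Here is a minimal sketch of the formula in code, using the 10-kΩ values from the example above:

```python
# Non-inverting DAC output scaling: Vout = Vin * (1 + Rf / Rg).
# Resistor values mirror the 10-kOhm/10-kOhm example in the text.
def vout(v_in, r_f=10_000.0, r_g=10_000.0):
    return v_in * (1 + r_f / r_g)

for v in (0.0, 2.5, 5.0):                              # DAC output, 0-5 V
    print(f"Vin = {v:.1f} V -> Vout = {vout(v):.1f} V")  # scaled to 0-10 V
```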

You can also design op-amp circuits to scale and shift the DAC output by a specific DC offset. This is especially useful when converting a unipolar output, for example, 0 V to 2.5 V, into a bipolar range, for instance, –5 V to +5 V. But that’s a story for another day.

Precision voltage dividers may seem straightforward, but their influence on signal integrity and measurement accuracy runs deep. Whether you are working on analog front-ends, reference rails, or sensor inputs, careful resistor selection and layout choices can make or break performance.

Have a go-to divider trick or layout insight? Drop it in the comments and join the conversation.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Splitting voltage with purpose: A guide to precision voltage dividers appeared first on EDN.
