EDN Network

Voice of the Engineer

Aiding drone navigation with crystal sensing

Wed, 12/24/2025 - 14:31

Designers are looking to reduce the cost of drone systems for a wide range of applications while still providing accurate positioning data. This, however, is not as easy as it might appear.

There are several satellite positioning systems, from the U.S.-backed GPS and European Galileo to NavIC in India and BeiDou in China, providing data down to the meter. However, these need to be augmented by an inertial measurement unit (IMU) to provide the more accurate positioning data that is vital.

Figure 1 An IMU is vital for the precision control of the drone and peripherals like the gimbal that keeps the camera steady. Source: Epson

An IMU is typically built around a sensor that measures movement in six directions, along with an accelerometer to detect the amount of movement. The data is then used by the developer of an inertial measurement system (IMS), which applies custom algorithms, often with machine learning, to combine it with the satellite data and other data from the drone system.
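As a simplified illustration of this kind of fusion, a complementary filter blends a fast but drifting rate sensor with a slower absolute reference. The one-axis sketch below uses invented numbers and a generic filter; it is not the algorithm of any particular IMS vendor.

```python
# Illustrative complementary filter: fuse a fast, drifting rate sensor
# with a slow but absolute reference (all values are hypothetical).

def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (trusted short-term) with an
    absolute reference angle (trusted long-term)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle

angle = 0.0
for _ in range(100):
    # Stationary sensor: the gyro reads a 0.5 deg/s bias,
    # while the absolute reference correctly reads 0 deg.
    angle = complementary_filter(angle, gyro_rate=0.5, ref_angle=0.0, dt=0.01)

# The absolute reference bounds the drift that pure integration
# (0.5 deg after 1 s) would otherwise accumulate.
print(round(angle, 3))
```

The same structure, with a Kalman filter in place of the fixed blend factor, is the usual way IMU and satellite data are combined in practice.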

The IMU is vital for the precision control of the drone and peripherals such as the gimbal that keeps the camera steady, providing accurate positioning data and compensating for the vibration of the drone. This stability can be implemented in a number of ways with a variety of sensors, but providing accurate information with low noise and high stability for as long as possible has often meant the sensor is expensive with high power consumption.

This is increasingly important for medium-altitude long-endurance (MALE) drones. These aircraft are designed for long flights at altitudes between 10,000 and 30,000 feet and can stay airborne for extended periods, sometimes over 24 hours. They are commonly used for military surveillance, intelligence gathering, and reconnaissance missions that require wide coverage.

These MALE drones need a camera system that is reliable and stable in operation across a wide range of temperatures, providing accurate tagging of the position of any data captured.

One way to deliver a highly accurate IMU at lower cost is to use a piezoelectric quartz crystal. This is a well-established technology in which an oscillating field is applied across the crystal and changes in motion are picked up by differential contacts across the crystal.

For a highly stable IMU for a MALE drone, three crystals are used, one for each axis, stimulated at different frequencies in the kilohertz range to avoid crosstalk. The differential output cancels out noise in the crystal and the effect of vibrations.

Precision engineering of piezoelectric crystals for high-stability IMUs

Using a crystal method provides data with low noise, high stability, and low variability. The highly linear response of the piezoelectric crystal enables high-precision measurement of various kinds of movement over a wide range from slow to fast, allowing the IMU to be used in a broad array of applications.

An end-to-end development process allows the design of each crystal to be optimized for the frequencies used for the navigation application along with the differential contacts. These are all optimized with the packaging and assembly to provide the highly linear performance that remains stable over the lifetime of the sensor.

The process draws on 25 years of experience with wet-etch lithography for the sensors, covered by dozens of patents. That produces yields in the high nineties, with average bias variation down to 0.5% from unit to unit.

An initial cut angle on the quartz crystal achieves the frequency balance for the wafer, then the wet etch lithography is applied to the wafer to create a four-point suspended cantilever structure that is 2-mm long. Indentations are etched into the structure for the wire bonds to the outside world.

The four-point structure is a double tuning fork with detection tines and two larger drive tines in the center. The differential output cancels out spurious noise or other signals.
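The cancellation can be seen with simple arithmetic. In this toy model (values invented), rotation appears with opposite signs on the two detection paths while vibration appears with the same sign, so subtraction keeps one and removes the other:

```python
# Sketch of differential pickup: the wanted signal is differential,
# vibration and other disturbances are common-mode.

def differential_output(signal, noise):
    tine_a = signal + noise    # rotation + common-mode vibration
    tine_b = -signal + noise   # inverted rotation + same vibration
    return tine_a - tine_b     # noise cancels, signal doubles

print(differential_output(signal=1.0, noise=0.5))  # → 2.0
print(differential_output(signal=0.0, noise=5.0))  # → 0.0
```

Real devices only approximate this, of course; residual mismatch between the tines sets the common-mode rejection limit.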

This is simpler to make than micromachined MEMS structures and provides greater long-term stability and less variability across devices.

The differential structure and low crosstalk allow three devices to be mounted closely together without interfering with each other, which helps to reduce the size of the IMU. A low pass filter helps to reduce any risk of crosstalk.

The crystal sensor is then combined with an accelerometer to complete the six-axis IMU. For the MALE drone gimbal applications, this accelerometer must have a high dynamic range to handle the speed and vibration effects of operation in the air. The linearity advantage of the piezoelectric crystal provides accuracy for sensing the rotation of the sensor and does not degrade at higher speeds.

Figure 2 Piezoelectric crystals bolster precision and stability in IMUs. Source: Epson

This commercial accelerometer is optimized to provide the higher dynamic range and sits alongside a low power microcontroller and temperature sensors, which are not common in low-cost IMUs currently used by drone makers.

The microcontroller technology has been developed for industrial sensors over many years and reduces the power consumption of peripherals while maintaining high performance.

The microcontroller is used to provide several types of compensation, including temperature and aging, and so provides a simple, stable, and high-quality output for the IMU maker. Quartz also provides very predictable operation across a wide temperature range from -40°C to +85°C, so the compensation on the microcontroller is sufficient and more compensation is not required in the IMU, reducing the compute requirements.
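As a sketch of the kind of compensation involved, a microcontroller can subtract a temperature-dependent bias model from the raw rate reading. The polynomial form is common for this job, but the coefficients below are invented for illustration and are not Epson's:

```python
# Hypothetical temperature compensation: subtract a calibrated
# bias model b(T) = c0 + c1*T + c2*T^2 from the raw gyro rate.
# Coefficients are made up for illustration.

def compensate_bias(raw_rate, temp_c, coeffs=(0.02, -0.001, 1e-5)):
    c0, c1, c2 = coeffs
    bias = c0 + c1 * temp_c + c2 * temp_c ** 2
    return raw_rate - bias

# At 25 °C the modelled bias is 0.02 - 0.025 + 0.00625 = 0.00125 deg/s.
print(round(compensate_bias(0.5, 25.0), 5))  # → 0.49875
```

The predictability of quartz over temperature is what allows such a low-order model, fitted once at calibration, to hold over the sensor's lifetime.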

All of this is also vital for the calibration procedure. Ensuring that the IMU can be easily calibrated is key to keeping the cost down and comes from the inherent stability of the crystal.

Calibration-safe mounting

The mounting technology is also key for the calibration and stability of the sensor. A part mounted to a board using surface mount technology (SMT) passes through a reflow oven, where it is exposed to high temperatures that can disrupt the calibration and alter the lifetime of the part in unexpected ways.

Instead, a module with a connector is used, so the 1-in (25 x 25 x 12 mm) part can be attached to the printed circuit board (PCB) without reflow soldering. This avoids the reflow assembly step for surface mount devices, in which the PCB passes through an oven that can upset the calibration of the sensor.

Space-grade IMU design

A higher performance variant of the IMU has been developed for space applications. Alongside the quartz crystal sensor, a higher performance accelerometer developed in-house is used in the IMU. The quartz sensor is inherently impervious to radiation in low and medium earth orbits and is coupled with a microcontroller that handles the temperature compensation, a key factor for operating in orbits that vary between the cold of the night and the heat of the sun.

The sensor is mounted in a hermetically sealed ceramic package that is backfilled with helium to provide higher levels of sensitivity and reliability than the earth-bound version. This makes the quartz-based sensor suitable for a wide range of space applications.

Next-generation IMU development

The next generation of etch technology being explored now promises to enable a noise level 10 times lower than today with improved temperature stability. These process improvements enable cleaner edges on the cantilever structure to enhance the overall stability of the sensor.

Achieving precise and reliable drone positioning requires the integration of advanced IMUs with satellite data. The use of piezoelectric quartz crystals in IMUs for drone systems offers significant benefits, including low noise, high stability, and reduced costs, while commercial accelerometers and optimized microcontrollers further enhance performance and minimize power consumption.

Mounting and calibration procedures ensure long-term accuracy and reliability to provide stable and power-efficient control for a broad range of systems. All of this is possible through the end-to-end expertise in developing quartz crystals, and designing and implementing the sensor devices, from the etch technology to the mounting capabilities.

David Gaber is group product manager at Epson.

Related Content

The post Aiding drone navigation with crystal sensing appeared first on EDN.

Tuneful track-tracing

Tue, 12/23/2025 - 15:00

Another day, another dodgy device. This time, it was the continuity beeper on my second-best DMM. Being bored with just open/short indications, I pondered making something a little more informative.

Perhaps it could have an input stage to amplify the voltage, if any, across current-driven probes, followed by a voltage-controlled tone generator to indicate its magnitude, and thus the probed resistance. Easy! . . . or maybe not, if we want to do it right.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the (more or less) final result, which uses a carefully-tweaked amplifying stage feeding a pitch-linear VCO (PLVCO). It also senses when contact has been made, and so draws no power when inactive.

Most importantly, it produces a tone whose musical pitch is linearly related to the sensed resistance: you can hear the difference between fat power traces and long, thin signal ones while probing for continuity or shorts on a PCB without needing to look at a meter.

Figure 1 A power switch, an amplifying stage with some careful offsets, and a pitch-linear VCO driving an output transducer make a good continuity tester. The musical pitch of the tone produced is proportional to the resistance across the probe tips.

This is simpler than it initially looks, so let’s dismantle it. R1 feeds the test probes. If they are open-circuited, p-MOSFET Q1 will be held off, cutting the circuit’s power (ignoring <10 nA leakage).

Any current flowing through the probes will bring Q1.G low to turn it on, powering the main circuit. That also turns Q2 on to couple the probe voltage to A1a.IN+ via R2. Without Q2, A1a’s input protection diodes would draw current when power was switched off.

R1 is shown as 43k for an indication span of 0 to ~24 Ω, or 24 semitones. Other values will change the range, so, for example, 4k3 will indicate up to 2.4 Ω with 0.1-Ω semitones. Adding a switch gave both ranges. (The actual span is up to ~30 Ω—or 3.0 Ω—but accuracy suffers.) Any other values can be used for different scales; the probe current will, of course, change.

A1a amplifies the probe voltage by 1001-ish, determined by R3 and R4. We are working right down to 0 V, which can be tricky. R5 offsets A1a.IN- by ~5 mV, which is more than the MCP6002’s quoted maximum input offset of 3.5 mV. R2 and R6–8 help to add a slightly greater bias to A1a.IN+, which both nulls out any offset and sets the operating point. This scheme may avert the need for a negative rail in other applications.

Tuning the tones

The A1b section is yet another variant on my basic pitch-linear VCO, the reset pulse being generated by Q4/C3/R13. (For more informative details of the circuit’s general operation, see the original Design Idea.) The ’scope traces in Figure 2 should clarify matters.

Figure 2 Waveforms within the circuit show its operation while probing different resistances.

This type of PLVCO works best with a control voltage centered between the supply rails and swinging by ±20% about that datum, giving a bipolar range of ~±1 octave. Here, we need unipolar operation, starting around that -20% lowest-frequency point.

Therefore, 0 Ω on the input must give ~0.3 Vcc to generate a ~250 Hz tone; 12 Ω, 0.5 Vcc (for ~500 Hz); and 24 Ω, ~0.7 Vcc (~1 kHz). Anything above ~0.8 Vcc will be out of range—and progressively less accurate—and must be ignored.

The output is now a tone whose pitch corresponds to the resistance across the probes, scaled as one semitone per ohm and spanning two octaves for a 24 Ω range (if R1 is 43k).
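That mapping can be written down directly: one semitone per ohm means the frequency doubles every 12 Ω from the ~250 Hz base, matching the 0/12/24 Ω points given above. A quick sketch:

```python
# Pitch-linear mapping for the continuity tester: one semitone per ohm,
# so frequency doubles every 12 ohms starting from ~250 Hz at 0 ohms.

def tone_frequency(r_ohms, f0=250.0, ohms_per_octave=12.0):
    return f0 * 2.0 ** (r_ohms / ohms_per_octave)

for r in (0, 12, 24):
    print(r, "ohms ->", round(tone_frequency(r)), "Hz")  # 250, 500, 1000
```

With R1 switched to 4k3, the same relation holds with 0.1 Ω per semitone; only `ohms_per_octave` changes (to 1.2).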

The modified exponential ramp on C2 is now sliced by A2b, using a suitable fraction of the control voltage as a reference, to give a “square” wave at its output—truly square at one point only, but it sounds OK, and this approach keeps the circuit simple. A2a inverts A2b’s output, so they form a simple balanced (or bridge-tied load) driver for an earpiece. (There are problems here, but they can wait.)

R9 and R10 reduce A1a’s output a little as high resistances at the input cause it to saturate, which would otherwise stop A1b’s oscillation. This scheme means that out-of-range resistances still produce an audio output, which is maxed out at ~1.6 kHz, or ~30 Ω. Depending on Q1’s threshold voltage, several tens of kΩ across the probes are enough to switch it on—a tad outside our indication range.

Loud is allowed

Now for that earpiece, and those potential problems. Figure 1’s circuit worked well enough with an old but sensitive ~250-Ω balanced-armature mic/’phone but was fairly hopeless when trying to drive (mostly ~32 Ω) earphones or speakers.

For decent volume, try Figure 3, which is beyond crude, but functional. Note the separate battery, whose use avoids excessive drain on the main one while isolating the main circuit from the speaker’s highish currents.

Again, no power is drawn when the unit is inactive. (Reused batteries—strictly, cells—from disposed-of vapes are often still half-full, and great for this sort of thing! And free.) A2a is now spare . . .

Figure 3 A simple, if rather nasty, way of driving a loudspeaker.

Setting-up is necessary, because offsets are unpredictable, but simple. With a 12-Ω resistance across the probes, adjust R7 to give Vcc/2 at A1b.5. Done!

Comments on the components

The MCP6002 dual op-amp is cheap and adequate. (The ’6022 has a much lower offset but a far higher price, as well as drawing more current. “Zero-offset” devices are yet more expensive, and trimmer R7 would probably still be needed.)

Q3, and especially Q1, must have a low RDS(on) and VGS(th); my usual standby ZVP3306As failed on both counts, though ZVN3306As worked well for Q2/4/5. (You probably have your own favorite MOSFETs and low-voltage RRIO op-amps.) To alter the frequency range, change C2. Nothing else is critical.

As noted above, R1 sets the unit’s sensitivity and can be scaled to suit without affecting anything else. With 43k, the probe current is ~70 µA, which should avoid any possible damage to components on a board-under-test.

(Some ICs’ protection diodes are rated at a hopefully-conservative 100 µA, though most should handle at least 10 mA.) R2 helps guard against external voltage insults, as well as being part of the biasing network.

And that newly-spare half of A2? We can use it to make an active clamp (thanks, Bob Dobkin) to limit the swing from A1a rather than just attenuating it. R1 must be increased—51k instead of 43k—because we no longer need extra gain.

Figure 4 shows the circuit. When A2a’s inverting input tries to rise higher than its non-inverting one—the reference point—D1 clamps it to that reference voltage.

Figure 4 An active clamp is a better way of limiting the maximum control voltage fed to the PLVCO.

The slight frequency changes with supply voltage can be ignored; a 20°C temperature rise gave an upward shift of about a semitone. Shame: with some careful tuning, this could otherwise also have done duty as a tuning fork.

“Pitch-perfect” would be an overstatement, but just like the original PLVCO, this can be used to play tunes! A length of suitable resistance wire stretched between a couple of drawing pins should be a good start . . . now, where’s that half-dead wire-wound pot? Trying to pick out a seasonal “Jingle Bells” could keep me amused for hours (and leave the neighbors enraged for weeks).

 Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


The post Tuneful track-tracing appeared first on EDN.

Exploring ceramic resonators and filters

Tue, 12/23/2025 - 10:18

Ceramic resonators and filters occupy a practical middle ground in frequency control and signal conditioning, offering designers cost-effective alternatives to quartz crystals and LC networks. Built on piezoelectric ceramics, these devices provide stable oscillation and selective filtering across a wide range of applications—from timing circuits in consumer electronics to noise suppression in RF designs.

Their appeal lies in balancing performance with simplicity: easy integration, modest accuracy, and reliable operation where ultimate precision is not required.

Getting started with ceramic resonators

Ceramic resonators offer an attractive alternative to quartz crystals for stabilizing oscillation frequencies in many applications. Compared with quartz devices, their ease of mass production, low cost, mechanical ruggedness, and compact size often outweigh the reduced precision in frequency control.

In addition, ceramic resonators are better suited to handle fluctuations in external circuitry or supply voltage. By relying on mechanical resonance, they deliver stable oscillation without adjustment. These characteristics also enable faster rise times and performance that remains independent of drive-level considerations.

Recall that ceramic resonators utilize the mechanical resonance of piezoelectric ceramics. Quartz crystals remain the most familiar resonating devices, while RC and LC circuits are widely used to produce electrical resonance in oscillating circuits. Unlike RC or LC networks, ceramic resonators rely on mechanical resonance, making them largely unaffected by external circuitry or supply-voltage fluctuations.

As a result, highly stable oscillation circuits can be achieved without adjustment. The figure below shows two types of commonly available ceramic resonators.

Figure 1 A mix of common 2-pin and 3-pin ceramic resonators demonstrates their typical package styles. Source: Author

Ceramic resonators are available in both 2-pin and 3-pin versions. The 2-pin type requires external load capacitors for proper oscillation, whereas the 3-pin type incorporates these capacitors internally, simplifying circuit design and reducing component count. Both versions provide stable frequency control, with the choice guided by board space, cost, and design convenience.

Figure 2 Here are the standard circuit symbols for 2-pin and 3-pin ceramic resonators. Source: Author

Getting into basic oscillating circuits, these can generally be grouped into three categories: positive feedback, negative resistance elements, and delay of transfer time or phase. For ceramic resonators, quartz crystal resonators, and LC oscillators, positive feedback is the preferred circuit approach.

And the most common oscillator circuit for a ceramic resonator is the Colpitts configuration. Circuit design details vary with the application and the IC employed. Increasingly, oscillation circuits are implemented with digital ICs, often using an inverter gate. A typical practical example (455 kHz) with a CMOS inverter is shown below.

Figure 3 A practical oscillator circuit employing a CMOS inverter and ceramic resonator shows its typical configuration. Source: Author

In the above schematic, IC1A functions as an inverting amplifier for the oscillating circuit, while IC1B shapes the waveform and buffers the output. The feedback resistor R1 provides negative feedback around the inverter, ensuring oscillation starts when power is applied.

If R1 is too large and the input inverter’s insulation resistance is low, oscillation may stop due to loss of loop gain. Excessive R1 can also introduce noise from other circuits, while too small a value likewise reduces loop gain.

The load capacitors C1 and C2 provide a 180° phase lag. Their values must be chosen carefully based on application, integrated circuit, and frequency. Undervalued capacitors increase loop gain at high frequencies, raising the risk of spurious oscillation. Since oscillation frequency is influenced by loading capacitance, caution is required when tight frequency tolerance is needed.
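The sensitivity of the oscillation frequency to loading capacitance follows from the standard series-resonator pulling approximation, f_L ≈ f_r · (1 + Cm / (2·(C0 + CL))). The sketch below uses invented motional and shunt capacitances purely to show the trend; real values come from the resonator datasheet:

```python
# Frequency pulling of a resonator by its load capacitance, using the
# standard approximation f_L ≈ f_r * (1 + Cm / (2 * (C0 + CL))).
# Element values below are illustrative, not from any datasheet.

def pulled_frequency(f_r, c_motional, c_shunt, c_load):
    return f_r * (1.0 + c_motional / (2.0 * (c_shunt + c_load)))

f_r = 455e3   # series resonance, Hz
cm = 4e-12    # motional capacitance (invented)
c0 = 40e-12   # shunt capacitance (invented)

# Halving the effective load capacitance raises the frequency noticeably,
# which is why C1/C2 need care when frequency tolerance is tight.
print(round(pulled_frequency(f_r, cm, c0, 100e-12)))
print(round(pulled_frequency(f_r, cm, c0, 50e-12)))
```

Ceramic resonators have a much larger motional capacitance than quartz crystals, so they pull further for the same load change; that is the flip side of their easy starting.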

Note that the damping resistor R2, sometimes omitted, loosens the coupling between the inverter and feedback circuit, reducing the load on the inverter output. It also stabilizes the feedback phase and limits high-frequency gain, helping prevent spurious oscillation.

Having introduced the basics of ceramic resonators (just another surface scratch), we now shift focus to ceramic filters. The deeper fundamentals of resonator operation can be addressed later or explored through further discussion; for now, the emphasis turns to filter applications.

Ceramic filters and their practical applications

A filter is an electrical component designed to pass or block specific frequencies. Filters are classified by their structures and the materials used. A ceramic filter employs piezoelectric ceramics as both an electromechanical transducer and a mechanical resonator, combining electrical and mechanical systems within a single device to achieve its characteristic response.

Like other filters, ceramic filters possess unique traits that distinguish them from alternatives and make them valuable for targeted applications. They are typically realized in bandpass configurations or as duplexers, but not as broadband low-pass or high-pass filters, since ceramic resonators are inherently narrowband.

In practice, ceramic filters are widely used in IF and RF bandpass applications for radio receivers and transmitters. These RF and IF ceramic filters are low-cost, easy to implement, and well-suited for many designs where the precision and performance of a crystal filter are unnecessary.

Figure 4 A mix of ceramic filters presents examples of their available packages. Source: Author

A quick theory talk: A 455-kHz ceramic filter is essentially a bandpass filter with a sharp frequency response centered at 455 kHz. In theory, attenuation at the center frequency is 0 dB, though in practice insertion loss is typically 2–6 dB. As the input frequency shifts away from 455 kHz, attenuation rises steeply.

Depending on the filter grade, the effective passband spans from about 455 kHz ± 2 kHz for narrow designs and up to ±15 kHz for wider types (in theory often cited as ±10 kHz). Signals outside this range are strongly suppressed, with stopband attenuation reaching 40 dB or more at ±100 kHz.
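Putting those figures into a toy response model makes the shape concrete. The breakpoints and slope below are illustrative only, chosen to match the numbers quoted above rather than any datasheet curve:

```python
# Toy model of a 455-kHz ceramic bandpass filter: a few dB of insertion
# loss in the passband, attenuation rising toward ~40 dB at +/-100 kHz.
# All breakpoints are illustrative, not measured data.

def attenuation_db(f_hz, center=455e3, half_bw=10e3):
    offset = abs(f_hz - center)
    if offset <= half_bw:
        return 4.0                  # typical 2-6 dB insertion loss
    # crude linear slope up to the quoted ~40 dB stopband floor
    return min(4.0 + (offset - half_bw) / 90e3 * 36.0, 40.0)

print(attenuation_db(455e3))   # in-band: insertion loss only
print(attenuation_db(555e3))   # 100 kHz away: stopband floor
```

A real filter's skirts are much steeper near the passband edge, but the passband/stopband contrast is the property the model is meant to show.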

On a related note, ceramic discriminators function by converting frequency variations into voltage signals, which are then processed into audio, a detection method widely used in FM receivers. FM wave detection is achieved through circuits where the relationship between frequency and output voltage is linear. Common FM detection methods include ratio detection, Foster-Seeley detection, quadrature detection, and differential peak detection.

Now I recall the CDB450C24, a ceramic discriminator designed for FM detection at 450 kHz. Employing piezoelectric ceramics, it provides a stable center frequency and linear frequency-to-voltage conversion, making it well-suited for quadrature detection circuits such as those built with the nostalgic Toshiba TA31136F FM IF detector IC for cordless phones. Compact and cost‑effective, the CDB450C24 exemplifies the role of ceramic discriminators in reliable FM audio detection.

Figure 5 TA31136F IC application circuit shows the practical role of the CDB450C24. Source: Toshiba

As a loosely connected observation, the choice of 450 kHz for ceramic discriminators reflected receiver design practices of the time. AM radios had long standardized on 455 kHz as their intermediate frequency (IF), while FM receivers typically used 10.7 MHz for selectivity.

To achieve cost-effective FM detection, however, many designs employed a secondary IF stage around 450 kHz, where ceramic discriminators could provide stable, narrowband frequency-to-voltage conversion.

This dual-IF approach balanced the high-frequency selectivity of 10.7 MHz with the practical detection capabilities of 450 kHz, making ceramic discriminators like the CDB450C24 a natural fit for FM audio demodulation.

Thus, ceramic filters remain vital for compact, reliable frequency selection, valued for their stability and low cost. Multipole ceramic filters extend this role by combining multiple resonators to sharpen selectivity and steepen attenuation slopes, their real purpose being to separate closely spaced channels and suppress adjacent interference.

Together, they illustrate how ceramic technology continues to balance simplicity with performance across consumer and professional communication systems.

Closing thoughts

Time for a quick pause—but before you step away, consider how ceramic resonators and filters continue to anchor reliable frequency control and signal shaping across modern designs. Their balance of simplicity, cost-effectiveness, and performance makes them a quiet force behind countless applications.

Share your own experiences with these components and keep an eye out for more exploration into the fundamentals that drive today’s electronics.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


The post Exploring ceramic resonators and filters appeared first on EDN.

Beats’ Studio Buds Plus: Tangible improvements, not just marketing fluff

Mon, 12/22/2025 - 15:00

“Plus” tacked onto a product name typically translates into a product-generation extension with little (if any) tangible enhancement. Beats has notably bucked that trend.

I’ve decided that I really like transparent devices:

Not only do they look cool (at least in my opinion; yours might differ), but since I can see inside them, I’m able to do “pseudo teardowns” without needing to actually take them apart (inevitably destroying them in the process). Therein lies my interest in the May 2023-unveiled “Plus” spin of Apple subsidiary Beats’ original Studio Buds earbuds, introduced two years earlier:

Frosty beats solid black

As you can see, these are translucent; it’d be a stretch to also call them transparent. Still, I can discern a semblance of what’s inside both the earbuds and their companion storage-and-charging case. And in combination with Beats’ spec-improvement claims:

along with a thorough and otherwise informative teardown video I found of first-gen units:

I think I’ve got a pretty good idea of what’s inside these.

Cool (again, IMHO) looks and an editorial-coverage angle aside, why’d I buy them? After all, I already owned a first-generation Studio Buds set (at left in the following shots, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, and which you’ll also see in other photos in this piece):

Reviewers’ assertions of significant improvements in active noise cancellation (ANC) and battery life with the second-generation version were admittedly tempting:

and like their forebears (and unlike Apple’s own branded earbuds, that is, unless you hack ’em), they’re multi-platform compatible versus Apple ecosystem-only, key for this Android guy:

That all said, I confess that what sealed the deal for me was the 10%-off-$84.95 promo price I came across on Woot back in mid-August. Stack that up against the $169.99 MSRP and you can see why I bit on the bait…I actually sprung for two sets, in fact.

An expanded tip suite

Here’s an official unboxing video:

Followed by my own still shots of the process:

Beats added a fourth tip-size option—extra-small—this time around, and the software utility now supports a “fit test” mode to help determine which tip option is optimum for your ears:

Assuring failsafe firmware upgrades

Upon pairing them with my Google Pixel 7 smartphone, I was immediately alerted to an available firmware update:

However, although the earbuds themselves were still nearly fully charged, lengthy time spent in the box (on the shelf at the retailer warehouse) had nearly drained the cells in the case. I needed to recharge the latter before I was allowed to proceed (wise move, Beats!):

With the case (and the buds within it) now fully charged, the update completed successfully:

Battery variability

The first- and second-generation cases differ in weight by 1 gram (48 vs 49), according to my kitchen scale:

With the second-generation earbuds set incrementing the total by another gram (58 vs 60):

In both cases, I suspect, the weight increment is associated with increased battery capacity. The aforementioned teardown video indicates that the cells in the first-generation case have a capacity of 400 mAh (1.52 Wh @ 3.8V). The frosty translucence in the second-generation design almost (but doesn’t quite) enable me to discern the battery cell markings inside:

But Apple conveniently stamped the capacity on the back this time: 600 mAh, matching the 50% increase statistic in Beats’ promotional verbiage:

The “button” cells in the earbuds themselves supposedly have a 16% higher capacity than those in the first-generation predecessors. Given that the originals, again per the teardown video, had the model name M1254S2, translating to a 3.7V operating voltage and 60 mAh capacity, I’m guessing that these are the same-dimension 70-mAh M1254S3 successors.

Microphone upgrades

As for inherent output sound quality, I can discern no difference between the two generations:

A result with which Soundguys’ objective (vs my subjective) analysis concurs:

That said, I can happily confirm that the ability to discern music details in high ambient noise environments, not to mention to conduct discernible phone conversations (at both ends of the connection), is notably enhanced with the second-generation design. Beats claims that all three microphones are 3x larger this time around, a key factor in the improvement. Here (at bottom left in each case) are the first- and second-generation feedforward microphone port pairs:

Along with the ANC feedback mics alongside the two generations’ speaker ports:

The main “call” mics are alongside the touch-control switch in the “button” assembly still exposed when the buds are inserted in the wearer’s ears:

I’m guessing an integrated audio DSP upgrade was also a notable factor in the claimed “up to 1.6x” improved ANC (along with up to 2x enhanced transparency). The first-gen Studio Buds leveraged a Cirrus Logic CS47L66 (along with a MediaTek MT2821A to implement Bluetooth functionality); reader guesses as to what’s in use this time are welcome in the comments!

The outcome of these mic and algorithm upgrades? Over to Soundguys again for the results!

Venting is relieving

The final update is a bit of an enigma, at least to me. Beats has added what it claims are three acoustic vents to the design. Here’s an excerpt from a fuller writeup on the topic:

You’ve probably noticed how some wearables feel more comfortable than others. That’s where acoustic vents come in. They help equalize pressure, reducing that uncomfortable “plugged ear” sensation you might experience with earbuds or other in-ear devices. By doing this, they make your listening experience not only better but also more natural.

The thing is, though, Beats’ own associated image only shows two added vents:

And that’s all I can find, too:

So…🤷‍♂️

In closing, for your further translucency-blurring visual-inspection purposes, here are some additional images of the right-side earbud, this time standalone and from various positional perspectives:

including another one minus the tip:

and of both the top and bottom of the case:

“Plus” mid-life updates to products are typically little more than new colorway options, or (for smartphones) bigger-sized displays and batteries, but otherwise identical hardware allocations. It’s nice to see Beats do a more substantive “Plus” upgrade with their latest Studio Buds. And the $75-and-change promo price was also very nice. Reader thoughts are as-always welcomed in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Beats’ Studio Buds Plus: Tangible improvements, not just marketing fluff appeared first on EDN.

Tiny LCOS microdisplay drives next-gen smart glasses

Fri, 12/19/2025 - 18:48

Omnivision’s OP03021 liquid crystal on silicon (LCOS) panel integrates the display array, driver, and memory into a low-power, single-chip design. The full-color microdisplay delivers a resolution of 1632×1536 pixels at 90 Hz in a compact 0.26-in. optical format, enabling smart glasses to achieve higher resolution and a wider field of view.

The microdisplay features a 3.0-µm pixel pitch and operates with a 90-Hz field-sequential input using a MIPI C-PHY trio interface. Panel dimensions are just 7.986×25.3×2.116 mm, saving board space in wearables such as augmented reality (AR), extended reality (XR), and mixed-reality (MR) smart glasses and head-mounted displays.

The OP03021 is offered in a compact 30-pin FPCA package. Samples are available now, with mass production scheduled to begin in the first half of 2026. For more information, contact a sales representative here.

OP03021 product page

Omnivision

The post Tiny LCOS microdisplay drives next-gen smart glasses appeared first on EDN.

FMCW LiDAR delivers 4D point clouds

Fri, 12/19/2025 - 18:48

Voyant has announced the Helium family of fully solid-state 4D FMCW LiDAR sensors and modules for simultaneous depth and velocity measurement. Based on a proprietary silicon photonic chip, the platform provides scalable sensing and high-resolution point-cloud data.

Helium employs a dense 2D photonic focal plane array with integrated 2D on-chip beam steering, enabling fully electronic scanning. A 2D array of surface emitters implements FMCW operation in a compact, solid-state architecture with no moving parts.

Key advantages of Helium include:

  • Configurable planar array resolution: 12,000–100,000 pixels
  • FMCW operation with per-pixel radial velocity measurement
  • Software-defined LiDAR enabling adaptive scan patterns and regions of interest
  • Ultra-compact form factor: <150 g mass, <50 cm³ volume

Helium sensors and modules will be available in multiple resolution and range configurations, supporting fields of view from wide (up to 180°) down to narrow, long-range optics.

Voyant is offering early access to Helium for collaborators to explore custom chip resolutions, FoVs, module configurations, multi-sensor fusion, and software-defined scanning. To participate or request more information, contact earlyaccess@voyantphotonics.com.

Helium product page 

Voyant Photonics

The post FMCW LiDAR delivers 4D point clouds appeared first on EDN.

Bipolar transistors cut conduction voltage

Fri, 12/19/2025 - 18:48

Diodes has expanded its series of automotive-compliant bipolar transistors with 12 NPN and PNP devices designed to achieve ultra-low VCE(sat). With a saturation voltage of just 17 mV at 1 A and on-resistance as low as 12 mΩ, the DXTN/P 78Q and 80Q series minimize conduction losses by up to 50% versus previous generations, enabling cooler operation and easier thermal management.

The transistors feature collector-emitter voltage ratings (BVCEO) of 30 V, 60 V, and 100 V, and can handle continuous currents up to 10 A (20 A peak), making them suitable for 12‑V, 24‑V, and 48‑V automotive systems. They can be used for gate driving MOSFETs and IGBTs, power line and load switching, low-dropout voltage regulation, DC/DC conversion, and driving motors, solenoids, relays, and actuators.

Rated for continuous operation up to +175°C and offering high ESD robustness (HBM 4 kV, CDM 1 kV), the devices ensure reliable performance in harsh automotive environments. Housed in a compact 3.3×3.3-mm PowerDI3333-8 package, they reduce PCB footprint by up to 75% versus SOT223, while a large underside heatsink delivers low thermal resistance of 4.2°C/W.

The DXTN/P 78Q series is priced from $0.19 to $0.21, while the DXTN/P 80Q series is priced from $0.20 to $0.22, both in 6000-piece quantities. Access product pages and datasheets here.

Diodes

The post Bipolar transistors cut conduction voltage appeared first on EDN.

MLCC powers efficient xEV resonant circuits

Fri, 12/19/2025 - 18:48

Samsung Electro-Mechanics’ CL32C333JIV1PN# high-voltage MLCC is designed for use in CLLC resonant converters targeting xEV applications such as BEVs and PHEVs. The capacitor provides 33 nF at 1000 V in a compact 1210 (3.2×2.5 mm) package, leveraging a C0G dielectric for high stability.

Maintaining capacitance across –55°C to +125°C with minimal sensitivity to temperature and bias, the device is well suited for high-frequency resonant tanks where electrical consistency directly impacts efficiency and control margin. The surface-mount capacitor enables power electronics designers to reduce component count and footprint in high-voltage CLLC resonant converter designs without compromising reliability.

Alongside the CL32C333JIV1PN#, the company offers two additional 1210-size C0G capacitors. The CL32C103JXV3PN# provides 10 nF at 1250 V, while the CL32C223JIV3PN# provides 22 nF at 1000 V. All three devices are manufactured using proprietary fine-particle ceramic and electrode materials, combined with precision stacking processes, and are optimized for EV charging systems.

The CL32C333JIV1PN#, CL32C103JXV3PN#, and CL32C223JIV3PN# are now in mass production.

Samsung Electro-Mechanics 

The post MLCC powers efficient xEV resonant circuits appeared first on EDN.

Dev kit brings satellite connectivity to IoT

Fri, 12/19/2025 - 18:48

Nordic Semiconductor’s nRF9151 SMA Development Kit (DK) helps engineers build cellular IoT, DECT NR+, and non-terrestrial network (NTN) applications. The kit’s onboard nRF9151 SiP now features updated modem firmware that enables direct-to-satellite IoT connectivity, adding support for NB-IoT NTN in 3GPP Release 17. The firmware also supports terrestrial LTE-M and NB-IoT networks, along with GNSS.

By replacing internal antennas with SMA connectors, the development board allows direct connection to lab equipment or external antennas for precise RF characterization, power measurements, and field testing. Based on an Arduino Uno–compatible form factor, the board features four user-programmable LEDs, four user-programmable buttons, a Segger J-Link OB debugger, a UART interface via a VCOM port, and a USB connection for debugging, programming, and power.

To accelerate prototyping, the DK includes Taoglas antennas for LTE, NTN, and NR+, along with a Kyocera GNSS antenna. It also provides IoT SIM cards and trial data, enabling immediate terrestrial and satellite connectivity through Deutsche Telekom, Onomondo, and Monogoto.

The nRF9151 SMA DK is available now from Nordic’s distribution partners, including DigiKey, Braemac, and Rutronik. The alpha modem firmware can be downloaded free of charge from the product page linked below.

nRF9151 SMA DK product page 

Nordic Semiconductor 

The post Dev kit brings satellite connectivity to IoT appeared first on EDN.

Electronic design with mechanical manufacturing in mind

Fri, 12/19/2025 - 17:02

Electronics design engineers spend substantial effort on schematics, simulation, and layout. Yet, a component’s long-term success also depends on how well its physical form aligns with downstream mechanical manufacturing processes.

When mechanical design for manufacturing (DFM) is treated as an afterthought, teams can face tooling changes, line stoppages, and field failures that consume the budget and schedule. Building mechanical constraints into design decisions from the outset helps ensure that a concept can transition smoothly from prototype to production without surprises.

The evolving electronic prototyping landscape

Traditional rigid breadboards and perfboards still have value, but they often fall short when a device must conform to the curved housings of wearable formats. Engineers who prototype only on flat, rigid platforms may validate electrical behavior while missing mechanical interactions such as strain, connector access, and the housing interface.

Researchers are responding with prototype approaches that behave more like the eventual product. For example, the MIT researchers who developed the flexible breadboard FlexBoard tested the material by bending it 1,000 times and found it fully functional even after repeated deformation.

This bidirectional flexibility allowed the platform to wrap around curved surfaces. It also gave designers a more realistic way to evaluate electronics for wearables, robotics and embedded sensing, where hardware rarely follows a simple planar shape. As these flexible platforms mature, they encourage engineers to think of mechanical behavior not as a late-stage limitation but as a design parameter from the very first version.

Integrating mechanical processes in design

Once a prototype proves the concept, the conversation quickly shifts toward how each part will be manufactured at scale. At this stage, the schematic on paper must reconcile with press stroke limits, tool access, wall thickness, and fixturing. Designing components with specific processes in mind reduces the risk of discovering later that geometry cannot be produced within the budget or timeline.

Precision metal stamping

Metal stamping remains a core process for electrical contacts, terminals, EMI shields, and mini brackets. It excels when parts repeat across high volumes and require consistent form and dimensional control.

A key example is progressive stamping, in which a coil of metal advances through a die set, where multiple stations perform operations in rapid sequence. It strings steps together, so finished features emerge with high repeatability and narrow dimensional spread, making the process suitable for high-volume component manufacturing.

Early collaboration with stamping specialists is beneficial. Material thickness, bend radii, burr direction, and grain orientation all influence tool design and reliability. Features such as stress-relief notches or coined contact areas can often be integrated into the strip layout with little marginal cost once they are considered before the tool is built.

CNC machining

CNC machining often becomes the preferred option where only a few pieces are necessary or shapes are more complicated. It supports complex 3D forms, small production runs, and late-stage changes with fewer up-front tooling costs compared to stamping.

Machined aluminum or copper heatsinks, custom connector housings, and precision mounting blocks are common examples. Designers who plan for machining will benefit from consistent wall thicknesses, accessible tool paths, and tolerances that fit the machine’s capability.

Advanced materials for component durability

The manufacturing method is only part of the process. The base material choice can determine whether a design survives thermal cycles, vibrations, and electrostatic exposure over years of service. Recent work in advanced and responsive materials provides design teams with additional tools to manage these threats. Self-healing polymers and composites are notable examples.

Some of these materials incorporate conductive fillers that redirect electrostatic charge. By steering current away from a single microscopic region, the structure avoids excessive local stress and preserves its functionality for a longer period. For applications such as wearables and portable electronics, this behavior can support longer service intervals and a greater perceived quality.

Engineers are also evaluating high-temperature polymers, filled elastomers, and nanoengineered coatings for use in flexible and stretchable electronics. Each material brings trade-offs in cost, process compatibility, recyclability, and performance. Considering those alongside mechanical processes and board layout helps establish a coherent path from prototype through volume production.

The next generation of electronic products demands a perspective that merges circuit behavior with how parts will be formed, assembled, and protected in real-world environments. Flexible prototyping platforms, process-aware designs for stamping and machining, and careful selection of advanced materials all contribute to this mindset.

When mechanical manufacturing is considered from the get-go, design teams position their work to run reliably on production lines and in the hands of end users.

Ellie Gabel is a freelance writer and associate editor at Revolutionized.

Related Content

The post Electronic design with mechanical manufacturing in mind appeared first on EDN.

The DiaBolical dB

Fri, 12/19/2025 - 15:00

Engineers and technicians who work with oscilloscopes are used to seeing waveforms that plot a voltage versus time. Almost all oscilloscopes these days include the Fast Fourier Transform (FFT) to view the acquired waveform in the frequency domain, similar to a spectrum analyzer.

In the frequency domain, the waveforms plot amplitude versus frequency. This view of the signal uses a different scaling. The default vertical scaling of the frequency domain is dBm, or decibels relative to one milliwatt, as shown in Figure 1.

Figure 1 An oscilloscope’s spectrum display (lower grid) uses default vertical units of dBm to display power versus frequency. (Source: Art Pini)

The FFT displays the signal’s frequency spectrum as either power or voltage versus frequency. The default dBm scale measures signal power; alternative units include voltage-based magnitude. In its various forms, the decibel has long confused well-trained technical professionals accustomed to the time domain.  If dB is a mystery to you, this article covers the basics you need to know.

The dB was originally a measure of relative power in telephone systems. The unit of measure was named the Bel after Alexander Graham Bell.  The decibel (dB) is one-tenth of a Bel and is more commonly used in practice. The definition of the decibel is for electrical applications:

dB = 10 log10 (P2/P1)

Where P1 and P2 are the two power levels being compared.

There are a few key points to note. The first is that the dB is a relative measurement; it measures the ratio of two power levels, P1 and P2, in this example. The second is that the dB scale is logarithmic. The log scale is non-linear, emphasizing low-amplitude signals and compressing higher-amplitude signals. This scaling is particularly useful in the frequency domain, where signals tend to exhibit large dynamic ranges.

Based on this definition, some common power ratios and their equivalent dB values are shown in Table 1.

P2/P1

dB

2:1

3

4:1

6

10:1

10

100:1

20

1:2

-3

1:4

-6

1:10

-10

1:100

-20

Table 1 Common power ratios and the equivalent decibel values. (Source: Art Pini)
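The entries in Table 1 are easy to verify numerically; here is a minimal Python sketch (the helper name is mine):

```python
import math

def power_db(p2, p1):
    """Decibel equivalent of the power ratio P2/P1."""
    return 10 * math.log10(p2 / p1)

# Spot-check a few Table 1 entries:
print(round(power_db(2, 1), 2))    # a 2:1 power ratio is ~3 dB
print(power_db(10, 1))             # a 10:1 ratio is exactly 10 dB
print(round(power_db(1, 100), 2))  # a 1:100 ratio is -20 dB
```

Note that the 3-dB figure for a 2:1 ratio is a rounded convention; the exact value is 3.01 dB.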

The decibel can also compare root power levels, such as the volt.  The definition of the decibel for voltage ratios derived from the definition for power ratios is:

dB = 10 log10[(V2^2/R)/(V1^2/R)]
= 10 log10(V2/V1)^2
= 20 log10(V2/V1)

Where V1 and V2 are the two voltage levels being compared, and R is the terminating resistance.

This derivation uses the fact that an exponent inside a logarithm becomes a multiplier outside it. The variable R, the terminating resistance (usually 50 Ω), cancels in the math but can still affect decibel measurements when different resistance values are involved.

The voltage-based definition of dB yields the following dB values for these voltage ratios, as shown in Table 2. 

V2/V1

dB

2:1

6

4:1

12

10:1

20

100:1

40

1:2

-6

1:4

-12

1:10

-20

1:100

-40

Table 2 Common voltage ratios and their equivalent decibel values. (Source: Art Pini)
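The voltage ratios in Table 2 can be verified with the 20-log definition; a minimal Python sketch (the helper name is mine):

```python
import math

def voltage_db(v2, v1):
    """Decibel equivalent of the voltage ratio V2/V1 (equal terminations assumed)."""
    return 20 * math.log10(v2 / v1)

# Spot-check Table 2: doubling a voltage quadruples the power, hence ~6 dB
print(round(voltage_db(2, 1), 2))   # ~6.02 dB
print(voltage_db(10, 1))            # 20.0 dB
```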

Relative and absolute measurements

As we have seen, the decibel is a relative measure that compares two power or voltage levels.  As such, it is perfect for characterizing transmission gain or loss and is used extensively in scattering (s) parameter measurements.

An absolute measurement can be made by referencing the measurement to a known quantity. The standard reference values in electronic applications are the milliwatt (dBm), the microvolt (dBµV), and the volt (dBV).

The decibel is used in various other applications, such as acoustics. The sound pressure level in acoustic applications is also measured in dB, and the standard reference is 20 microPascals (μPa).

Using dBm

Based on the definition of dB for power ratios and using 1 mW (0.001 Watt) as the reference, dBm is calculated as:

 dBm = 10 log10 (P2/0.001)

Where P2 is the power of the signal being measured.

 Converting from measured power in dBm to power in watts uses the same equation in reverse.

P2 = 0.001 × 10^(dBm/10)

For example, the first measurement-table entry in Figure 1 gives the power level of the highest spectral peak: -5.8 dBm at 5 MHz. The power, in watts, is calculated as follows:

P2 = 0.001 × 10^(-5.8/10)
P2 = 2.63×10^-4 W = 0.263 mW
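The dBm conversions in both directions are easy to script; a minimal Python sketch (the function names are mine):

```python
import math

def dbm_to_watts(dbm):
    """Convert an absolute dBm level to watts (1-mW reference)."""
    return 0.001 * 10 ** (dbm / 10)

def watts_to_dbm(watts):
    """Convert watts to dBm (1-mW reference)."""
    return 10 * math.log10(watts / 0.001)

# The -5.8 dBm spectral peak from Figure 1:
p = dbm_to_watts(-5.8)
print(round(p * 1000, 3))         # ~0.263 mW
print(round(watts_to_dbm(1), 1))  # 1 W is 30 dBm
```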

Common power levels and their equivalent dBm values are shown in Table 3.

Power Level

dBm

1 mW

0

2 mW

3

0.5 mW

-3

10 mW

10

0.1 mW

-10

100 mW

20

0.01 mW

-20

1 W

30

10 W

40

100 W

50

1000 W

60

Table 3 Common power levels and their equivalent dBm values. (Source: Art Pini)

The calculation of absolute voltage values for voltage-based decibel measurements is similar. To calculate the voltage level for a decibel value in dBV, the equation is:

V2 = 1 × 10^(dBV/20)

For a measured value of 0.3 dBV, the equivalent voltage level is:

V2 = 1 × 10^(0.3/20)
V2 = 1.035 volts

Converting from dBV to dBµV is a scaling, or multiplication, operation. If you remember the characteristics of logarithms, multiplication within the logarithm becomes addition, and division becomes subtraction. The conversion therefore requires a simple additive constant, as derived below:

dBµV = 20 log10(V2/1×10^-6)
dBµV = 20 log10(V2) - 20 log10(10^-6)

But:

dBV = 20 log10(V2/1)
dBµV = dBV + 120

A little basic algebra gives the reverse operation:

dBV = dBµV - 120
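These voltage-referenced conversions can be sketched in a few lines of Python (the function names are mine; the 1-µV reference corresponds to the 120-dB offset derived above):

```python
import math

def v_to_dbv(volts):
    """Volts to dBV (1-V reference)."""
    return 20 * math.log10(volts)

def dbv_to_dbuv(dbv):
    """dBV to dBuV: the 1-uV reference sits 120 dB below 1 V."""
    return dbv + 120

print(round(10 ** (0.3 / 20), 3))  # 0.3 dBV back to volts: ~1.035 V
print(dbv_to_dbuv(v_to_dbv(1.0)))  # 1 V = 0 dBV = 120 dBuV
```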

What if the source impedance isn’t 50 Ω?

Typically, RF work utilizes cables and terminations with a characteristic impedance of 50 Ω. In video, the standard impedance is 75 Ω; in audio, it is 600 Ω. Reading dBm levels from such sources on a 50-Ω-input oscilloscope requires adjustments.

First, it is standard practice to terminate sources with their characteristic impedances. A 75-Ω or 600-Ω system signal source requires an appropriate impedance-matching device to connect to a 50-Ω measuring instrument.  The most common is the simple resistive impedance-matching pad (Figure 2).

Figure 2 This schematic of a typical 600 to 50 Ω impedance matching pad reflects a 600 Ω load to the source and provides a 50 Ω source impedance for the measuring instrument. (Source: Art Pini)

The matching pad presents a 600-Ω load to the signal source, while the instrument sees a 50-Ω source, so both devices see the expected impedances. This minimizes reflections and the measurement errors they cause. The impedance pad is a voltage divider with an insertion loss of 16.63 dB, which must be compensated for in the measuring instrument.
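The 16.63-dB figure follows from the standard minimum-loss resistive pad relations; here is a hedged Python sketch (the function name and the series/shunt L-pad topology assumption are mine):

```python
import math

def min_loss_pad(z_high, z_low):
    """Resistor values and insertion loss of a minimum-loss L-pad
    matching z_high (source side) down to z_low (instrument side)."""
    r_series = z_high * math.sqrt(1 - z_low / z_high)
    r_shunt = z_low / math.sqrt(1 - z_low / z_high)
    loss_db = 20 * math.log10(math.sqrt(z_high / z_low)
                              + math.sqrt(z_high / z_low - 1))
    return r_series, r_shunt, loss_db

rs, rp, loss = min_loss_pad(600, 50)
print(round(rs, 1), round(rp, 1))  # ~574.5-ohm series, ~52.2-ohm shunt
print(round(loss, 2))              # ~16.63 dB insertion loss
```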

The next step is where the terminating resistances come into play. If the source and load impedances differ, this difference must be considered, as it affects the decibel readings. Going back to the basic definition of decibel:

dB = 10 log10[(V2^2/R2)/(V1^2/R1)]

Consider how the impedance affects the voltage level equivalent to the one-milliwatt power reference. The reference voltages corresponding to one milliwatt differ between the 600-Ω and 50-Ω sides of the measurement:

Pref = 0.001 W = Vref600^2/600 = Vref50^2/50
dBm600 = 10 log10[V2^2/Vref600^2]
= 10 log10[V2^2/(Vref50^2 × 600/50)]
= 10 log10[(50/600)(V2^2/Vref50^2)]
= 10 log10[V2^2/Vref50^2] + 10 log10(50/600)
dBm600 = dBm50 - 10.8

The dBm reading on the 50-Ω instrument is 10.8 dB higher than that on the 600-Ω source because the reference power level is different for the two load impedances.  

The oscilloscope’s rescale operation can scale the spectrum display to dBm referenced to 600 Ω. Assuming a 600-Ω to 50-Ω impedance-matching pad with an insertion loss of 16.63 dB is used, and the -10.8-dB correction factor mentioned above is applied, a net scaling factor of 5.83 dB must be added to the FFT spectrum, as shown in Figure 3.
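The net rescale factor can be tallied numerically; a short Python sketch (recomputing the pad loss rather than using the rounded 16.63-dB value, so the rounding matches the article's 5.83 dB):

```python
import math

# Insertion loss of the 600-to-50-ohm minimum-loss pad (~16.63 dB):
pad_loss_db = 20 * math.log10(math.sqrt(600 / 50) + math.sqrt(600 / 50 - 1))

# Reference-level difference between the 600- and 50-ohm dBm scales (~-10.8 dB):
ref_correction = 10 * math.log10(50 / 600)

# Net factor to add to the 50-ohm FFT readout:
net_db = pad_loss_db + ref_correction
print(round(net_db, 2))  # ~5.83 dB
```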

Figure 3 Using the rescale function of the oscilloscope to recalibrate the instrument to read spectrum levels in dBm relative to a 600-Ω source. (Source: Art Pini)

The 600-Ω source is set to output a zero-dBm signal level. A 600-Ω to 50-Ω impedance matching pad with an insertion loss of 16.63 dB properly terminates the signal source into the oscilloscope’s 50-Ω input termination.  The oscilloscope’s rescale function is applied to the FFT of the acquired signal, adding 5.83 dB to the signal’s spectrum display.  This yields a near-zero dBm reading at 5 MHz.

The measurement parameter P1 measures the RMS input to the oscilloscope, showing the attenuation of the external impedance-matching pad. The peak-to-peak (P2) and peak voltage (P3) readings are also measured. The peak level of the 5-MHz signal spectrum (P4) is near zero dBm (22 milli-dB). The uncorrected peak spectrum level (P5) is -5.8 dBm.

The vertical scale of the spectrum display is now calibrated to match the 600-Ω source. Note that the signal at 5 MHz reads 0 dBm, which matches the signal source setting of 0 dBm (0.774 Vrms) into the expected 600-Ω load.

The decibel

Because it compresses a large dynamic range, the decibel is a useful unit of measure in various applications, mainly in the frequency domain. Converting between linear and logarithmic scaling takes some getting used to, and possibly a lot of math.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

Related Content

The post The DiaBolical dB appeared first on EDN.

Semiconductor technology trends and predictions for 2026

Fri, 12/19/2025 - 11:00

As we look ahead to 2026, we see intelligence increasingly being embedded within physical products and everyday interactions. This shift will be powered by rapid adoption of digital identity technologies such as near-field communication (NFC) alongside AI and agentic AI tools that automate workflows, improve efficiency, and accelerate innovation across the product lifecycle.

The sharp rise in NFC adoption—with 92% of brands already using or planning to use it in products in the next year—signals appetite to unlock the true value of the connected world. Enabling intelligence in new places gives brands the opportunity to bridge physical and digital experiences for positive social, commercial, and environmental outcomes.

Regulatory milestones, such as the phased rollout of the EU Digital Product Passport, along with sustainability pressures and the need to ensure transparency to drive trust, will be key catalysts for edge and item-level AI.

In the year ahead, companies will unlock significant benefits in customer experience, sustainability, compliance, and supply chain efficiency by embedding intelligence from the edge to individual items and devices.

Let’s dig deeper into the technology trends shaping 2026.

  1. Edge AI is the fastest growing frontier in semiconductors

Driven by the shift from pure inference to on-device training and continuous, adaptive learning, 2026 will see strong growth in edge AI demand. Specialized chips such as low-power machine learning accelerators, sensor-integrated chips, and memory-optimized chips will be used in consumer electronics, smart cities, and industrial IoT.

Next, new packaging approaches will become the proving ground for performance, cost efficiency, and miniaturization in intelligent edge devices.

  2. Item-level intelligence is accelerating digital transformation

Intelligence will not stop at the device. Over the next 12 months, low-cost sensing, NFC, and edge AI will push computation down to individual items.

The capability to gather real-time data at item level in a move away from batch data, combined with AI, will enable personalized experiences, automation, and predictive analytics across smart packaging, healthcare and wellness products, retail, and logistics. Applications include real-time tracking, AI-driven personalization, automated supply chain optimization, predictive maintenance, and dynamic authentication.

This marks a fundamental shift as every item becomes a data node and source of intelligence.

  3. Connected consumer experiences are driving breakthrough NFC adoption

NFC adoption is accelerating alongside the explosion of connected consumer experiences—from wearables and hearables to smart packaging, digital keys and wellness applications. NFC will become a central enabler of trust, personalization, and seamless connectivity.

Figure 1 NFC has become a key enabler in personalization-centric connectivity. Source: Pragmatic Semiconductor

As consumers increasingly expect intelligent product interaction, for example, to track provenance or engage with wellness apps to build a personalized profile and derive usable insights, the opportunity for NFC is clear. Brands will favor ultra-low-cost and thin NFC solutions—where flexible and ultra-thin semiconductors excel—to deliver frictionless, high-quality consumer experiences.

  4. Heterogeneous integration will unlock design innovation

Heterogeneous integration through chiplets, interposers, and die stacking will become the preferred approach for achieving higher density and improved yields. This is a key enabler for miniaturization and differentiated form factors in facilitating customization for edge AI.

At the same time, the rise of agentic AI-driven EDA tools will lower design barriers and fuel cost-effective innovation through natural language tools. This will ignite startup growth and increase demand for agile, cost-effective foundry design services.

  5. Compliance shifts from cost to competitive advantage

New regulatory frameworks such as Digital Product Passports, circularity, and Extended Producer Responsibility (EPR) will require authentication, traceability, and lifecycle visibility. Rather than a burden, this presents a strategic opportunity for competitive advantage and market expansion.

Embedded digital IDs with NFC capability allow businesses to secure product authentication, meet compliance and governance expectations, and unlock new value in consumer engagement. As compliance moves from paper systems to embedded intelligence, the opportunity will expand across consumer goods, industrial components, and supply chains.

  6. Energy constraints are driving efficiencies in semiconductor manufacturing

As semiconductor manufacturing scales to serve AI demand, growing energy consumption in data centers is forcing industry to focus on power-efficient architectures. This is accelerating a shift away from centralized compute toward fully distributed sensing and intelligence at the edge. Edge AI architectures are designed to process data locally rather than transmit it upstream and will be essential to sustaining AI growth without compounding energy constraints.

Figure 2 Semiconductor manufacturing will increasingly adopt circular design principles such as reuse, recycling, and recoverability. Source: Pragmatic Semiconductor

The capability to establish and scale domestic manufacturing will also play a critical role in cutting embedded emissions and enabling more sustainable and efficient supply chains. Semiconductor manufacturing facilities, known as foundries, will be evaluated on their energy and material efficiency, supported by circular design principles such as reuse, recycling, and recoverability.

Companies that can demonstrate strong environmental commitments will gain long-term competitive advantage, attracting customers, partners, and skilled talent.

Intelligence right to the edge

These trends point toward a definitive shift as intelligence moves dynamically into the physical world. Compute will become increasingly distributed and identity embedded, unlocking efficiencies and delivering real-time insights into the fabric of products, infrastructure, and supply chains.

Semiconductor manufacturing will sit at the heart of the next phase of digital transformation. Flexible and ultra-thin chip technologies will enable new classes of innovations, from emerging form factors such as wearables and hearables to higher functional density in constrained spaces, alongside more carbon-efficient manufacturing models.

The implications for businesses are clear. Companies can accelerate innovation, deepen consumer engagement, and turn compliance into a source of competitive advantage. Those that embed connected technologies into their 2026 strategy will be those that are best positioned to take advantage of the digital transformation opportunities ahead.

Richard Price is co-founder and chief technology officer of Pragmatic Semiconductor.

Related Content

The post Semiconductor technology trends and predictions for 2026 appeared first on EDN.

An off-the-shelf digital twin for software-defined vehicles

Thu, 12/18/2025 - 16:01

The complexity of vehicle hardware and software is rising at an unprecedented rate, so traditional development methodologies are no longer sufficient to manage system-level interdependencies among advanced driver assistance systems (ADAS), autonomous driving (AD), and in-vehicle infotainment (IVI) functions.

That calls for a new approach: one that enables automotive OEMs and tier 1s to speed the development of software-defined vehicles (SDVs) with early full-system virtual integration that mirrors real-world vehicle hardware. That will accelerate both application and low-level software development for ADAS, AD, and IVI and remove the need for design engineers to build their own digital twins before testing software.

It will also reduce time-to-market for critical applications from months to days. Siemens EDA has unveiled what it calls a virtual blueprint for digital twin development. PAVE360 Automotive, a digital twin software, is pre-integrated as an off-the-shelf offering to address the escalating complexity of automotive hardware and software integration.

While system-level digital twins for SDVs using existing technologies can be complex and time-consuming to create and validate, PAVE360 Automotive aims to deliver a fully integrated, system-level digital twin that can be deployed on day one. That reduces the time, effort, and cost required to build such environments from scratch.

Figure 1 PAVE360 Automotive is a cloud-based digital twin that accelerates system-level development for ADAS, autonomous driving, and infotainment. Source: Siemens EDA

“The automotive industry is at the forefront of the software-defined everything revolution, and Siemens is delivering the digital twin technologies needed to move beyond incremental innovation and embrace a holistic, software-defined approach to product development,” said Tony Hemmelgarn, president and CEO, Siemens Digital Industries Software.

Siemens EDA’s digital twin—a cloud-based off-the-shelf offering—allows design engineers to jumpstart vehicle systems development from the earliest phases with customizable virtual reference designs for ADAS, autonomous driving, and infotainment. Moreover, the cloud-based collaboration unifies development with a single digital twin for all design teams.

The Arm connection

Siemens EDA previously partnered with Arm to accelerate virtual environments for the Arm Cortex-A720AE in 2024 and Arm Zena Compute Subsystems (CSS) in 2025. Now Siemens EDA is integrating Arm Zena CSS with PAVE360 Automotive so that design engineers can start building on Arm-based designs faster and more seamlessly.

Figure 2 Here is how PAVE360’s digital twin works alongside the Arm Zena CSS platform for AI-defined vehicles. Source: Siemens EDA

Access to Arm Zena CSS in a digital twin environment such as PAVE360 Automotive can accelerate software development by up to two years. “With Arm Zena CSS available inside Siemens’ pre-integrated PAVE360 Automotive environment, partners can not only customize their solutions leveraging the unique flexibility of the Arm architecture but also validate and iterate much earlier in the development cycle,” said Suraj Gajendra, VP of products and solutions for Physical AI Business Unit at Arm.

PAVE360 Automotive, now available to key customers, is planned for general availability in February 2026. It will be demonstrated live at CES 2026 in the Auto Hall on 6–9 January 2026.

Related Content

The post An off-the-shelf digital twin for software-defined vehicles appeared first on EDN.

Wide-range tunable RC Schmitt trigger oscillator

Thu, 12/18/2025 - 15:00

In this Design Idea (DI), the classic Schmitt-trigger-based RC oscillator is “hacked” and analyzed using the simulation software QSPICE. You might reasonably ask why one would do this, given that countless such circuits reliably and unobtrusively clock away all over the world, even in space.

Well, problems arise when you want the RC oscillator to be tunable, i.e., when you replace the resistor with a potentiometer. Unfortunately, the frequency is inversely proportional to the RC time constant, resulting in a hyperbolic tuning curve.
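The hyperbolic tuning curve can be seen numerically with a short sketch. This is a first-order model assuming ideal TLC555-style thresholds at 1/3 and 2/3 of the supply and a rail-to-rail output; the capacitor value is illustrative, not taken from the article's schematic:

```python
import math

C = 10e-9  # 10 nF timing capacitor (illustrative value)

def f_osc(r_ohms):
    # Ideal Schmitt-trigger RC oscillator with thresholds at 1/3 and
    # 2/3 of the supply: each half-period is R*C*ln(2), so T = 2*R*C*ln(2).
    return 1.0 / (2.0 * r_ohms * C * math.log(2))

# Frequency falls hyperbolically with R: a 100:1 resistance span gives
# a 100:1 frequency span, crowded toward the low-resistance end.
for r in (1e3, 10e3, 100e3):
    print(f"R = {r/1e3:6.0f} kohm -> f = {f_osc(r):10.1f} Hz")
```

Note how most of the frequency change is packed into the first fraction of a linear potentiometer's travel, which is exactly the tuning problem the article sets out to fix.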

Wow the engineering world with your unique design: Design Ideas Submission Guide

Another drawback is the limited tuning range. For high frequencies, R can become so small that the Schmitt trigger’s output voltage sags unacceptably.

The oscillator’s current consumption also increases as the potentiometer resistance decreases. In practice, an additional resistor ≥1 kΩ must always be placed in series with the potentiometer.

The potentiometer’s maximum value determines the minimum frequency. For values >100 kΩ, jitter problems can occur due to hum interference when operating the potentiometer, unless a shielded enclosure is used.

RC oscillator

Figure 1 shows an RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger. It is parameterized as a TLC555 (CMOS 555) regarding switching thresholds and load behavior.

Figure 1 An RC oscillator modeled with QSPICE’s built-in behavioral Schmitt trigger, parameterized as a TLC555 (CMOS 555).

Figure 2 displays the typical triangle wave input and the square wave output. At R1=1 kΩ, the output voltage sag is already noticeable, and the average power dissipation of R1 is around 6 mW, roughly an order of magnitude higher than the dissipation of a low-power CMOS 555.

Figure 2 The typical triangle wave input and the square wave output, where the average power dissipation of R1 is around 6 mW.

Frequency response versus potentiometer resistance

Next, we examine the oscillator’s frequency response as a function of the potentiometer resistance. R1 is simulated in 100 steps from 1 kΩ to 100 kΩ using the .step param command.

The simulation time must be long enough to capture at least one full period even at the lowest frequency; otherwise, the period duration cannot be measured with the .meas command.

However, with a 3-decade tuning range, far too many periods would be simulated at high frequencies, making the simulation run for a very long time.

Fortunately, QSPICE has a new feature that allows a running simulation to be aborted, after which the new simulation for the next parameter step is executed. The abort criterion is a behavioral voltage source called AbortSim(). It’s not the most elegant or intuitive feature, but it works.

Schmitt trigger oscillator

Figure 3 shows our Schmitt trigger oscillator, but this time with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim(). My idea was to build a counter clocked by the oscillator. After a small number of clock pulses—enough for one period measurement—the simulation is aborted.

Figure 3 Schmitt trigger oscillator, this time, with the parameter stepping of R1, the .meas commands for period and frequency measurement, and an auxiliary circuit that triggers AbortSim().

I first tried a 3-stage ripple counter with behavioral D-flops. This worked but wasn’t optimal in terms of computation time.

The step voltage generator in the box in Figure 3 is faster and easier to adjust. A 10-ns monostable is triggered by V(out) of the oscillator and sends short current pulses via the voltage-controlled current source to capacitor C3. The voltage across C3 triggers AbortSim() at ≥0.5 V.

The constant current and C3 are selected so that the 0.5 V threshold is reached after 3 clock cycles of the oscillator, thus starting the next measurement.
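The sizing described above reduces to a quick charge-balance calculation. The C3 value here is an assumed example, not the one from the actual schematic; the threshold, cycle count, and pulse width follow the text:

```python
C3 = 1e-9        # abort capacitor (assumed example value, 1 nF)
V_TH = 0.5       # AbortSim() trigger threshold, volts
N_CYCLES = 3     # oscillator periods before the simulation aborts
T_PULSE = 10e-9  # monostable pulse width, 10 ns

# Charge each pulse must deposit so C3 crosses V_TH after N_CYCLES pulses
q_pulse = C3 * V_TH / N_CYCLES
# Required amplitude of the constant-current pulse
i_pulse = q_pulse / T_PULSE

print(f"charge per pulse: {q_pulse*1e12:.1f} pC, pulse current: {i_pulse*1e3:.1f} mA")
```

Scaling C3 or the pulse current trades off how many oscillator periods are captured before each parameter step is cut short.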

Note that the simulation time in the .tran command is set to 5 s, which is never reached due to AbortSim().

The entire QSPICE simulation of the frequency response takes the author’s PC a spectacular 1.5 s, whereas previously with LTspice (without the abort criterion) it took many minutes.

Figure 4 shows the frequency (FREQ) versus potentiometer resistance (RPOT) curve in a log plot, interpolated over 100 measurement points.

Figure 4 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points.

Final circuit hack

Now that we have the simulation tools for fast frequency measurement, we finally get to the circuit hack in Figure 5. We expand the circuit in Figure 1 with a resistor R2=RPOT in series with C1.

Figure 5 Hacked Schmitt trigger oscillator with an expanded Figure 1 circuit that includes R2=RPOT in series with C1.

Figure 6 illustrates what happens: for R2=0 (blue trace), we see the familiar triangle wave. When R2 is increased (magenta trace), a voltage divider:

V(out)/V(R2) = (R1+R2)/R2

is created if we momentarily ignore V(C1). V(R2) is thus a scaled-down V(out) square wave signal, to which the V(C1) triangle wave voltage is now added.

Figure 6 The typical triangle wave input with the output now reaching very high frequencies without excessively loading V(OUT).

Because the upper and lower switching thresholds of the Schmitt trigger are constant, V(C1) reaches these thresholds faster as V(R2) increases. As V(R2) approaches the Schmitt trigger hysteresis VHYST, the V(C1) triangle wave shrinks and the frequency increases.

At V(R2)=VHYST, the frequency would theoretically become infinite. This condition in the original circuit in Figure 1 would mean R1=0, leading to infinitely high I(out). The circuit hack thus allows very high frequencies without excessively loading V(OUT)!
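A rough first-order model of this behavior: the span the C1 triangle must traverse shrinks from VHYST to VHYST minus the peak-to-peak square-wave contribution of R2, so frequency scales roughly as the reciprocal of the remaining span. This sketch ignores the exponential shape of the RC charging curve, and the hysteresis value is an assumed example:

```python
V_HYST = 1.67  # assumed Schmitt hysteresis, volts (TLC555-style: VDD/3 at VDD = 5 V)

def freq_scale(v_r2_pp):
    # First-order estimate of relative frequency versus the peak-to-peak
    # square-wave amplitude V(R2) adds at the Schmitt trigger input.
    span = V_HYST - v_r2_pp
    if span <= 0:
        raise ValueError("V(R2) must stay below the hysteresis")
    return V_HYST / span

for v in (0.0, 0.8, 1.5, 1.6):
    print(f"V(R2)pp = {v:4.2f} V -> f/f0 ~ {freq_scale(v):6.1f}")
```

The model reproduces the qualitative behavior in the text: unity scaling at V(R2) = 0 and a steep rise toward infinity as V(R2) approaches VHYST.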

The problem of the steep frequency rise towards infinity at the “end” of the potentiometer still remains. To fix this, we would need a potentiometer that changes its value significantly at the beginning of its range and only slightly at the end. This is easily achieved by wiring a much smaller resistor in parallel with the potentiometer.

Fixing steep frequency rise

In Figure 7, we see a second hack: R1 has been given a very large value.

Figure 7 Giving R1 a large value keeps the circuit’s current consumption low, allowing RPOT to be dimensioned independently of R1.

This keeps the circuit’s current consumption low, especially at high frequencies. The square wave voltage at RPOT is now taken directly from V(OUT) via a separate voltage divider. This allows RPOT to be dimensioned independently of R1.

In the example, I used a common 100 kΩ potentiometer. The remaining resistors are effectively in parallel with the potentiometer regarding AC signals and set the desired characteristic curve.

Despite all measures, the frequency increase is still quite steep at the end of the range, so a 1 kΩ trimmer is recommended for practical application to conveniently set the maximum frequency.

Figure 8 shows the frequency curve of the final circuit. Compared to the curve of the original circuit in Figure 4, a significantly flatter curve profile is evident, along with a larger tuning range.

Figure 8 Frequency versus potentiometer resistance curve in a log plot, interpolated over 100 measurement points, showing a flatter curve profile and larger tuning range.

Uwe Schüler is a retired electronics engineer. When he’s not busy with his grandchildren, he enjoys experimenting with DIY music electronics.

 Related Content

The post Wide-range tunable RC Schmitt trigger oscillator appeared first on EDN.

Enabling a variable output regulator to produce 0 volts? Caveat, designer!

Wed, 12/17/2025 - 15:00

For some time now, many of EDN’s Design Ideas (DIs) have dealt with ground-referenced, single-power-supplied voltage regulators whose outputs can be configured to produce zero or near-zero volts [1][2].

In this mode of operation, regulation in response to an AC signal is problematic because the regulator output voltage can’t be more negative than zero. For the many regulators with totem-pole outputs, the best we could hope for at zero volts is that the ground-side MOSFET remains indefinitely enabled and the high side disabled. But that’s not a regulator; it’s a switch.

Wow the engineering world with your unique design: Design Ideas Submission Guide

There might be some devices that act this way when asked to produce 0 volts, but in general, the best that could be hoped for is that the output is simply disabled. In such a case, a load that is solely an energy sink would pull the voltage to ground (woe unto any that are energy sources!).

But is it lollipops and butterflies all the way down to and including zero volts? I decided to test one regulator to see how it behaves.

Testing the regulator

A TPS54821EVM-049 evaluation module employs a TPS54821 buck regulator. I’ve configured its PCB for 6.3-V out and connected it to an 8-Ω load. I’ve also connected a function generator through a 22 kΩ resistor to the regulator’s V_SNS (feedback) pin.

The generator is set to produce a 360 mVp-p square wave across the load. It also provides a variable offset voltage, which is used to set the minimum voltage Vmin of the regulator output’s square-wave. Figure 1 contains several screenshots of regulator operation while it’s configured for various values of Vmin.

Figure 1 Oscilloscope screenshots with Vmin set to (a) 400 mV, (b) 300 mV, (c) 200 mV, (d) 100 mV, (e) 30 mV, (f) 0 mV, and (g) below 0 mV. See text for further discussion. The scales of each screenshot are 100 mV and 1 ms per large division, except (g), whose timescale is 100 µs per large division.

As can be seen, the output is relatively clean when Vmin is 400 mV but gets progressively noisier as Vmin is reduced in 100-mV steps down to 100 mV (Figures 1a–1d).

But the real problems start when Vmin is set to about 30 mV and some kind of AC signal replaces what would preferably be a DC one; the regulator is switching between open and closed-loop operation (Figure 1e).  

We really get into the swing of things when Vmin is set to 0 mV and intermittent signals of about 150 mVp-p arise and disappear (Figure 1f). As the generator continues to be changed in the direction that would drive the regulator output more negative if it were capable, the amplitude of the regulator’s ringing immediately following the waveform’s falling edge increases (Figure 1g). Additionally, the overshoot of its recovery increases.

Why isn’t it on the datasheet?

This behavior might or might not disturb you. But it exists. And there are no guarantees that things would not be worse with different lots of TPS54821 or other switcher or linear regulator types altogether. These could be operating with different loads, feedback networks, and input voltage supplies with varying DC levels and amounts of noise.

There might be a very good reason that typical datasheets don’t discuss operation with output voltages below their references—it might not be possible to specify an output voltage below which all is guaranteed to work as desired. Or maybe it is.

But if it is, then why aren’t such capabilities mentioned? Where is there an IC manufacturer’s datasheet whose first page does not promise to kiss you and offer you a chocolate before you go to bed? (That is, list every possible feature of a product to induce you to buy it.)

Finding the lowest guaranteed output level

Consider a design whose intent is to allow a regulator to produce a voltage near or at zero. Absent any help from the regulator’s datasheet, I’m not sure I’d know how to go about finding a guaranteed output level below which bad things couldn’t happen.

But suppose this could be done. The “Gold-Plated” [1] DI was updated under this assumption. It provides a link to a spreadsheet that accepts the regulator reference voltage and its tolerance, a minimum allowed output voltage, a desired maximum one, and the tolerance of the resistors to be used in the circuit.

It calculates standard E96 resistor values of a specified precision along with the limits of both the maximum and the minimum output voltage ranges [3].  

“Standard” regulator results

A similar spreadsheet has been created for the more general “standard” regulator circuit in Figure 2. That latter can be found at [4].

Figure 2 The “standard” regulator in which a reference voltage Vext, independent of the regulator, is used in conjunction with Rg2 to drive the regulator output to voltages below its reference voltage. For linear regulators, L1 is replaced with a short.

The spreadsheet [4] was run with the following requirements in Figure 3.

Figure 3 Sample input requirements for the spreadsheet to calculate the resistor values and minimum and maximum output voltage range limits for a Standard regulator design.

The spreadsheet’s calculated voltage limits are shown in Figure 4.

Figure 4 Spreadsheet calculations of the minimum and maximum output voltage range limits for the requirements of Figure 3.

A Monte Carlo simulation was run 10,000 times. The limits were confirmed to be close to and within the calculated ones (Figure 5).

Figure 5 Monte Carlo simulation results confirming the limits were consistent with the calculated ones.

A visual of the Monte Carlo results is helpful (Figure 6).

Figure 6 A graph of the Monte Carlo minimum output voltage range and the maximum one for the standard regulator. See text.

The minimum range is larger than the maximum range. This is because two large signals with tolerances are being subtracted to produce relatively small ones. The signals’ nominal values interfere destructively as intended. Unfortunately, the variations due to the tolerances of the two references do not:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 ) – Vext · PWM · Rf/Rg2

“Gold-Plated” regulator results

When I released the “Gold-Plated” DI whose basic concept is seen in Figure 7, I did so as a lark. But after applying the aforementioned “standard” regulator’s design criteria to the Gold-Plated design’s spreadsheet [3], it became apparent that the Gold-Plated design has a real value—its ability to more greatly constrain the limits of the minimum output voltage range.

Figure 7 The basic concept of the Gold-Plated regulator. K = 1 + R3/R4 .

The input to the Gold-Plated spreadsheet is shown in Figure 8.

Figure 8 The inputs to the Gold-Plated spreadsheet.

Its calculations of the minimum and maximum output voltage range limits are shown in Figure 9.

Figure 9 The results for the “Gold-Plated” spreadsheet showing maximum and minimum voltage range limits when PWM inputs are at minimum and maximum duty cycles.

The limits resulting from its 10,000-run Monte Carlo simulation were again confirmed to be close to and within those calculated by the spreadsheet:

Figure 10 Monte Carlo simulation results of the Gold-Plated spreadsheet, confirming the limits were consistent with the calculated ones.

Again, a visual is helpful, with the Gold-Plated results on the left and the Standard on the right.

 Figure 11 Graphs of the Monte Carlo simulation results of the Gold-Plated (left) and Standard (right) designs. The minimum voltage range of the Gold-Plated design is far smaller than that of the Standard.

The Standard regulator’s minimum range magnitude is 161 mV, while that of the Gold-Plated version is only 33 mV. The Gold-Plated design’s advantage increases as the desired Vmin approaches 0 V. Its benefits are due to the fact that only a single reference is involved in the subtraction of terms:

OUT = Vref · ( 1 + Rf/Rg1 + Rf/Rg2 · PWM · ( 1 – K ) )
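A minimal Monte Carlo sketch illustrates why the single-reference subtraction wins, using the two output equations above evaluated at their minimum-output operating point (PWM = 1). All component values, tolerances, and the choice of K here are assumed for illustration and do not reproduce the article's actual spreadsheet design:

```python
import random

random.seed(1)

def tol(nom, pct):
    # Sample a value uniformly within +/-pct of nominal
    return nom * (1.0 + random.uniform(-pct, pct))

# Assumed example values (not the article's design)
VREF, VEXT = 0.6, 5.0               # internal reference, external reference
RF, RG1, RG2 = 10e3, 1e3, 6.67e3
K = 1.0 + (1 + RF / RG1) / (RF / RG2)  # chosen so Gold-Plated nominal Vmin ~ 0
REF_TOL, R_TOL = 0.01, 0.001        # 1% references, 0.1% resistors

def standard_vmin(vr, ve, rf, rg1, rg2):   # PWM = 1
    return vr * (1 + rf / rg1 + rf / rg2) - ve * rf / rg2

def gold_vmin(vr, rf, rg1, rg2):           # PWM = 1; K treated as exact
    return vr * (1 + rf / rg1 + (rf / rg2) * (1.0 - K))

std, gold = [], []
for _ in range(10_000):
    rf, rg1, rg2 = tol(RF, R_TOL), tol(RG1, R_TOL), tol(RG2, R_TOL)
    std.append(standard_vmin(tol(VREF, REF_TOL), tol(VEXT, REF_TOL), rf, rg1, rg2))
    gold.append(gold_vmin(tol(VREF, REF_TOL), rf, rg1, rg2))

std_range = max(std) - min(std)
gold_range = max(gold) - min(gold)
print(f"standard Vmin spread: {std_range*1e3:.0f} mV, "
      f"Gold-Plated Vmin spread: {gold_range*1e3:.0f} mV")
```

In the Standard case, two independently toleranced references are subtracted, so their errors add; in the Gold-Plated case, the single Vref multiplies a near-zero resistor-ratio term, and the minimum-output spread collapses accordingly.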

Belatedly, another advantage of the Gold-Plated was discovered: When a load is applied to any regulator, its output voltage falls by a small amount, causing a reduction of ΔV at the Vref feedback pin.

In the Gold-Plated, there is an even larger reduction at the output of its op-amp because of its gain. The result is a reduced drop across Rg2. This acts to increase the output voltage, improving load regulation.

In contrast, while the Standard regulator also sees a ΔV drop at the feedback pin, the external regulator voltage remains steady. The result is an increase in the drop across Rg2, further reducing the output voltage and degrading load regulation.

Summing up

The benefits of the Gold-Plated design are clear, but it’s not a panacea. Whether a Gold-Plated or Standard design is used, designers still must address the question: How low should you go? Caveat, designer!

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content/References

  1. Gold-Plated PWM-control of linear and switching regulators
  2. Accuracy loss from PWM sub-Vsense regulator programming
  3. Gold-Plated DI Github
  4. Enabling a variable output regulator to produce 0 volts DI Github

The post Enabling a variable output regulator to produce 0 volts? Caveat, designer! appeared first on EDN.

Why memory swizzling is a hidden tax on AI compute

Wed, 12/17/2025 - 13:47

Walk into any modern AI lab, data center, or autonomous vehicle development environment, and you’ll hear engineers talk endlessly about FLOPS, TOPS, sparsity, quantization, and model scaling laws. Those metrics dominate headlines and product datasheets. If you spend time with the people actually building or optimizing these systems, a different truth emerges: Raw arithmetic capability is not what governs real-world performance.

What matters most is how efficiently data moves. And for most of today’s AI accelerators, data movement is tangled up with something rarely discussed outside compiler and hardware circles: memory swizzling.

Memory swizzling is one of the biggest unseen taxes paid by modern AI systems. It doesn’t enhance algorithmic processing efficiency. It doesn’t improve accuracy. It doesn’t lower energy consumption. It doesn’t produce any new insight. Rather, it exists solely to compensate for architectural limitations inherited from decades-old design choices. And as AI models grow larger and more irregular, the cost of this tax is growing.

This article looks at why swizzling exists, how we got here, what it costs us, and how a fundamentally different architectural philosophy, specifically, a register-centric model, removes the need for swizzling entirely.

The problem nobody talks about: Data isn’t stored the way hardware needs it

In any AI tutorial, tensors are presented as ordered mathematical objects that sit neatly in memory in perfect layouts. These layouts are intuitive for programmers, and they fit nicely into high-level frameworks like PyTorch or TensorFlow.

The hardware doesn’t see the world this way.

Modern accelerators—GPUs, TPUs, and NPUs—are built around parallel compute units that expect specific shapes of data: tiles of fixed size, strict alignment boundaries, sequences with predictable stride patterns, and arranged in ways that map into memory banks without conflicts.

Unfortunately, real-world tensors never arrive in those formats. Before the processing even begins, data must be reshaped, re-tiled, re-ordered, or re-packed into the format the hardware expects. That reshaping is called memory swizzling.

You may think of it this way: The algorithm thinks in terms of matrices and tensors; the computing hardware thinks in terms of tiles, lanes, and banks. Swizzling is the translation layer—a translation that costs time and energy.

Why hierarchical memory forces us to swizzle

Virtually every accelerator today uses a hierarchical memory stack whose layers, from the top down, comprise registers; shared or scratchpad memory; L1, L2, and sometimes L3 cache; high-bandwidth memory (HBM); and, at the bottom of the stack, external dynamic random-access memory (DRAM).

Each level has a different size, latency, bandwidth, and access energy and, rather importantly, different alignment constraints. This is a legacy of CPU-style architecture, where caches hide memory latency. See Figure 1 and Table 1.

Figure 1 See the capacity and bandwidth attributes of a typical hierarchical memory stack in all current hardware processors. Source: VSORA

Table 1 Capacity, latency, bandwidth, and access energy dissipation of a typical hierarchical memory stack in all current hardware processors are shown here. Source: VSORA

GPUs inherited this model, then added single-instruction multiple-thread (SIMT) execution on top. That makes them phenomenally powerful—but also extremely sensitive to how data is laid out. If neighboring threads in a warp don’t access neighboring memory locations, performance drops dramatically. If tile boundaries don’t line up, tensor cores stall. If shared memory bank conflicts occur, everything waits.

TPUs suffer from similar constraints, just with different mechanics. Their systolic arrays operate like tightly choreographed conveyor belts. Data must arrive in the right order and at the right time. If weights are not arranged in block-major format, the systolic fabric can’t operate efficiently.

NPU-based accelerators—from smartphone chips to automotive systems—face the same issues: multi-bank SRAMs, fixed vector widths, and 2D locality requirements for vision workloads. Without swizzling, data arrives “misaligned” for the compute engine, and performance nosedives.

In all these cases, swizzling is not an optimization—it’s a survival mechanism.

The hidden costs of swizzling

Swizzling takes time, sometimes a lot

In real workloads, swizzling often consumes 20–60% of the total runtime. That’s not a typo. In a convolutional neural network, half the time may be spent doing NHWC ↔ NCHW conversions; that is, two different ways of laying out 4D tensors in memory. In a transformer, vast amounts of time are wasted into reshaping Q/K/V tensors, splitting heads, repacking tiles for GEMMs, and reorganizing outputs.

Swizzling burns energy and energy is the real limiter

A single MAC consumes roughly a quarter of a picojoule, while moving a value from DRAM can cost 500 picojoules. In other words, fetching data from DRAM dissipates roughly three orders of magnitude more energy than performing a basic multiply-accumulate operation.

Swizzling requires reading large blocks of data, rearranging them, and writing them back. And this often happens multiple times per layer. When 80% of your energy budget goes to moving data rather than computing on it, swizzling becomes impossible to ignore.
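Back-of-the-envelope arithmetic makes the point concrete. The per-operation energies are the rough figures quoted above; the 1,024-cubed GEMM and the single extra swizzle pass are hypothetical:

```python
MAC_PJ = 0.25    # energy per multiply-accumulate, picojoules (rough figure)
DRAM_PJ = 500.0  # energy per value moved to/from DRAM, picojoules (rough figure)

M = N = K = 1024              # hypothetical square GEMM: C = A @ B
macs = M * N * K
compute_pj = macs * MAC_PJ

# One extra swizzle pass over A and B through DRAM:
# read + write every element of both operands once
moved_values = 2 * (M * K + K * N)
swizzle_pj = moved_values * DRAM_PJ

print(f"compute energy: {compute_pj/1e6:8.1f} uJ")
print(f"swizzle energy: {swizzle_pj/1e6:8.1f} uJ")
print(f"DRAM access / MAC energy ratio: {DRAM_PJ / MAC_PJ:.0f}x")
```

Under these assumptions, a single layout-repacking pass through DRAM costs several times the energy of the entire matrix multiply it serves, which is why data movement, not arithmetic, sets the power budget.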

Swizzling inflates memory usage

Most swizzling requires temporary buffers: packed tiles, staging buffers, and reshaped tensors. These extra memory footprints can push models over the limits of L2, L3, or even HBM, forcing even more data movement.

Swizzling makes software harder and less portable

Ask a CUDA engineer what keeps him up at night. Ask a TPU compiler engineer why XLA is thousands of pages deep in layout inference code. Ask anyone who writes an NPU kernel for mobile why they dread channel permutations.

It’s swizzling. The software must carry enormous complexity because the hardware demands very specific layouts. And every new model architecture—CNNs, LSTMs, transformers, and diffusion models—adds new layout patterns that must be supported.

The result is an ecosystem glued together by layout heuristics, tensor transformations, and performance-sensitive memory choreography.

How major architectures became dependent on swizzling

  1. Nvidia GPUs

Tensor cores require specific tile-major layouts. Shared memory is banked; avoiding bank conflicts requires swizzling. Warps must coalesce memory accesses; otherwise, efficiency tanks. Even cuBLAS and cuDNN, the most optimized GPU libraries on Earth, are filled with internal swizzling kernels.

  2. Google TPUs

TPUs rely on systolic arrays. The flow of data through these arrays must be perfectly ordered. Weights and activations are constantly rearranged to align with the systolic fabric. Much of XLA exists simply to manage data layout.

  3. AMD CDNA, ARM Ethos, Apple ANE, and Qualcomm AI engine

Every one of these architectures performs swizzling: Morton tiling, interleaving, channel stacking, and so on. It’s a universal pattern. Every architecture that uses hierarchical memory inherits the need for swizzling.

A different philosophy: Eliminating swizzling at the root

Now imagine stepping back and rethinking AI hardware from first principles. Instead of accepting today’s complex memory hierarchies as unavoidable—the layers of caches, shared-memory blocks, banked SRAMs, and alignment rules—imagine an architecture built on a far simpler premise.

What if there were no memory hierarchy at all? What if, instead, the entire system revolved around a vast, flat expanse of registers? What if the compiler, not the hardware, orchestrated every data movement with deterministic precision? And what if all the usual anxieties—alignment, bank conflicts, tiling strategies, and coalescing rules—simply disappeared because they no longer mattered?

This is the philosophy behind a register-centric architecture. Rather than pushing data up and down a ladder of memory levels, data simply resides in the registers where computation occurs. The architecture is organized not around the movement of data, but around its availability.

That means:

  • No caches to warm up or miss
  • No warps to schedule
  • No bank conflicts to avoid
  • No tile sizes to match
  • No tensor layouts to respect
  • No sensitivity to shapes or strides, and therefore no swizzling at all

In such a system, the compiler always knows exactly where each value lives, and exactly where it needs to be next. It doesn’t speculate, prefetch, tile, or rely on heuristics. It doesn’t cross its fingers hoping the hardware behaves. Instead, data placement becomes a solvable, predictable problem.

The result is a machine where throughput remains stable, latency becomes predictable, and energy consumption collapses because unnecessary data motion has been engineered out of the loop. It’s a system where performance is no longer dominated by memory gymnastics—and where computing, the actual math, finally takes center stage.

The future of AI: Why a register-centric architecture matters

As AI systems evolve, the tidy world of uniform tensors and perfectly rectangular compute tiles is steadily falling away. Modern models are no longer predictable stacks of dense layers marching in lockstep. Instead, they expand in every direction: They ingest multimodal inputs, incorporate sparse and irregular structures, reason adaptively, and operate across ever-longer sequences. They must also respond in real time for safety-critical applications, and they must do so within tight energy budgets—from cars to edge devices.

In other words, the assumptions that shaped GPU and TPU architectures—the expectation of regularity, dense grids, and neat tiling—are eroding. The future workloads are simply not shaped the way the hardware wants them to be.

A register-centric architecture offers a fundamentally different path. Because it operates directly on data where it lives, rather than forcing that data into tile-friendly formats, it sidesteps the entire machinery of memory swizzling. It does not depend on fixed tensor shapes.

It doesn’t stumble when access patterns become irregular or dynamic. It avoids the costly dance of rearranging data just to satisfy the compute units. And as models grow more heterogeneous and more sophisticated, such an architecture scales with their complexity instead of fighting against it.

This is more than an incremental improvement. It represents a shift in how we think about AI compute. By eliminating unnecessary data movement—the single largest bottleneck and energy sink in modern accelerators—a register-centric approach aligns hardware with the messy, evolving reality of AI itself.

Memory swizzling is the quiet tax that every hierarchical-memory accelerator pays. It is fundamental to how GPUs, TPUs, NPUs, and nearly all AI chips operate. It’s also a growing liability. Swizzling introduces latency, burns energy, bloats memory usage, and complicates software—all while contributing nothing to the actual math.

One register-centric architecture eliminates swizzling at the root by removing the hierarchy that makes it necessary. It replaces guesswork and heuristics with deterministic dataflow. It prioritizes locality without requiring rearrangement. It lets the algorithm drive the hardware, not vice versa.

As AI workloads become more irregular, dynamic, and power-sensitive, architectures that keep data stationary and predictable—rather than endlessly reshuffling it—will define the next generation of compute.

Swizzling was a necessary patch for the last era of hardware. It should not define the next one.

Lauro Rizzatti is a business advisor to VSORA, a technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.

 

Related Content

The post Why memory swizzling is a hidden tax on AI compute appeared first on EDN.

Ignoring the regulator’s reference

Tue, 12/16/2025 - 15:00

DAC control (via PWM or other source) of regulators is a popular design topic here in editor Aalyia Shaukat’s Design Ideas (DIs) corner. There’s even a special aspect of this subject that frequently provokes enthusiastic and controversial (even contentious) exchanges of opinion.

It’s the many and varied possibilities for integrating the regulator’s internal voltage reference into the design. The discussion tends to be extra energetic (and the resulting circuitry complex) when the design includes generating output voltages lower than the regulator’s internal reference.

Wow the engineering world with your unique design: Design Ideas Submission Guide

What can be done to make the discussion less complicated (and heated)?

An old rule of thumb suggests that when one facet of a problem makes the solution complex, sometimes a simple (and better) solution can be found by just ignoring that facet. So, I decided, just for fun, to give it a try with the regulator reference problem. Figure 1 shows the result.

Figure 1 DAC control of a regulator while ignoring its internal voltage reference where Vo = Vomax*(Vdac/Vdacmax). *±0.1%

Figure 1’s simple theory of operation revolves around the A1 differential amplifier.

Vo = Vomax(Vdac/Vdacmax)
If Vdacmax >= Vomax then R1a = R5/((Vdacmax/Vomax) – 1), omit R1b
If Vomax >= Vdacmax then R1b = R2/((Vomax/Vdacmax) – 1), omit R1a
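The selection rules above can be coded directly. This hypothetical helper implements the two design equations; the equal-voltage case, where the formulas divide by zero and no attenuation is needed, is handled explicitly as an assumption:

```python
def attenuator_resistor(vdac_max, vo_max, r5, r2):
    """Return which resistor to fit (R1a or R1b) and its value in ohms,
    per the design equations; returns None when Vdacmax == Vomax
    (assumed: no attenuation needed, omit both)."""
    if vdac_max == vo_max:
        return None
    if vdac_max > vo_max:
        # R1a = R5 / ((Vdacmax/Vomax) - 1), omit R1b
        return ("R1a", r5 / (vdac_max / vo_max - 1.0))
    # R1b = R2 / ((Vomax/Vdacmax) - 1), omit R1a
    return ("R1b", r2 / (vo_max / vdac_max - 1.0))

# Example: a 5-V full-scale DAC controlling a 2.5-V maximum output, R5 = 10 kohm
print(attenuator_resistor(5.0, 2.5, 10e3, 10e3))
```

With Vdacmax twice Vomax, the ratio term is exactly 1 and R1a simply equals R5; other ratios scale the attenuator accordingly.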

A1 subtracts suitably attenuated versions of the control input signal (Vdac) from U1’s output voltage (Vo) and integrates the difference via the R3C3 feedback pair. The resulting negative feedback supplied to U1’s Vsense pin is independent of the Vsense voltage and is therefore independent of U1’s internal reference.

With the contribution of accuracy (and inaccuracy) from U1’s internal reference thus removed, the problem of integrating it into the design is therefore likewise removed. 

It turns out that the potential for really good precision is actually improved by ignoring the regulator reference, because internal references are seldom better than 1% anyway.

With the Figure 1 circuit, accuracy is ultimately limited only by the DAC, and very high-precision DACs can be assembled at reasonable cost. For an example, see “A nice, simple, and reasonably accurate PWM-driven 16-bit DAC.”

Another nice feature is that Figure 1 leaks no pesky bias current into the feedback network. This bias is typically scores of microamps and can prevent the output from getting any closer than tens of millivolts to a true zero when the output load is light. No such problem exists here, unless picoamps count (hint: they don’t).

And did I mention it’s simple? 

Oh yeah. About R6: depending on the voltage supplied to A1’s pin 8 and the absmax rating of U1’s Vsense pin, the possibility of an overvoltage might exist. If so, adjust the R4:R6 ratio to prevent that. Otherwise, omit R6.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

The post Ignoring the regulator’s reference appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

Tue, 12/16/2025 - 15:00

The Universal Serial Bus (USB) started out as a data interface, but it didn’t take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now, it can deliver up to 240 W over USB Type-C cables and connectors, carrying power, data, and video. This capability, known as Extended Power Range (EPR), was introduced by the USB Implementers Forum in the USB Power Delivery Specification 3.1 (USB PD 3.1). EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.

USB PD 3.1 has an adjustable voltage supply mode, allowing for intermediate voltages between 9 V and the highest fixed voltage of the charger. This allows for greater flexibility by meeting the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions, including legacy operation at 15 W (5 V/3 A) and the standard power range mode of up to 100 W (20 V/5 A).

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.
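
From the sink’s side, that negotiation can be sketched as choosing the lowest-power profile that still covers the device’s needs from the source’s advertised fixed power data objects (PDOs). The PDO list and selection rule below are illustrative assumptions, not the PD specification’s actual algorithm:

```python
# Fixed (voltage, max current) pairs an EPR source might advertise;
# illustrative, not a complete or mandated capability set.
EPR_FIXED_PDOS = [
    (5, 3), (9, 3), (15, 3), (20, 5), (28, 5), (36, 5), (48, 5),
]

def pick_pdo(pdos, needed_watts):
    """Return the lowest-power PDO that still covers the request,
    or None if the source cannot supply it."""
    candidates = [(v, i) for v, i in pdos if v * i >= needed_watts]
    return min(candidates, key=lambda p: p[0] * p[1]) if candidates else None

print(pick_pdo(EPR_FIXED_PDOS, 140))  # (28, 5): the 140-W EPR level
print(pick_pdo(EPR_FIXED_PDOS, 60))   # (20, 5): 100 W covers a 60-W need
```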

The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A MCP2221A breakout board USB) with the GUI allows different configurations to be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC with Microsoft Windows operating system 7–11 and a USB 2.0 port. The GUI then shows parameters, board status, and faults, and enables user configuration.

DCP board components

With its two ports, the board contains two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)

Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip, or USB hub. The MCP22301 is an integrated PD device combining the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+, with an added MCP22350 PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding against faults while also protecting the integrity of the charging process and the data transfer (Figure 4).

A USB Type-C connector traditionally embeds the D+/D– data lines (USB2), Rx/Tx lines for USB3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator, providing a 5-V rail, and an MCP1825 LDO linear regulator, providing a 3.3-V auxiliary rail.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology to supply USB loads and charge them at their required voltage within a permitted range, regardless of the input voltage. It is configured to independently regulate the output voltage and current for each USB channel (its individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to the USB-C Port 1 (on the PCB right side; this is the higher one), the USB communication starts and the MCP19061 DCP board displays the charging profiles under the Port 1 window.

If another USB PD load is connected to the USB-C Port 2, the Port 2 window gets populated the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The operation of the MCP19061 enables efficient power conversion with the capability to operate in buck (step-down), boost (step-up), and buck-boost topologies for various voltage levels that are lower, higher, or the same as the input voltage. It provides excellent precision and efficiency in power conversions for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
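
The ideal steady-state math behind a four-switch buck-boost stage is compact: buck when the output is below the input, boost when it is above. This is a hedged sketch of the textbook relationships, ignoring losses and the buck-boost transition region, not the MCP19061’s actual control law:

```python
def duty_cycle(vin, vout):
    """Ideal continuous-conduction duty cycle for a four-switch
    buck-boost converter (losses ignored)."""
    if vout <= vin:
        return ("buck", vout / vin)        # buck: Vout = D * Vin
    return ("boost", 1.0 - vin / vout)     # boost: Vout = Vin / (1 - D)

print(duty_cycle(12.0, 5.0))   # buck mode, D around 0.42
print(duty_cycle(12.0, 20.0))  # boost mode, D = 0.4
```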

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

Troubleshooting often involves conflicting symptoms and scenarios

Tue, 12/16/2025 - 11:54

I’ve always regarded debugging and troubleshooting as the most challenging of all hands-on engineering skills. They’re not formally taught; they’re usually learned through hands-on experience (often the hard way), and almost every case is different. The list of reasons why debugging and troubleshooting are often so difficult is a long one.

In some cases, there’s the “aha” moment when the problem is clearly identified and knocked down, but in many other cases, you are “pretty sure” you’ve got the problem but not completely so.

Note that I distinguish between debugging and troubleshooting. The former is when you are working on a breadboard or prototype that is not working and perhaps has never fully worked; it’s in the design phase. The latter is when a tested, solid product with some track record and field exposure misbehaves or fails in use. Each has its own starting points and constraints, but the terms are used interchangeably by many people.

Every engineer or technician has his or her own horror story of an especially challenging situation. It’s especially frustrating when there is no direct, consistent one-to-one link between observed symptoms and root cause(s). There are multiple cause/effect scenarios:

  • Clarity: The single-problem, single-effect situation—generally, the easiest to deal with.
  • Causality: A cause-and-effect chain, where one problem (often not directly visible) triggers a second, more visible one.
  • Correlation: Two apparent problems with one common cause—or maybe the observed symptoms are unrelated? It’s also easy to assume that correlation implies causality, but that is often not the case.
  • Coincidence: Two apparent problems that appear linked but really have no link at all.
  • Confusion: A problem with contradictory explanations, where the explanation addresses one aspect but does not explain the others.
  • Consistency: The problem is intermittent, with no consistent set of circumstances that causes it to occur.

My recent dilemma

Whatever the cause(s) of faults, the most frustrating situation for engineers is when the problem is presumably fixed, but no clear cause (or causes) is found. This happened to me recently with my home heating system, which heats water for domestic use and for radiator heating. It has one small pump sending heated water to a storage tank and a second small pump sending it to radiators; the two pumps never run at the same time.

One morning, I saw that we lost heat and hot water, so I checked the system (just four years old) and saw that the service-panel circuit breaker with a dedicated line had tripped.

A tripped breaker is generally bad news. My first thought was that perhaps there had been some AC-line glitch during the night, but all other sensitive systems in the house—PCs, network interfaces, and plug-in digital clocks—were fine. Perhaps some solar flare or cosmic particles had targeted just this one AC feed? Very unlikely. I reset the breaker and the system ran for about an hour, then the breaker tripped again.

I called the service team that had installed the system; they came over and they, too, were mystified. The small diagnostic panel display on the system said all was fine. They noted that my thermostat was a 50-year-old mechanical unit, similar to the classic 1953 round Honeywell unit designed by Henry Dreyfuss and now in the permanent collection of the Cooper Hewitt, Smithsonian Design Museum in New York (Figure 1). These two-wire units, with their bimetallic strip and glass-enclosed mercury-wetted switch, are extremely reliable; millions are still in use after many decades.

Figure 1 You have to start somewhere: The first step was to take out a possible but unlikely source of the problem. So, the mercury-wetted bimetallic-strip thermostat (above), similar to the classic Honeywell unit, was replaced with a simple PRO1 T701 electronic equivalent (below). Sources: Cooper Hewitt Museum

While failure of these units is rare, technicians suggested replacing it “just in case.” I said, sure, “why not?” and replaced it with a simple, non-programmable, non-connected electronic unit that emulates the functions of the mechanical/mercury one.

But we knew that was very unlikely to be the actual problem, and the repair technicians could not envision any scenario where a thermostat—on a 24-V AC loop with a contact closure that energizes a mechanical or solid-state relay to call for heat—could induce a circuit breaker to trip. Maybe the original thermostat contacts were “chattering” excessively, inducing the pump motor to cycle on and off rapidly? Even so, that shouldn’t trip a breaker.

Once again, the system ran for about an hour and then the breaker tripped. The techs spent some time adjusting the system’s hot-water and heating-water pumps; each has a small knob that selects among various operating modes.

Long story short: the “fixed” system has been working fine for several weeks. But…and it’s a big “but”…they never did actually find a reason for the circuit breaker tripping. Even if the pumps were not at their optimum settings, that should not cause an AC-line breaker to trip. And why would the system run for several years without a problem?

What does it all mean?

From an engineering perspective, that’s the most frustrating outcome. Now, even though the system is running, it still has me in that “somewhat worried” mental zone. A problem that should not have occurred did occur several times, but now it has gone away for no confirmed reason.

There’s not much that can be done to deal with non-reproducible problems such as this one. Do I need an AC-line monitor, as perhaps that’s the root cause? What sort of other long-term monitoring instrumentation is available for this heating system? How long would you have it “baby-sit” the system?

Perhaps there was an intermittent short circuit in the system’s internal AC wiring that caused the breaker to trip, and the act of opening the system enclosure and moving things around made the intermittent go away? We can only speculate.

Right now, I’m trying to put this frustrating dilemma out of my mind, but it’s not easy. Online troubleshooting guides are useless, as they have generic flowcharts asking, “Is the power on?” “Are the cables and connectors plugged in and solid?”

Perhaps I’ll instead re-read the excellent book, “Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems” by David J. Agans (Figure 2), although my ability and desire to poke, probe, and swap parts of a home heating system are close to zero.

Figure 2 This book on systematic debugging of electronic designs and products (and software) has many structured and relevant tactics for both beginners and experienced professionals. Source: Digital Library—Association for Computing Machinery

Or perhaps the system just wanted some personal, hands-on attention after four years of faithful service alone in the basement.

Have you ever had a frustrating failure where you poked, pushed, checked, measured, swapped parts, and did more, with the problem eventually going away—yet you really have no idea what the problem was? How did you handle it? Did you accept it and move on or pursue the mystery further?

The post Troubleshooting often involves conflicting symptoms and scenarios appeared first on EDN.

The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit

Mon, 12/15/2025 - 15:00

Sometimes, when an audio component dies, the root cause is something electrical. Other times, the issue instead ends up being fundamentally mechanical.

Delta-sigma modulation-based digital-to-analog conversion (DAC) circuitry dominates the modern high-volume audio market by virtue of its ability (among other factors) to harness the high sample-rate potential of modern fast-switching and otherwise enhanced semiconductor processes. Quoting from Wikipedia’s introduction to the as-usual informative entry on the topic (which, as you’ll soon see, also encompasses analog-to-digital converters, i.e., ADCs):

Delta-sigma modulation is an oversampling method for encoding signals into low bit depth digital signals at a very high sample-frequency as part of the process of delta-sigma analog-to-digital converters (ADCs) and digital-to-analog converters (DACs). Delta-sigma modulation achieves high quality by utilizing a negative feedback loop during quantization to the lower bit depth that continuously corrects quantization errors and moves quantization noise to higher frequencies well above the original signal’s bandwidth. Subsequent low-pass filtering for demodulation easily removes this high frequency noise and time averages to achieve high accuracy in amplitude, which can be ultimately encoded as pulse-code modulation (PCM).

 Both ADCs and DACs can employ delta-sigma modulation. A delta-sigma ADC encodes an analog signal using high-frequency delta-sigma modulation and then applies a digital filter to demodulate it to a high-bit digital output at a lower sampling-frequency. A delta-sigma DAC encodes a high-resolution digital input signal into a lower-resolution but higher sample-frequency signal that may then be mapped to voltages and smoothed with an analog filter for demodulation. In both cases, the temporary use of a low bit depth signal at a higher sampling frequency simplifies circuit design and takes advantage of the efficiency and high accuracy in time of digital electronics.

Primarily because of its cost efficiency and reduced circuit complexity, this technique has found increasing use in modern electronic components such as DACs, ADCs, frequency synthesizers, switched-mode power supplies and motor controllers. The coarsely-quantized output of a delta-sigma ADC is occasionally used directly in signal processing or as a representation for signal storage (e.g., Super Audio CD stores the raw output of a 1-bit delta-sigma modulator).
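
The loop the quoted text describes is small enough to sketch in a few lines: a first-order, 1-bit modulator whose negative-feedback integrator forces the average of the output bitstream to track the input. This is purely illustrative; real audio converters use higher-order loops and dedicated hardware:

```python
def delta_sigma_1bit(samples):
    """First-order 1-bit delta-sigma modulator (illustrative).
    Inputs are normalized to [-1, 1]; outputs are +/-1 bits."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:
        integrator += x - feedback           # accumulate quantization error
        feedback = 1.0 if integrator >= 0 else -1.0
        bits.append(feedback)                # the high-rate 1-bit stream
    return bits

# A DC input of 0.5 produces a bitstream whose average converges on 0.5,
# which is what the downstream low-pass filter recovers.
bits = delta_sigma_1bit([0.5] * 1000)
print(sum(bits) / len(bits))
```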

Oversampled interpolation vs quantization noise shaping

That said, plenty of audio purists object to the inherent interpolation involved in the delta-sigma oversampling process. Take, for example, this excerpt from the press release announcing Schiit’s $249 first-generation Modi Multibit DAC, today’s teardown patient, back in mid-2016:

Multibit DACs differ from the vast majority of DACs in that they use true 16-20 bit D/A converters [editor note: also known as resistor ladder, specifically R-2R, D/A converters] that can reproduce the exact level of every digital audio sample. Most DACs use inexpensive delta-sigma technology with a bit depth of only 1-5 bits to approximate the level of every digital audio sample, based on the values of the samples that precede and follow it.

Here’s more on the Modi Multibit 1 bill of materials, from the manufacturer:

Modi Multibit is built on Schiit’s proprietary multibit DAC architecture, featuring Schiit’s unique closed-form digital filter on an Analog Devices SHARC DSP processor. For D/A conversion, it uses a medical/military grade, true multibit converter specified down to 1/2LSB linearity, the Analog Devices AD5547CRUZ.
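
Ideally, the “true multibit” conversion described above is a direct code-to-voltage mapping, as in this R-2R ladder sketch. It is illustrative only, not the AD5547’s actual architecture, and the 2.5-V reference is an assumed placeholder:

```python
def r2r_output(code, bits=16, vref=2.5):
    """Ideal R-2R ladder DAC: each input code maps directly to its own
    exact output level, with no interpolation between samples."""
    assert 0 <= code < 2 ** bits
    return vref * code / (2 ** bits)

print(r2r_output(0x8000))  # mid-scale: 1.25 V
print(r2r_output(0x0001))  # one LSB: about 38 microvolts
```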

That said, however, plenty of other audio purists object to the seemingly inferior lab testing results for multibit DACs versus delta-sigma alternatives (including those from Schiit itself), particularly in the context of notably higher comparative prices of multibit offerings. Those same detractors, exemplifying one end of the “objectivist” vs “subjectivist” opinion spectrum, would likely find it appropriate that in the “Schiit stack” whose photo I first shared a few months ago (and which I’ll discuss in detail in another post to come shortly):

I coupled a first-generation Modi Multibit (bottom) with a Vali 2++ tube-based headphone amplifier (top), both Schiit devices delivering either “enhanced musicality” (if you’re a subjectivist) or “desultory distortion” (for objectivists). For what it’s worth, I don’t consistently align with either camp; I was just curious to audition the gear and compare the results against more traditional delta-sigma DAC and discrete- or op amp-based amplifier alternatives!

A sideways wiggle did the trick

That said, I almost didn’t succeed in getting the Modi Multibit into the setup at all. My wife had bought it for me off eBay as a Valentine’s Day gift in (claimed gently) used condition back in late January; it took me a few more months to get around to pressing it into service. Cosmetically, it indeed looked nearly brand new. But when I flipped the backside power switch…nothing. This in spite of the fact that the AC transformer feeding the device still seemed to be functioning fine:

The Modi Multibit was beyond the return-for-refund point, and although the seller told me it had been working fine when originally shipped to us, I resigned myself to the seemingly likely reality that it’d end up being nothing more than a teardown candidate. But after subsequently disassembling it, I found nothing scorched or otherwise obviously fried inside. So, on a hunch and after snapping a bunch of dissection photos and then putting it back together and reaching out to the manufacturer to see if it was still out-of-warranty repairable (it was), I plugged it back into the AC transformer and wiggled the power switch back and forth sideways. Bingo; it fired right up! I’m now leaving the power switch in the permanently “on” position and managing AC control to it and other devices in the setup via a separately switchable multi-plug mini-strip:

What follows are the photos I’d snapped when I originally took the Modi Multibit apart, starting with some outside-chassis shots and as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front first; that round button, when pressed, transitions between the USB, optical S/PDIF, and RCA (“coaxial”) S/PDIF inputs you’ll see shortly, selectively illuminating the associated front panel LED-of-three to the right of the button at the same time:

Left side:

Back, left-to-right are the:

  • Unbalanced right and left channel analog audio outputs
  • RCA (“coaxial”) digital S/PDIF input
  • Optical digital S/PDIF input
  • USB digital input
  • The aforementioned flaky power switch, and
  • 16V AC input

Transformers vs voltage converters

Before continuing, a few words about the “wall wart”. It’s not, if you haven’t yet caught this nuance, a typical full AC-to-DC converter. Instead, it steps down the premises AC voltage to either 16V (which is actually, you may have noticed from the earlier multimeter photo, more like 20V unloaded) for the Modi Multibit or 24V for the Vali 2++, with the remainder of the power supply circuitry located within the audio component itself:

Fortunately, since the 16V and 24V transformer output plugs are dissimilar, there’s no chance you’ll inadvertently blow up the Modi Multibit by plugging the Vali 2++ wall wart into it!

Onward, right side:

Bottom:

And last but not least, the top, including the distinctive “Multibit” logo, perhaps obviously not also found on delta-sigma-implementing Schiit DACs:

Let’s start here, with those four screw heads you see, one in each corner:

With them removed, the aluminum top piece pops right off:

Next up, the two screw heads on the back panel:

And finally, the three at the steel bottom plate:

At this point, the PCB slides out, although you need to be a bit careful in doing so to prevent the steel frame’s top bracket from colliding with tall components along the PCB’s left edge:

Firmware fixes

Here’s a close-up of the PCB topside’s left half:

AC-to-DC conversion circuitry dominates the far left portion of the PCB. The large IC at the center is C-Media Electronics’ CM6631A USB 2.0 high-speed true HD audio processor. Below it is the associated firmware chip, with an “MD218” sticker on top. The original firmware, absent the sticker, had a minor (and effectively inaudible, but you know how picky audiophiles can be) MSB zero-crossing glitch artifact that Schiit subsequently fixed, also sending replacement firmware chips to existing device owners (or alternatively updating them in-house for non-DIYers).

And here’s the PCB topside’s right half:

Now for the other side:

In the bottom left quadrant are two On Semiconductor MC74HC595A 8-bit serial-input/serial or parallel-output shift registers with latched 3-state outputs. Above them is the aforementioned “resistor ladder DAC”, Analog Devices’ AD5547. Above it and to either side are a pair of Analog Devices AD8512A dual precision JFET amplifiers. And above them is STMicroelectronics’ TL082ID dual op amp.

Shift your eyes to the right, and you’ll not be able to miss the largest IC on this side of the PCB. It’s the Analog Devices ADSP-21478 SHARC DSP, also called out previously in Schiit’s press release. Above it is an AKM Semiconductor AK4113 six-channel 24-bit stereo digital audio receiver chip for the Modi Multibit’s two S/PDIF inputs. And on the other side…

is an SST (now Microchip Technology) 39LF010 1-Mbit parallel flash memory, presumably housing the SHARC DSP firmware.

Wrapping up, here are some horizontal perspectives of the front, back, and left-and-right sides:

And that’s “all” I’ve got for you today! In the future, I plan to compare the first-generation Modi Multibit against its second-generation successor, which switches to a Schiit-developed USB interface, branded as Unison and based on a Microchip Technology controller, and also includes a NOS (non-oversampling) mode option, along with stacking it up against several of Schiit’s delta-sigma DAC counterparts. Until then, let me know your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

The post The Schiit Modi Multibit: A little wiggling ensures this DAC won’t quit appeared first on EDN.