EDN Network

Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 52 min ago

Design digital input modules with parallel interface using industrial digital inputs

Fri, 07/11/2025 - 10:03

Industrial digital input chips provide serialized data by default. However, in systems that require real time, low latency, or higher speed, it may be preferable to provide level-translated, real-time logic signals for each industrial digital input channel.

Typically, industrial digital inputs sample and serialize the state of eight 24-V current-sinking inputs under SPI or pin-based (LATCH) timing control, allowing the eight states to be read out via SPI. A serial interface minimizes the number of logic signals requiring isolation, which is particularly beneficial in high-channel-count digital input modules.

Serialization relies on simultaneous sampling, which time-quantizes the logic signals. Real-time information is therefore lost, which can be a concern in applications where timing differences between switching signals matter, such as incremental encoders or counters.

These applications necessitate either high-speed sampling with high-speed serial readout or non-serialized parallel data, as provided by the MAX22195, an industrial digital input with parallel output. Alternatively, using the MAX22190/MAX22199 industrial digital input devices in parallel operation adds the benefits of diagnostics and configurability.

This article delves into the characteristics, limitations, and design considerations regarding techniques for generating parallel logic outputs with industrial digital inputs.

Design details

The technique is based on repurposing the eight LED outputs to function as logic signals. LEDs serve to provide a visual indication of the digital input’s state—useful for installation, maintenance, and in service. The characteristics and specifications of industrial inputs are clearly defined in the IEC 61131-2 standard, with the output state being binary in nature: either on or off.

The MAX22190/MAX22199 chips feature energyless LED drivers that power the LEDs from the field-side sensor/switch, drawing no current or power from the digital input module’s supply. These devices limit the input current to a level set by the REFDI resistor, which minimizes power dissipation in the module.

For the common Type 1/Type 3 digital inputs, the input current is typically set to a level of ~2.3 mA (typ) to be larger than the 2.0 mA minimum required by the IEC standard. The ICs channel most of the ~2.3 mA field input (IN) current to the LED output pins, and only ~160 µA are consumed by the chip.

Because the LED drivers are current outputs rather than voltage outputs, the current must be converted to a voltage for interfacing with other logic devices like digital isolators and microcontrollers. Resistors are the simplest trans-resistance element for this purpose, as shown in Figure 1.
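As a quick sizing check, Ohm’s law gives the pull-down resistor for a target logic level. The helper below is illustrative (not from any datasheet); the ~2.3-mA current is the typical value cited in this article:

```python
def led_pulldown_resistor(v_logic, i_led=2.3e-3):
    """Resistor converting the LED driver's ~2.3-mA current output
    to a target logic-level voltage: R = V / I."""
    return v_logic / i_led

# ~1.43 kOhm for 3.3-V logic; a standard 1.5-kOhm value is used
# later in this article's measurements.
r = led_pulldown_resistor(3.3)
```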

Figure 1 LED pins are used as voltage-based logic outputs. Source: Analog Devices Inc.

Using the LED output pins in this manner is not documented in the product datasheets. This article investigates the characteristics and possible limitations.

LED pin characteristics

When using ground-connected resistors on the LED pins to create voltage outputs, the following needs to be considered:

  • What is the maximum voltage allowed on the LED pins?
  • Is there interaction/feedback from the LED_ pin to the IN_ pin?
  • Specifically, does voltage on the LED pins result in a change of the input current, as minimum current levels are mandated by the IEC standards?
  • Do the LED output currents show undesired transient behavior, such as overshoots or slow rise/fall times?
  • Are the LED outputs suitable for use as high-speed logic signals when the inputs switch at high rates?
  • Are the LED outputs filtered (as programmable by SPI)?

The MAX22190/MAX22199 datasheets’ absolute maximum ratings specify the maximum allowed LED pin voltages as +6 V. This indicates that the LED pins are suitable for use as 5 V (and 3.3 V) logic outputs, with the caveat that the voltage may not be higher than 6 V.

The impact of the LED pin voltage on other critical characteristics needs to be evaluated. Of particular concern is the change of the input current with the presence of high LED pin voltages, as the current is specified by the standards. The critical case is with the field voltage close to the 11 V on-state threshold voltage, as defined for Type 3 digital inputs.

Figure 2 shows the measured field input current dependence on the LED pin voltage for three field input voltages close to the 11-V level: 9 V, 10 V, and 11 V. The 10-V and 9-V levels were chosen as these are within the transition region for Type 3 inputs, and their input currents have no defined minimum, while the minimum for the 11 V input case is 2 mA.

Figure 2 Field input current is dependent on the LED pin voltage. Source: Analog Devices Inc.

With the field voltage at the 11-V threshold, the blue curve shows that the input current starts decreasing when the LED voltage exceeds ~5.8 V; the decrease is only 0.6% at 6 V. For the 9-V and 10-V cases, which lie in the transition region where the currents are undefined, the measurements show that the input current remains above 2 mA for LED pin voltages up to 5.5 V.

In conclusion, this shows that the MAX22190/MAX22199 will produce 5-V LED logic outputs (as well as lower voltage logic like 3.3 V) and still be compatible with Type 3 digital inputs. For Type 1 digital inputs, the case is trivial since the on-threshold is much higher at 15 V, meaning that the LED pins will also provide 5-V logic levels without any impact on the field input current.

Parallel operation example

Figure 3 shows a 10-kHz field input (yellow curve) with the resulting LED output voltage in blue. A 1.5-kΩ resistor was used on the LED output, which provides a 3.3 V logic signal. Glitch filtering was disabled (default bypass mode).

Figure 3 In 10-kHz switching, Channel 1 has field input and Channel 2 has LED output. Source: Analog Devices Inc.

Regarding the transient behavior of the LED output current under switching conditions, the scope shot in Figure 3 illustrates that the LED outputs produce no transient overshoots or undershoots that could damage logic input devices. The rise and fall times are fast and do not distort the signal.

Using the SPI interface

The MAX22190/MAX22199 devices feature SPI-programmable filters to enable per-channel glitch/noise filtering. Eight filter time constants up to the 20-ms level are available as well as a filter bypass for high-speed applications. The selected noise filtering also applies to the LED outputs to make the visual representation consistent with the electrical signals.

Diagnostics are provided via SPI, such as low supply voltage alarms, overtemperature warnings, short-circuit detection on the REFDI and REFWB pins, and wire-break detection of the field inputs.

The power-up default state of the register bits is:

  • All eight inputs are enabled
  • All input filters are bypassed
  • Wire-break detection is disabled
  • Short-circuit detection of the REFDI and REFWB (only MAX22199) pins is disabled

Hence, the SPI interface does not need to be used in applications that do not require glitch filtering (for example, for high-speed signals) and diagnostics. In cases where the per-channel selectable glitch/noise filtering is needed or diagnostic detection is wanted, SPI can be used.

The LED output waveform shows no overshoots or other undesired irregularities, such as a varying on-state voltage. This confirms that the LED outputs can be used as voltage outputs.

Glitch filtering

The MAX22190 and MAX22199 devices provide per-channel selectable glitch filtering. The following content demonstrates the effect of the glitch filters on the LED outputs using a 200-Hz switching signal with the filter time set to 800 µs. Defined glitch widths were emulated by changing the duty cycle. Both positive and negative glitches were investigated.

Figure 4 shows an example of 750-µs positive pulses being filtered out by the 800-µs glitch filter. So, positive glitch filtering works both for the LED outputs as well as the SPI data.

Figure 4 Here is an example of positive glitch filtering. Source: Analog Devices Inc.

Negative glitches are, however, not filtered out at the LED outputs, as shown in Figure 5, where a 750-µs falling pulse propagates to the LED output. This differs from using the SPI readout, for which both positive and negative glitches are successfully filtered.

Figure 5 This image shows negative glitch filtering. Source: Analog Devices Inc.

Figure 6 shows the LED output signal with an 800-µs glitch filter enabled and the input switching at a 50% duty cycle. The rising edges are delayed by ~770 µs, while the falling edges show no delay. This illustrates that the filters do not operate symmetrically on the LED outputs.

Figure 6 This image highlights the filtering effect on LED output. Source: Analog Devices Inc.

High frequency switching

For applications with high switching frequencies, low propagation delay, or low skew requirements, glitch filtering would be disabled. With the glitch filters bypassed and a 100-kHz input, the LED output produces the waveforms shown in Figure 7.

Figure 7 The 100-kHz input switching is shown with filter bypass. Source: Analog Devices Inc.

While the falling edges show low propagation delay of ~60 ns, the rising edges have significant propagation delay as well as jitter. The rising edge jitter is in the range of ±0.5 µs with an average propagation delay of ~1 µs. The rising delay and jitter are due to the ~1 MHz sampling documented in the datasheet. Sampling does not occur on the falling edges, hence the fast response.

This illustrates that the LED outputs have rise time/fall time skews of up to ~1.5 µs with jitter. Channel-to-channel skew is low on the falling edges but much higher on the rising edges. This could limit the use of the LED outputs in some applications.
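A rough model of this behavior treats the rising-edge delay as a fixed latency plus up to one period of the ~1-MHz internal sampler; the 0.5-µs fixed term below is an assumption chosen to match the reported ~1-µs average and ±0.5-µs jitter:

```python
def rising_edge_delay(phase_frac, f_sample=1e6, fixed_latency=0.5e-6):
    """Rising edges are only recognized at the next internal sample,
    so the delay is a fixed latency plus a phase-dependent fraction
    (0..1) of one sample period."""
    return fixed_latency + phase_frac / f_sample

# Best case 0.5 us, worst case 1.5 us, ~1 us average over random phase.
```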

Design considerations

This section discusses some considerations required when using the LED output pins as voltage outputs.

Ensure that the MAX22190/MAX22199 current-drive LED outputs are voltage-limited so they do not exceed the safe levels of the logic inputs they drive. While the REFDI resistor sets the typical field input current, the actual current has a tolerance of ±10.6%, as specified in the datasheets; the voltage across the resistor therefore varies over the same ±10.6% range.

Logic inputs typically have tightly specified absolute maximum ratings, such as VL + 0.3 V, where VL is the logic supply voltage. When interfacing two logic signals, a common VL supply is often used to ensure matching, since standard logic outputs are push-pull or open-drain with a maximum output voltage defined and limited by the logic supply, VL.

One can make the typical LED pin’s output voltage lower to ensure that absolute maximum ratings are not exceeded for the input. Alternatively, one can consider that the LED pin’s ~2.3 mA output current will not damage a logic input, as these are commonly specified for tolerating much higher latch-up currents, in the 50 mA to 100 mA range. This needs to be verified for the device under consideration. The third, less attractive, option is to limit the voltage by clamping.
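To see why this matters, a worst-case check with the article’s numbers (2.3 mA typical, ±10.6% tolerance, 1.5-kΩ resistor) can be sketched as follows:

```python
def led_voltage_range(r_ohms, i_typ=2.3e-3, tol=0.106):
    """Min/max LED pin voltage given the datasheet current tolerance."""
    return i_typ * (1 - tol) * r_ohms, i_typ * (1 + tol) * r_ohms

lo, hi = led_voltage_range(1.5e3)  # ~3.08 V to ~3.82 V
# Against a 3.3-V input rated for VL + 0.3 V = 3.6 V absolute maximum,
# the ~3.82-V worst case exceeds the rating, so a smaller resistor (or
# reliance on the input's latch-up current tolerance) is called for.
```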

Standard logic outputs are push-pull and thus low impedance, providing high flexibility in driving logic inputs. In contrast, the LED outputs are open-drain outputs where the pull-down resistor with parasitic capacitance determines the switching speeds.

Without additional capacitors, switching rates of 100 kHz and higher are feasible.
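A rough estimate of the RC-limited switching rate, assuming a hypothetical 20-pF parasitic load on a 1.5-kΩ pull-down (neither value is from the datasheet), supports this:

```python
def max_switching_rate(r_ohms, c_farads, settle_taus=5):
    """Open-drain LED output: the pull-down resistor and parasitic
    capacitance form an RC that limits the rise time. Allow a few
    time constants of settling per half-period."""
    tau = r_ohms * c_farads
    return 1.0 / (2 * settle_taus * tau)

f_max = max_switching_rate(1.5e3, 20e-12)  # roughly 3 MHz, well above 100 kHz
```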

The MAX22190/MAX22199 industrial digital inputs can be used as an octal input having eight parallel outputs, despite being documented for serialized data operation. For this purpose, the LED drivers, originally intended for visual state indication, are repurposed as voltage-based or current-based logic outputs. When operating in parallel this way, the use of the SPI interface is optional; it provides all the diagnostics as well as device configurability, with some limitations.

Wei Shi is an applications engineer manager in the Industrial Automation business unit of Analog Devices based in San Jose, California. She joined Maxim Integrated (now part of Analog Devices) in 2012 as an applications engineer.

Reinhardt Wagner was a distinguished engineer with Analog Devices in Munich, Germany. His 21-year tenure primarily involved the product definition of new industrial chips in the areas of communication and input/output devices.

Editor’s Note

This article was written in cooperation with Chin Chia Leong, senior staff engineer for hardware at Rockwell Automation.

Related Content

The post Design digital input modules with parallel interface using industrial digital inputs appeared first on EDN.

Converting pulses to a sawtooth waveform

Thu, 07/10/2025 - 16:09

There are multiple means of generating analog sawtooth waveforms. Here’s a method that employs a single supply voltage rail and is not finicky about passive component values. Figure 1 shows a pair of circuits that use a single 3.3-V supply rail, one producing a ground-referenced sawtooth and the other a supply voltage-referenced one.

Figure 1 The circuitry to the left of the 3.3 V supply implements a ground-referenced sawtooth labeled “LO”, while that to the right forms a 3.3V-referenced one labeled “HI”.

Wow the engineering world with your unique design: Design Ideas Submission Guide

For the LO signal, R1 supplies adequate current to operate U1. This IC enforces a constant voltage Vref between its V+ and FB pins. Q1 is a high-beta NPN transistor that passes virtually all of R2’s current (Vref/R2) through its collector to charge C1 with a constant current, producing the linear ramp portion of this ground-referenced sawtooth. (U1’s FB current is typically less than 100 nA over temperature.) M1 is a MOSFET that is activated for 100 ns every T seconds to rapidly discharge C1 to ground. Its “on” resistance is less than 1 Ω, so the 100-ns pulse spans more than 10 RC time constants, fully discharging C1.

The sawtooth’s peak amplitude A is Vref × T / (R2 × C1) volts, where Vref for U1 is 1.225 V. For a 3.3-V rail, the amplitude (A) should be less than an Amax of 2.1 V, which requires T to be less than a Tmax of R2 × C1 × 2.1 V / Vref. With a U1 Vref tolerance of 0.2% and a 0.1% tolerance for R2, the circuit’s overall amplitude tolerance is limited mostly by C1’s tolerance of 1% at best, combined with the parasitic capacitance of M1.
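These relations can be checked numerically; the R2 and C1 values below are illustrative placeholders, not taken from the schematic:

```python
VREF = 1.225  # U1's reference voltage, per the article

def sawtooth_peak(t, r2, c1, vref=VREF):
    """Peak amplitude A = Vref * T / (R2 * C1)."""
    return vref * t / (r2 * c1)

def t_max(r2, c1, a_max=2.1, vref=VREF):
    """Largest period T keeping the peak below A_max on a 3.3-V rail."""
    return r2 * c1 * a_max / vref

# With assumed R2 = 10 kOhm and C1 = 2.2 nF, the article's T = 34 us
# yields a peak of ~1.9 V (below the 2.1-V Amax); Tmax is ~37.7 us.
a = sawtooth_peak(34e-6, 10e3, 2.2e-9)
tm = t_max(10e3, 2.2e-9)
```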

M2, C2, Q2, R3, R4, and U2 work much like the circuit just described, except that they produce an “upside-down” 3.3-V supply-referenced sawtooth; both waveforms can be seen in Figure 2. With the exception of U2, the tolerance contributions of these components are those previously mentioned for the “right side-up” design. U2’s reference current is typically less than 250 nA over temperature, but its Vref of 1.24 V has at best a 1% tolerance.

Figure 2 The waveforms shown have peak values which are slightly less than the largest recommended. The period T is 34 µs.

These circuits do not require any precision or matched-value passive components. And there is no need to coordinate these component values with any active component’s parametric values or with the switching period T, as long as T is kept less than Tmax. The only effect of the non-zero tolerances of the passive components and of certain active parameters is on the peak-to-peak amplitude of the sawtooth waveforms.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content


How spiders and eels inspired a magnetoreceptive sensor

Thu, 07/10/2025 - 11:19

Researchers at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) laboratory in Germany have developed e-skin with magnetic-sensing capabilities, which they refer to as magnetoreception. They combined giant magnetoresistance (GMR) and electrical resistance tomography technologies to achieve continuous sensing of magnetic fields across a 120 × 120 mm² area with a sensing resolution of better than 1 mm. Instead of focusing on sensor readings at specific points, the magnetoreceptor captures electrical resistance information across the entire measurement domain.

Read the full story at EDN’s sister publication, Planet Analog.

Related Content


Cross connect complementary current sources to reduce self-heating error

Wed, 07/09/2025 - 18:38

Lively discussions have sprung up here in editor Aalyia Shaukat’s Design Ideas regarding the limitations and quirks of, and design tricks for, the current control topologies shown in Figure 1.

Figure 1 How to control amps of Iout with mA of Ic using legacy voltage regulators as current regulators where Iout  = (Vadj – IcRc)/Rs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Reader Ashutosh Sapre contributed a disturbing observation about the likely effect on regulator reference accuracy of temperature rise from self-heating, as illustrated in Figure 2.

Figure 2 LM317 reference variation with junction temperature, as shown on page 5 of the LM317 datasheet.

As shown in Figure 2, the temperature stability of these legacy devices is fairly good. Nevertheless, there are situations where the tempco can be problematic. 

For example, consider a scenario that begins with programming for 100% of full-scale output current (e.g., 1 A) so that regulator heat dissipation is high. Assume it’s maintained long enough for the regulator’s junction temperature to rise from 25°C to 125°C. Figure 2 predicts that this large temperature swing will cause Vref to drift from 1.25 V to 1.2375 V, causing the output current to decline by about 1% of full scale.

This 1% corresponds to 10 mA out of 1000 mA and is somewhat less than 3 LSB of an 8-bit setting. That’s perhaps not great, but it’s not horrible either. But what if the output is then reprogrammed for 10% of full scale (e.g., 100 mA) while the regulator is still hot?

Then that 1% of full-scale error becomes 10% of the setting. It will manifest as a very lengthy thermal settling tail lasting many seconds as the junction temperature gradually cools from 125°C, allowing Vref to (slowly) return to its initial 1.25 V and the output current to settle at the correct 100 mA. It will happen eventually, but the time required will be objectionable, and may be unacceptable.
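The arithmetic behind this error scaling, following the article’s reasoning that the Vref drift appears as a fixed fraction of the full-scale current, can be sketched as:

```python
def setting_error_pct(full_scale_frac, vref_cold=1.25, vref_hot=1.2375):
    """Relative error at a given programmed fraction of full scale.
    The Vref drift is a fixed fraction of full-scale current, so its
    relative impact grows as the setting shrinks."""
    fs_error = (vref_cold - vref_hot) / vref_cold  # 1% of full scale
    return 100 * fs_error / full_scale_frac

# 1% error at a 100% setting, but 10% error at a 10% setting.
```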

Fortunately, Ashutosh also contributed a simple and practical solution to the problem in the form of an auxiliary current shunt transistor. The shunt would allow most of the output current and, consequently, most of the self-heating to bypass the regulator entirely. This would leave its junction unheated and its Vref undrifted. Problem solved!

Or is it? Ashutosh also pointed out that the bypass transistor, while handily solving the thermal problem, would unfortunately also bypass other things. Specifically, the nifty fault protection features (e.g., automatic current limiting and overheating shutdown) built into LM317 and LM337 chips would be lost. While these assets could potentially be added to the transistor shunt, that would lose much of the simplicity that made it attractive in the first place.

So, I wondered if Ashutosh’s shunt idea could be implemented in a way that would inherently retain the desirable 317/337 features while staying simple. The obvious thing (I like obvious!) might be to just make the shunt out of another LM3xx. Figure 3 shows just that: a design that cross-connects complementary regulators, using an LM317 (U1) for control and an LM337 (U2) as the shunt. Control and shunt currents are then summed back together before passing through Rs to provide feedback to U1, where Iout = (I2 + I3) = (Vadj_U1 – IcRc)/Rs and I3 >> I2. Notice how the shunt gets turned “upside down.”

Figure 3 Cross connection reduces self-heating error because shunt regulator U2 carries most of the current, getting relatively hot, while U1, whose Vref is in control, stays relatively cool and accurate.

Figure 3’s U1 is connected mostly per Figure 1, except for Rx. The signal developed by Rx × I2 feeds U2’s ADJ pin so that when U1 input current I2 rises above about 10 mA, U2’s ADJ pin will drop enough to make it start conducting. This causes the I3 current component to rise and ultimately comprise the majority of total current I1 = I2 + I3. Thus, U2 dissipates most of the self-heating watts, ensuring that U1 remains relatively cool and its Vref remains accurate.

The 1N4001 in parallel with Rx protects Rx and U2’s ADJ pin if U2’s over-temp or over-current shutdown feature kicks in. That would leave U1 trying to shoulder the whole load, dropping enough voltage across Rx to likely damage U2 and fry the resistor. The diode prevents that. 

Figure 4 shows the idea working as a negative current source.

Figure 4 If the 317 and 337 swap places and the diodes reverse, Figure 3’s circuit can work for negative current, too.

If more current capability is needed, more U2 shunts and higher capacity diodes can be added (Figure 5).

Figure 5 Boost current handling capacity with beefier diodes and more U2s.

Figure 6 integrates this idea into a complete PWM controlled negative current source as detailed in: “A negative current source with PWM input and LM337 output.”

Figure 6 Negative current source circuit incorporates means for compensating component tolerances, including those of U1 and Z1 references. Note Rs = 1.1 Ω and should be rated for more than 1 W.

The one-pass adjustment sequence is:

  1. Set Df = 100%
  2. Adjust CAL pot for 1 amp output current
  3. Set Df = 0%
  4. Adjust ZERO pot for zero output current.

Done. Iout = 1.1 Df /Rs, where Df = PWM duty factor.
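With the stated Rs = 1.1 Ω, the transfer function and the resistor’s dissipation (which explains the >1-W rating note) work out as follows:

```python
def i_out(df, rs=1.1):
    """Iout = 1.1 * Df / Rs, with Df the PWM duty factor (0..1)."""
    return 1.1 * df / rs

def rs_power(df, rs=1.1):
    """Power dissipated in Rs at a given duty factor: P = I^2 * R."""
    i = i_out(df, rs)
    return i * i * rs

# At Df = 100%: Iout = 1.0 A and Rs dissipates 1.1 W, hence the
# recommendation to rate Rs above 1 W.
```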

In closing, thanks go (again) to savvy reader Ashutosh for his suggestions and (likewise again) to editor Aalyia for the fertile DI environment she created, which makes this kind of teamwork workable.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


System-level test’s expanding role in producing complex chips

Wed, 07/09/2025 - 10:19

System-level test (SLT), once used largely as a stopgap measure to catch issues missed by automated test equipment (ATE), has evolved into a necessary test insertion for high-performance processors, chiplets, and other advanced computational devices. Today, SLT is critical for ensuring that chips function correctly in real-world conditions, and all major CPUs, APUs, and GPUs now go through an SLT insertion before shipment.

Adding SLT in production is being considered for network processors and automotive processors for driver assistance. However, implementing SLT techniques effectively at scale poses key challenges in terms of managing costs, test times, and manufacturers’ expectations.

One of the biggest misconceptions about SLT is that it functions like ATE. ATE primarily uses pre-defined test patterns to stimulate circuit paths and check expected responses within individual cores or circuit blocks. On the other hand, SLT focuses on system interactions that occur between those cores or outside the chip.

That includes software, power management, sensor integration, and communication between internal cores and peripheral devices. Since SLT is often used to test cutting-edge chips, the test environment needs to be flexible so that it can handle application-specific conditions and different interface protocols.

This distinction is particularly relevant as the industry shifts toward chiplet-based architectures. With chiplets, manufacturers need to test how signals propagate across multiple interconnected dies, rather than just validating individual components in isolation.

Test pattern creation for traditional ATE methods, used for chip package-level testing, offers limited access to internal interactions within a multi-chip package. SLT, on the other hand, can exercise how data flows between chiplets and how this influences performance, power consumption, and overall system functionality.

However, this approach comes with its own unique complications, especially since many SLT methodologies are implemented manually.

Test coverage challenges

Using conventional design for test (DFT) techniques to generate test patterns ahead of production ramp, chip designers are lucky to get 99% coverage of all the transistors. However, for devices with 100 billion transistors, such as today’s advanced artificial intelligence (AI) processors, 1 billion transistors still go untested. Using purely ATE test methods, achieving that last 1% of test coverage could take months of development and significant tester time.
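The scale of that residual coverage gap is easy to underestimate; as a simple check:

```python
def untested_transistors(n_transistors, coverage):
    """Transistors left uncovered at a given fractional test coverage."""
    return n_transistors * (1.0 - coverage)

# 99% coverage of a 100-billion-transistor device still leaves
# about a billion transistors unexercised.
gap = untested_transistors(100e9, 0.99)
```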

Moreover, today’s complexity of integrating heterogeneous chiplets into one large package challenges the stability and repeatability of the electromechanical stack-up in a high-volume test environment. There are limited test access points to the outside world that must stimulate pathways through multiple dies.

Because the packages are large, there may be warpage and restricted mechanical compression points for actuating the device-under-test (DUT) connections in the socket. When processors and memories inside the same package are exercised under extreme test conditions, there are inevitable hot spots that must be managed to prevent damage to the device.

To provide a durable automated system-level tester with high availability in manufacturing, the customer test content must be tightly integrated with socket actuation and thermal control, along with power management and test sequencing.

Compounding the complexity of test content development is how many parties may be involved in optimizing the SLT insertion. Vendors of SLT equipment, sockets, and design and test IP must collaborate with the silicon designer/integrator, custom ASIC end-user, outsourced semiconductor assembly and test providers (OSATs), board designers, and even customers—for example, manufacturers of data centers, computer vendors and cellphone devices—to make sure the test station represents the real-world application it’s intended to test.

As the demand for processing power increases, chip designs have evolved to meet market requirements. This increase in processing power results in higher energy consumption and heat generation. So, test time for a typical SLT insertion can be a half hour or more, requiring many test stations to meet the monthly volume demands.

The facilities built to test the parts must have special provisions for electrical power and thermal control. Therefore, these test facilities aim to maximize their investment by testing as many devices in the smallest floor space possible. However, the devices and their test application boards are getting bigger and consume more power.

Emerging developments in chip testing

Chip designers and EDA vendors have developed and introduced new DFT techniques that allow structural test content to be delivered as packetized data over standard high-speed serial ports like USB and PCIe. During SLT, these ports must be enumerated at the application level so that the port operates as intended.

Once this connection is made, the test program can switch into a test mode using a small number of high-speed pins to run structural test patterns or other built-in self-test functions. Once these serial data ports are working, the test content can be reused and correlated either to ATE with similar test stations (such as Link Scale) or post-silicon validation test stations (such as SiConic) to improve time to market and reuse.

Managing the heat dissipation of these high-power devices under extreme workload is a ubiquitous problem being addressed at the engineering, bench, ATE and SLT test insertions, and even in data-center-wide operation. Air, liquid, and refrigerants are all utilized, with an eye on environmental sustainability. Production test handlers have the added challenge of cycling heat and mechanical engagement multiple times per day.

The use of AI and machine learning (ML) is also being applied to semiconductor testing. Sharing the test result data between different test insertions, including ATE, burn-in, and SLT, feeds into AI and ML tools to improve yield, accelerate test-program development, and optimize test times.

Looking ahead

As semiconductor manufacturing becomes more complex, SLT will continue to grow in importance. For it to be truly effective, companies must integrate it into their overall testing strategy rather than treating it as a separate, isolated step. And the next generation of system-level testers must focus on addressing the challenges cited above. Success will require collaboration across design, test, and high-volume manufacturing teams, as well as a willingness to rethink traditional approaches to validation.

In an era defined by multi-chip packages, heterogeneous integration, and ever-tightening performance demands, SLT will remain a crucial tool for ensuring that cutting-edge chips perform as expected in real-world applications.

Davette Berry is senior director of Customer Programs & Business Development at Advantest.

Related Content


Handy automatic zero reference for Hall effect probes

Tue, 07/08/2025 - 15:59

Many analog sensors today are designed to operate from a single power rail but produce an output signal centered around approximately half the supply voltage; linear Hall effect sensors and MEMS accelerometers are two examples. But when their output is observed on an oscilloscope or measured with a multimeter, it is desirable to center the output around zero volts. This allows the physical parameters they sense to be observed with greater resolution without risking saturation of the scope or multimeter.

Figure 1 shows a simple circuit utilizing a non-volatile digital potentiometer that can be handy for automatically adjusting the zero-point virtual ground terminal of the scope with the push of a button, and keeping it steady even across power cycles.

Figure 1 Autozero circuit schematic diagram that allows the user to automatically adjust the zero-point virtual ground terminal of the scope with the push of a button.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The typical application for this circuit is analyzing current waveforms in a power inverter with a Hall effect DC current sensor, like the SS49E, and any DSO. The sensor HE1 is ratiometric: its output is centered around Vcc/2, and the voltage difference is proportional to the product of the magnetic field and Vcc. The typical bandwidth of the sensor is 200 kHz.

A floating ~6 V battery powers the probe, and the accurate Vcc required for a calibrated conversion factor of the sensor is generated by an optional micropower LDO U4. I tried to use low-power components to extend battery life.

The adjustable reference voltage is produced by digital potentiometer U1, with its wiper driving virtual ground buffer amplifier U2B. Its output is connected to the “ground” terminal of an SMA output connector for connection to a scope or multimeter, so that the sensor output observed on CON2 is centered around zero.

The auto adjustment is performed by comparator U2A, which drives the “up/down” direction input of the potentiometer. On every pulse from clock generator U3, the wiper tap moves up or down depending on the sign of the voltage difference between the sensor output and virtual ground. With the clock generator running at ~1.5 kHz, the tap reaches the balance point within 100 ms and then oscillates within one tap around it.
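This bang-bang servo behavior is easy to model. The sketch below assumes, purely for illustration, a 100-tap potentiometer spanning a 0.24-V window centered on Vcc/2 = 2.5 V; the actual tap count and span depend on the part and resistor values chosen.

```python
# Model of the autozero loop: comparator U2A drives the digital pot's
# up/down input, moving the wiper one tap per clock toward balance.
# Assumed (illustrative) values: 100 taps, 0.24-V span, Vcc/2 = 2.5 V.
TAPS = 100
V_SPAN = 0.24
V_CENTER = 2.5

def tap_voltage(tap):
    """Virtual-ground voltage produced at a given wiper tap."""
    return V_CENTER - V_SPAN / 2 + tap * (V_SPAN / (TAPS - 1))

def autozero(v_sensor, tap=0, clocks=200):
    """One up/down step per clock, as in the circuit; the wiper walks to
    the balance point and then dithers within one tap around it."""
    for _ in range(clocks):
        if tap_voltage(tap) < v_sensor:
            tap = min(tap + 1, TAPS - 1)   # comparator says "up"
        else:
            tap = max(tap - 1, 0)          # comparator says "down"
    return tap

final_tap = autozero(v_sensor=2.531)   # settles within one tap of balance
```

With these assumed values, the worst-case walk is 99 steps, so a ~1.5-kHz clock (150 pulses in 100 ms) reaches balance within the 100 ms quoted above from any starting tap.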

The clock generator does not run continuously. The FET Q2 will release it only when SB1 is pressed momentarily, pulling the CS signal low to enable updates of the potentiometer. There is no special timer on the enable signal for simplicity. When the button is released, all signals are parked at a high level, resulting in an update of the non-volatile setting of the potentiometer.  

It is desirable to stop all switching within the circuit to eliminate noise, especially during sensitive measurements. When SB1 is not pressed, Q2 disables the clock oscillator and Q1 disables the comparator to achieve that.

The optional R1 and D1 may help the user observe how the circuit reaches balance—the LED’s brightness will be half its maximum at the balance point.

The range of balance is selected by R3, R5, and R7, which set the voltage between the upper and lower end of the resistor string in a digital potentiometer. With the values shown in the schematic, it is ~0.24 V, and the wiper steps are ~2.4 mV. The user may vary these values to set the optimum range and resolution of adjustment.

The autozero circuit can compensate for small bias and temperature drift. To operate it, press SB1 whenever a steady zero-field output is present at the sensor.

Celsiuss Watt is an engineer with a major semiconductor company and has contributed several technical publications to EDN and other electronics magazines. He enjoys handy electronics projects in his spare time.

Related Content

The post Handy automatic zero reference for Hall effect probes appeared first on EDN.

Hardware alterations: Unintended, apparent advantageous adaptations

Mon, 07/07/2025 - 19:42

After many (many) years of resisting temptation, followed by sowing seeds via succumbing to irresistible, gently-used-on-eBay prices for both generations of Schiit’s Mani phono preamp:

Vinyl enthusiasm

I’ve recently reconnected with the “vinyl” infatuation of my youth. Sorry, audiophiles, you’re still not going to convince me that records “sound better” than lossless (or, in reality, minimally and preferably imperceptibly lossy), large-sample-size, high-sample-rate digital files, whether locally stored or Internet-streamed. In my contrarian opinion, in fact, the claimed “warmth” of the stylus (aka, “needle”)-delivered music is fundamentally a reflection of its measurably degraded SNR and other distortion measures versus more pristine digital alternatives, akin to guitar players clinging to archaic amp-and-speaker sets and recording engineers preferring ancient mics.

So why have I gone back “down the rabbit hole”, then? It’s because, access inconvenience (vs digital) aside, there’s something fundamentally tactile-titillating and otherwise sensory-pleasing (at least to a memory-filled “old timer” like me) to carefully pulling an LP out of its sleeve, running a fluid-augmented antistatic velvet brush over it, lowering the stylus onto the disc and then sitting back to audition the results while perusing the album cover’s contents. I’m not going to admit publicly how many (dozen) albums I’ve already accumulated while also striving to suppress ruminations on the dozens of albums I donated a few dozen years ago. I’ve also acquired several turntables, both belt- and direct-drive in design, which I’ll assuredly be showcasing in future write-ups.

The two turntables

The first, Audio-Technica’s AT-LP60XBT, is the star of this piece:

My wife actually bought the AT-LP60XBT for me a while ago, along with a rare, sealed copy of “Buckingham Nicks,” Lindsey Buckingham, and Stevie Nicks’ only album prior to joining Fleetwood Mac. I finally dug it out of storage and fired it up earlier this year. Audiophiles are both cringing and chuckling at this point because it’s admittedly an entry-level model, both in comparison to upscale options from Audio-Technica’s own product line (that latest transparent turntable is wild, eh?) and examples from other manufacturers—my Fluance RT85, for example:

That said, it’s fully automatic in operation, which is great when we just want to listen to music, not bothering with the added minutia of manually placing the stylus on the disc prior to playing it and then returning the headshell (along with its mated tonearm) to rest afterward. And, referencing the “BT” portion of the product name, the AT-LP60XBT’s inclusion of both wired (albeit not with user-selectable integrated preamp bypass support for an external preamp such as one of the Schiit units mentioned earlier, a feature which its AT-LP70XBT successor offers) and wireless over both SBC and higher quality albeit baseline aptX Bluetooth codecs (audiophiles out there are really cringing now) affords it expanded connectivity and location flexibility.

Cartridge options

One key reason why the AT-LP60XBT is viewed as an “entry-level” turntable (aka, “record player”) is that it doesn’t support user upgrade of the (also entry-level) cartridge that originally came with it. Then again, that same inflexibility also means that AT-LP60XBT owners never need to bother with tracking force and antiskating control settings. The topics of cartridges and the styli they mate with, I quickly learned upon reconnecting with turntable technology, provide no shortage of opinions, debate, disagreement, and diatribes within the “vinyl” community.

Two main cartridge options exist: moving magnet and higher-end moving coil. They work similarly, at least in concept: in conjunction with the paired stylus, they transform physical info encoded onto a record via groove variations into electrical signals for eventual reproduction over headphones or a set of speakers. Differences between the two types reflect construction sequence variance of the cartridge’s two primary subsystems—the magnets and coils—and are reflected (additionally influenced by other factors such as cantilever constituent material and design) not only in perceived output quality but also in other cartridge characteristics such as output signal strength and ruggedness.

Stylus options

Once you’ve selected a particular cartridge technology, manufacturer, and model, you then need to pick the stylus (or styli…keep reading) for that cartridge. Again, two primary needle heads (the tip of the stylus, which makes contact with the record groove) types—conical and elliptical—exist, a topic which I’ll discuss in more detail shortly, but other, higher-end (at a much higher cost) options are also available. And then there are both nude (solid diamond) and bonded (metal with a diamond tip) needle construction options…terminating in either a round or square stylus shank…it’s enough to cause a headache. I’d even read about turntable owners who keep at close reach multiple styluses (and even cartridges, for the true fanatics) options for on-the-fly interchange, depending on what disc is to be played next! Therefore, I guess, another advantage to the AT-LP60XBT: fewer options (and combinations of them) to fuss about. Hold that thought.

As previously mentioned, the cartridge in this case (a variant of the AT3600L) is permanently integrated with the headshell (which is also permanently integrated with the tonearm):

The included ATN3600LC stylus is conical, with characteristics Audio-Technica describes thusly:

A good all-rounder, literally. Its head is rounded with a radius of around 0.6 mil which touches the centre of the record groove walls, though 78 RPM records will need a much larger needle. Conical styli are often more budget friendly, producing a rich, solid sound.

Here’s, from Fluance, a conceptual picture of what the conical needle head looks like in-groove:

Styli eventually wear out and need to be replaced:

And for alternative upgrade (at the tradeoff of shorter usable life prior to needed replacement), Audio-Technica also offers an elliptical stylus, the ATN3600LE, for the AT3600L cartridge:

Here’s how Audio-Technica describes the underlying elliptical needle head technology:

The front part of the needle rides in the center of the record groove, while the smaller side makes more contact with the groove walls. This helps produce a more enveloping sound, as an Elliptical stylus tracks the vinyl grooves with greater precision.

Again, from Fluance, here’s an in-groove conceptual image for the elliptical needle head:

And here’s Fluance’s broader “take” on differences between various cartridge and styli options:

That said, my research had also uncovered recommendations for two elliptical stylus alternatives to the ATN3600LE for the AT3600L; the comparably priced LP Gear CFN3600LE:

and lower cost (albeit, from my research, functionally equivalent) Pfanstiehl 4211-DE:

I, of course, went with the cheaper $29 option 😉

Stylus comparisons

Once I had both styli in my hands, I did audition comparisons between them on several pristine LPs typifying various music genres. Did I discern any differences? Not really, honestly. That said, my listening approach was admittedly casual, not critical. And again, the cartridge and broader turntable are entry-level. So, did that mean I’d wasted $29? Not at all, as it turned out.

The key word here is pristine. Most of the vinyl I’ve so-far acquired has been brand new, so that I don’t inherit the previous owner’s (or owners’) extended listening and potential poor handling wear-and-tear outcomes. But in a few cases, I’ve gone with used purchases for reasons such as:

  • The album’s no longer in “pressing” production, so any (if at all) remaining new copies are outrageously expensive (example: Widespread Panic’s Light Fuse, Get Away), or
  • On the other end of the spectrum, I might have come across a used copy whose vs-new low price I’d been unable to resist, so I’d decided to roll the dice and take a chance.

Take, for example, Rush’s Archives, a compilation of the band’s first three albums, Rush, Fly By Night, and Caress of Steel. The latter two standalone titles currently sell new for ~$30 each on Amazon; judging from scant and pricey ($138!) new inventory there, I suspect Rush is no longer in production. I can’t find Archives new on Amazon; on eBay, it’s selling (again, new) for $249 and up. Conversely, on Mercari (no, I never learn) I’d found a cosmetically decent (judging from photos and descriptions) copy for $22.95 minus a 10% promo coupon (plus shipping and tax).

I went for it. When it arrived, alas, it suffered from no shortage (albeit also not a superfluous amount) of “clicks” and “pops”, in spite of my diligent cleaning attempts. Disc two, unfortunately, also had a notable skip right in the middle of the eponymous title track. The apologetic seller offered me a $10 partial refund, which helped. That said, there still was the matter of the Fly By Night flaw. But then I remembered something else mentioned in my earlier stylus research.

Because conical styli only ride partway down in the record groove, they supposedly don’t capture all the available fidelity potential with pristine records. But that same characteristic turns out to be a good thing with non-pristine records, for which all manner of gunk has accumulated over time in the bottom of the groove. By riding above the dross, the conical needle head doesn’t suffer from its deleterious effects.

I glanced at the cartridge: yep, I had the elliptical stylus installed. I swapped it out for the conical counterpart: notably fewer “clicks” and “pops”. And apparently that same “lack of precision” also makes a conical stylus more immune to even more egregious groove flaws, because the skip was now completely gone, too. Prior sarcasm thus humbled by subsequent experience to the contrary, I now “keep at close reach both stylus options for on-the-fly interchange depending on what disc is to be played next.” With no shortage of associated chagrin. Ahem.

Lessons learned

Why’d I tell this tale? Because, at least to me, it’s not just about turntable styli. As I thought back on the experience afterward, I realized that it more broadly exemplifies a situation that many of you (and certainly I) have likely experienced during past product development cycles. You make a design tweak, maybe just to save a few cents on the bill-of-materials cost. It delivers the desired outcome…or maybe it doesn’t. But inevitably, it also results in other (often unforeseen) transformations to the product you’re working on, sometimes for the better, sometimes for the worse, and sometimes just making it different.

What situations like mine have you encountered in the past, and how have they informed your subsequent product development (and broader tactics-for-life) approaches? Let me and your fellow readers know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Hardware alterations: Unintended, apparent advantageous adaptations appeared first on EDN.

Handle with care: Power and perils of compound semiconductors

Mon, 07/07/2025 - 10:44

In the realm of advanced electronics and photonics, compound semiconductors are the silent enablers of breakthrough performance in many applications. Built from materials like indium gallium arsenide (InGaAs) or gallium arsenide (GaAs), these semiconductors enable applications that are beyond the capabilities of silicon devices.

Examples include avalanche photodiodes (APDs) for ultra-sensitive infrared detection at eye-safe wavelengths. These are used in LiDAR, optical test equipment, optical communications, and laser range finders, for example.

But as useful as they are, APDs made from compound semiconductors have some vulnerabilities, all of which can be mitigated through careful storage, handling, and application. Handle them with care, and you will avoid compromised performance, reduced lifespan, or complete failure.

Here’s what you need to know to ensure these components give their best.

Figure 1 Compound semiconductors have unique characteristics that can lead to vulnerabilities, but all of these can be mitigated with appropriate storage, handling, and application. Source: Phlux Technology

Why compound semiconductors are more fragile than silicon

Compared to silicon, compound semiconductors are physically and chemically more delicate. Their crystal structure makes them brittle, so they’re more likely to crack or chip if dropped or mishandled. They’re also more chemically reactive: exposure to moisture, dust, or skin oils can lead to surface degradation, which affects performance.

They also struggle with heat. Lower thermal conductivity and mismatched coefficients of expansion make them vulnerable to thermal shock. Moreover, poor soldering practices or rapid temperature changes can cause delamination or internal fractures. And because many compound semiconductor APDs operate at low voltages and high impedance, they’re exceptionally sensitive to electrostatic discharge (ESD). A tiny static zap, invisible to you, might silently destroy the device.

APDs: Your handling survival guide

  1. Watch out for static

ESD isn’t just a nuisance; it’s a silent killer. Always handle APDs in an ESD-safe environment. That includes grounded workbenches, wrist straps, anti-static mats, and maintaining relative humidity between 40–60% to prevent static buildup. Keep the devices in anti-static packaging until you’re ready to use them.

  2. Handle with a gentle touch

APDs are inherently a little fragile. Use appropriate tools such as tweezers or vacuum pickups and handle the correct areas: the can for TO-CAN packages, the edges for SMDs, and the ferrule (never the fiber) for fiber-pigtailed versions. Never apply pressure to the photosensitive area or exposed wire bonds.

Contamination, whether from skin oils or airborne particles, can degrade signal integrity, so wear gloves and work in cleanroom or semi-clean environments. If cleaning is necessary, use approved solvents or dry nitrogen with extreme care.

  3. Beware the heat

Thermal stress is one of the top killers of compound semiconductors. Always follow recommended soldering profiles using gradual temperature ramps to prevent cracking or delamination. Preheat boards before reflow soldering and allow cooling to happen slowly. Desoldering is especially risky because temperatures above 330°C, even for just a few seconds, can irreparably damage many APD packages.

  4. Store them smart

APDs are sensitive even when idle. Store them between 5°C and 30°C, with relative humidity below 60%. Use moisture-barrier bags with desiccant packs and humidity indicators whenever possible. If you remove them from their original packaging, ensure the alternative provides equivalent ESD and moisture protection.

For SMD APDs, pay close attention to their moisture sensitivity level (MSL). Devices stored for extended periods—especially over a year—should be visually inspected and electrically tested before use. And if the MSL safe exposure window has been exceeded, follow proper bake-out procedures before reflow soldering.

Package-specific recommendations

  1. TO-CAN APDs

These are relatively robust but not invincible. The transparent window is critical for light transmission and must be kept clean and scratch-free. Avoid applying excessive mechanical force to the can, and ensure mounting solutions (sockets, clips, or heat sinks) don’t stress the package. Good thermal management is especially important when operating at high optical power or bias voltage.

Figure 2 Even TO-CAN packaged APDs need careful handling, so it’s important to keep transparent windows clean and scratch-free. Source: Phlux Technology

  2. SMD APDs

These are compact and efficient, but they come with stricter handling requirements. Many come under MSL 2 or MSL 3 classifications, meaning they must be soldered within a limited time after exposure to air. If that window closes, you’ll need to bake them before soldering. Stick to soldering temperatures between 270–300°C and avoid brief spikes above 330°C.

  3. Fiber-pigtailed APDs

These versions come with an optical fiber attached, and the fiber is the most delicate part of the assembly. Never bend it below its minimum bend radius and always protect the end-face with a dust cap or ferrule cover. Even minor contamination at the tip can cause significant optical loss or permanent damage.

Best practices summary

  • Use strict ESD precautions at every stage of handling.
  • Avoid mechanical shocks, bending, or applying pressure in the wrong places.
  • Store APDs in controlled temperature and humidity environments with proper packaging.
  • Follow soldering guidelines carefully, including reflow and desoldering profiles.
  • Keep optical fibers clean and protected from strain or contamination.

APDs built from compound semiconductors are extraordinary components and enablers of much of the technology we take for granted today. However, they also require more attention than their silicon-based cousins. If you handle them with the precision they deserve, you’ll be rewarded with reliable, high-quality results.

If not, the damage might not show up immediately but rear its head later. Handle with care to reap the full benefits of these devices, including those of the world’s most sensitive 1550-nm noiseless InGaAs APD sensors.

Christian Rookes is VP of marketing at Phlux Technology, a manufacturer of avalanche photodiode (APD) infrared sensors based in Sheffield, UK. He has over 25 years’ experience in technical marketing in semiconductors and optical communications. He holds two patents, including one related to impedance matching for laser diode circuits.

Related Content

The post Handle with care: Power and perils of compound semiconductors appeared first on EDN.

Can microchannels, manifolds, and two-phase cooling keep chips happy?

Fri, 07/04/2025 - 14:03
Two-phase cooling

Thermal management is an ongoing concern for many designs. The process usually begins with a tactic for dissipating or removing heat from the primary sources (mostly but not exclusively “chips”), then progresses to keeping the circuit-board assembly cool, and finally getting the heat out of the box and “away” to where it becomes someone else’s problem. Passive and active approaches are employed, involving some combination of active or passive convection, conduction (in air or liquid), and radiation principles.

The search for an effective cooling and thermal transfer solution has inspired considerable research. One direct approach uses microchannels embedded within the chip itself. This allows coolant, usually water, to flow through, efficiently absorbing and transferring heat away.

The efficiency of this technique is constrained, however, by the sensible heat of water. (“Sensible heat” refers to the amount of heat needed to increase the temperature of a substance without inducing a phase change, such as from liquid to vapor.) In contrast, the latent heat of phase change of water—the thermal energy absorbed during boiling or evaporation—is around seven times greater than its sensible heat.
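The “around seven times” figure checks out against textbook water properties; the 75-K temperature rise below is an assumed span from a room-temperature inlet to boiling, not a number from the article.

```python
# Sensible heat: energy to warm 1 g of liquid water without phase change.
# Latent heat: energy absorbed when that gram boils off as vapor.
C_P = 4.18        # J/(g*K), specific heat of liquid water
H_FG = 2257.0     # J/g, latent heat of vaporization at 100 C
DELTA_T = 75.0    # K, assumed rise from ~25 C inlet to boiling

sensible = C_P * DELTA_T   # ~313 J absorbed per gram, liquid only
ratio = H_FG / sensible    # ~7.2, consistent with the "seven times" claim
```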

Two-phase cooling with water can be achieved by using the latent heat transition, resulting in a significant efficiency enhancement in terms of heat dissipation. Maximizing the efficiency of heat transfer depends on a variety of factors. These include the geometry of the microchannels, the two-phase flow regulation, and the flow resistance; adding to the task, there are challenges in managing the flow of vapor bubbles after heating.

Novel water-cooling system

Now, a team at the Institute of Industrial Science at the University of Tokyo has devised a novel water-cooling system comprising three-dimensional microfluidic channel structures, using a capillary structure and a manifold distribution layer. The researchers designed and fabricated various capillary geometries and studied their properties across a range of conditions to enhance thin-film evaporation.

Although this is not the first project to use microchannels, it presents an alternative physical arrangement that appears to offer superior results.

Not surprisingly, they found that both the geometry of the microchannels through which the coolant flows and the manifold channels that control the distribution of coolant influence the thermal and hydraulic performance of the system. Their design centered on using a microchannel heat sink with micropillars as the capillary structure to enhance thin-film evaporation, thus controlling the chaotic two-phase flow to some extent and mitigating local dry-out issues.

This was done in conjunction with three-dimensional manifold fluidic passages for efficient distribution of coolant into the microchannels, Figure 1.

Figure 1 Microfluidic device combining a microchannel layer and a manifold layer. (A) Schematic diagrams of a microfluidic device. Scale bar: 5 mm. (B) Exploded view of microchannel layer and manifold layer. The heater is located on the backside of the substrate with parallel microchannels. Both the microchannel layer and manifold layer are bonded with each other to constitute the flow path. (C) The coolant flows between the manifolds and microchannels to form an N-shaped flow path. The capillary structures separate the vapor flow from the liquid thin film along the sidewall. The inset schematic shows the ordered two-phase flow under ideal conditions. Scale bar: 50 mm. (D) Cross-sectional schematic view of bonded device showing the heat and fluid flow directions. (E) Clamped device is mechanically tightened using bolts and nuts. (F) Images of clamped device showing the isometric, top, and side views. Scale bar, 1 cm. Source: Institute of Industrial Science at the University of Tokyo

Testing this arrangement requires a complicated electrical, thermal, and fluid arrangement, with clamps to put just the right calibrated pressure on the assembly for a consistent thermal impedance, Figure 2. They also had to allow time for start-up thermal transients to reach steady-state and take other test subtleties into account.

Figure 2 The test setup involved a complicated arrangement of electrical, thermal, mechanical, and fluid inputs and sensors, all linked by a LabVIEW application; top: system diagram; bottom: the actual test bench. Source: Institute of Industrial Science at the University of Tokyo

Their test process included varying key physical dimensions of the micropillars, capillary microchannels, and manifolds to determine optimum performance points.

It’s difficult to characterize performance with a single metric, Figure 3.

Figure 3 Benchmark of experimentally demonstrated critical heat flux and COP of two-phase cooling in microchannels using water. Zone 1 indicates the results in this work, achieving efficient cooling with a mass flow rate of 2.0 g/min and an exit vapor quality of 0.54. The other designs using manifolds, marked by solid symbols in zone 2, consume hundreds of times more water with an exit vapor quality of around 0.1. The results of microstructure-enhanced designs are marked by open symbols in zone 3. Zone 4 shows the performance of typical single-phase cooling techniques. Source: Institute of Industrial Science at the University of Tokyo

One such number, the measured ratio of useful cooling output to required energy input (the dimensionless coefficient of performance, or COP), reached up to 10⁵, representing a meaningful advance over the other water-channel cooling techniques cited in the references.
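Tying the reported numbers together, reading the reported COP as 10⁵ and reusing an assumed 75-K sensible rise (an assumption, as before), the Figure 3 operating point of 2.0 g/min flow with 0.54 exit vapor quality implies roughly:

```python
C_P, H_FG, DELTA_T = 4.18, 2257.0, 75.0   # J/(g*K), J/g, assumed K rise
mdot = 2.0 / 60.0                          # g/s (2.0 g/min)
quality = 0.54                             # fraction of flow exiting as vapor

# Heat carried away = sensible warm-up plus latent heat of the vaporized part.
q_removed = mdot * (C_P * DELTA_T + quality * H_FG)   # ~51 W
# COP = cooling output / input power, so the implied input power is tiny:
pump_power = q_removed / 1e5                          # ~0.5 mW
```

A sub-milliwatt pumping budget for tens of watts of heat removal is what makes the result stand out against the zone 2 designs, which move far more water per watt dissipated.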

Details including thermal modeling, physics analysis, device fabrication, test arrangement, full data, results, and data discussion are in their paper “Chip cooling with manifold-capillary structures enables 105 COP in two-phase systems” published in Cell Reports Physical Science.

As noted earlier, this is not the first attempt to use microchannels to cool chips; it represents another approach to implementing this tactic. Do you think this will be viable outside of a lab environment in the real world of mass-volume production and liquid interconnections? Or will it be limited to a very small subset, if any, of enhanced chip-cooling solutions?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related content

The post Can microchannels, manifolds, and two-phase cooling keep chips happy? appeared first on EDN.

AI-focused MCUs embed neural processor

Thu, 07/03/2025 - 23:27

Aimed at AI/ML applications, Renesas’ RA8P1 MCUs leverage an Arm Ethos-U55 neural processing unit (NPU) delivering 256 GOPS at 500 MHz. The 32-bit devices also integrate dual CPU cores—a 1-GHz Arm Cortex-M85 and a 250-MHz Cortex-M33—that together achieve over 7300 CoreMark points.

The NPU supports most commonly used neural networks, including DS-CNN, ResNet, MobileNet, and TinyYOLO. Depending on the neural network used, the Ethos-U55 provides up to 35× more inferences per second than the Cortex-M85 processor on its own.
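The 256-GOPS figure is consistent with a fully configured Ethos-U55; Arm’s NPU is configurable from 32 to 256 MAC units, and the 256-MAC configuration is an assumption about this part rather than a stated spec.

```python
macs = 256          # assumed MAC-unit configuration of the Ethos-U55
ops_per_mac = 2     # a multiply-accumulate counts as two operations
f_hz = 500e6        # NPU clock frequency from the article

gops = macs * ops_per_mac * f_hz / 1e9   # peak throughput in GOPS
```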

RA8P1 microcontrollers provide up to 2 MB of SRAM and 1 MB of MRAM, which offers faster write speeds and higher endurance than flash memory. System-in-package options include 4 MB or 8 MB of external flash memory for more demanding AI tasks.

Dedicated peripherals and advanced security features support voice and vision AI, as well as real-time analytics. For vision AI, a 16-bit camera engine (CEU) handles image sensors up to 5 megapixels, while a separate two-lane MIPI CSI-2 interface provides a low pin-count connection at up to 720 Mbps per lane. Audio interfaces including I²S and PDM enable microphone input for voice AI. To protect edge AI and IoT systems, the devices integrate cryptographic IP, enforce immutable storage, and monitor for physical tampering.

The RA8P1 MCUs are available now in 224-pin and 289-pin BGA packages.

RA8P1 product page

Renesas Electronics 

The post AI-focused MCUs embed neural processor appeared first on EDN.

Off-line converter trims component count

Thu, 07/03/2025 - 23:27

ST’s VIPER11B voltage converter integrates an 800-V avalanche-rugged MOSFET with PWM current-mode control to power smart home and lighting applications up to 8 W. On-chip high-voltage startup circuitry, a senseFET, error amplifier, and frequency-jittered oscillator help minimize external components. The MOSFET requires only minimal snubbing, while the senseFET enables nearly lossless current sensing without external resistors.

As an off-line converter operating from 230 VAC, the VIPER11B consumes less than 10 mW at no load and under 400 mW with a 250-mW load. Under light-load conditions, it operates in pulse frequency modulation (PFM) mode with pulse skipping to enhance efficiency and support energy savings. The controller runs from an internal VDD supply ranging from 4.5 V to 30 V.

Housed in a compact 10-pin SSOP package, the converter conserves space—especially in designs with strict form factors like LED lighting drivers and smart bulbs. It’s also well suited for home appliances, low-power adapters, and smart meters. The device includes output overload and overvoltage protection with automatic restart, along with VCC clamping, thermal shutdown, and soft-start features.

In production now, VIPER11B voltage converters are priced from $0.56 each in lots of 1000 units.

VIPER11B product page

STMicroelectronics

The post Off-line converter trims component count appeared first on EDN.

MLCC saves board space in vehicle designs

Thu, 07/03/2025 - 23:27

Designed for automotive use, Murata’s 50-V multilayer ceramic capacitor (MLCC) delivers higher capacitance in a compact 0805-size (2.0×1.25-mm) SMD package. With a rated capacitance value of 10 µF, the GCM21BE71H106KE02 ranks among the smallest in its class.

The capacitor operates on 12-V automotive power lines while conserving PCB space and reducing the overall capacitor count. It delivers approximately 2.1 times the capacitance of Murata’s earlier 4.7-µF/50-V model in the same 0805 footprint. Compared to the previous 10-µF/50-V MLCC in the larger 1206 size (3.2×1.6 mm), it occupies about 53% less board area, offering significant space savings for automotive designs.

The GCM21BE71H106KE02 10-µF/50-V capacitor in the 0805 package is now in production. Use the product page link below to request samples, get a quote, or check availability.

GCM21BE71H106KE02 product page

Murata Manufacturing 

The post MLCC saves board space in vehicle designs appeared first on EDN.

Isolated driver enables fast, stable GaN control

Thu, 07/03/2025 - 23:26

Rohm has introduced the BM6GD11BFJ-LB, an isolated gate driver optimized for 600-V-class GaN HEMTs in industrial equipment such as motors and server power supplies. When paired with GaN transistors, the single-channel driver maintains stable operation under high-frequency, high-speed switching conditions.

The device ensures safe signal transmission by galvanically isolating the control circuitry during switching events with fast voltage rise and fall times. Its 4.5-V to 6.0-V gate drive range and 2500-Vrms isolation rating support a broad selection of high-voltage GaN devices, including Rohm’s 650-V EcoGaN HEMT. Low output-side current consumption—0.5 mA maximum—helps reduce standby power and improve overall system efficiency.

The BM6GD11BFJ-LB uses proprietary on-chip isolation to reduce parasitic capacitance, enabling high-frequency operation up to 2 MHz and reducing external component count. Enhanced CMTI of 150 V/ns—reportedly 1.5× higher than conventional products—prevents malfunctions during fast GaN switching. A reduced minimum pulse width of 65 ns improves duty cycle control, allowing stable, efficient operation at higher frequencies.
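The 65-ns minimum pulse width bounds the smallest on-duty the driver can resolve at a given switching frequency, since duty_min = t_min × f_sw. A quick check:

```python
T_MIN = 65e-9   # s, minimum pulse width quoted for the driver

def min_duty(f_sw_hz):
    """Smallest controllable duty cycle at switching frequency f_sw."""
    return T_MIN * f_sw_hz

# At the driver's 2-MHz maximum, duty can be controlled down to about 13%.
duty_at_2mhz = min_duty(2e6)
```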

The BM6GD11BFJ-LB isolated gate driver is now available through online distributors including DigiKey and Mouser. Samples are priced at $4 each.

BM6GD11BFJ-LB product page

Rohm Semiconductor 

The post Isolated driver enables fast, stable GaN control appeared first on EDN.

Primemas unveils CXL 3.0 SoC controller

Thu, 07/03/2025 - 23:26

Primemas, a fabless company specializing in SoC Hublets (hub chiplets), is now sampling its Compute Express Link (CXL) 3.0 memory controller. The company is collaborating with Micron through its CXL ASIC Validation Lab (AVL) program to accelerate the commercialization of next-generation CXL controllers compatible with Micron’s advanced DRAM modules.

Hublets are SoC modules in a pluggable chiplet format, offering a range of IP infrastructure, including CPU, network-on-chip bus, memory controllers, and resource schedulers, along with high-bandwidth, low-latency die-to-die interfaces.

Unlike conventional CXL memory expansion controllers constrained by fixed form factors and limited DRAM capacity, Primemas says its chiplet technology offers greater scalability and modularity. Working with Micron, the company aims to deliver a reliable CXL 3.0 controller paired with Micron’s high-capacity 128-GB RDIMM modules.

The semiconductor startup has delivered engineering samples and development boards to strategic customers and partners, who have helped validate the performance and capabilities of its Hublet versus alternative CXL controllers. Building on this early success, Primemas is now ready to ship Hublet product samples to memory vendors, customers, and ecosystem partners.

Learn more about Primemas Hublets here.

Primemas

The post Primemas unveils CXL 3.0 SoC controller appeared first on EDN.

Push ON, Push OFF for AC voltages

Thu, 07/03/2025 - 17:22

Stephen Woodward’s DI, “Flip ON Flop OFF,” does a wonderful job for DC voltages. I thought of extending the idea to much-needed AC voltages, since so many of our gadgets run on AC.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the compact circuitry using a simple counter IC. This circuit utilizes a single push-button (PB) to switch between ON and OFF states for AC voltages. When you push PB once, the output terminal J2 gets 230V/110V AC. For the next push, output at J2 becomes zero. This action continues for subsequent pushes. Accordingly, the gadget connected to J2 will be ON or OFF.

Figure 1 Pushbutton circuit that switches AC voltages ON and OFF using an electromechanical relay (RL1).

In Figure 1’s circuit, when PB is momentarily pushed once, U1’s (4024 counter) Q1 output goes HIGH, counting one input pulse, which makes the Darlington pair Q1 and Q2 conduct. Relay RL1 is energized; its NO contact closes and passes the 230V/110V AC connected at J1 to J2. The gadget connected to J2 turns ON.

When you push PB again, the second pulse is generated and counted by U1. Its Q1 output (the counter’s LSB) goes LOW, turning Q1 and Q2 off. The relay is de-energized, and the AC voltage to J2 is disconnected, turning the gadget off. R2 and C2 provide the power-on reset for U1.

If you prefer not to use an electromechanical relay, a solid-state relay can be used, as shown in Figure 2. In this circuit, when you push PB once, the Q1, Q2 pair starts conducting, current flows through the LED of U3, an optically coupled TRIAC, causing it to conduct. Due to this, U4 TRIAC conducts, passing 230V/110V to J2. When you push PB again, the Q1, Q2 pair opens, stopping current flow through the LED of U3. The TRIACs of U3 and U4 stop conducting, disconnecting power to J2.

Figure 2 Circuit switches AC power on and off for output-connected gadgets using a solid-state relay formed by U3 and U4.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content

The post Push ON, Push OFF for AC voltages appeared first on EDN.

Tenstorrent’s Blue Cheetah deal a harbinger of chiplet acquisition spree

Thu, 07/03/2025 - 10:58

Less than a month after Qualcomm announced its acquisition of Alphawave Semi, another chiplet deal is in play. Artificial intelligence (AI) chip developer Tenstorrent has snapped up Blue Cheetah Analog Design after licensing its die-to-die (D2D) interconnect IP for AI and RISC-V chiplet solutions.

Blue Cheetah was founded in 2018 with an initial investment from Marvell co-founders Sehat Sutardja and Weili Dai and their pioneering vision for chiplets. Its BlueLynx D2D interconnect subsystem IP provides physical (PHY) and link layer chiplet interfaces compatible with both Open Compute Project (OCP) Bunch of Wires (BoW) and Universal Chiplet Interconnect Express (UCIe) standards.

Blue Cheetah also brings a wealth of analog mixed-signal expertise in developing D2D, DDR, SerDes, and other technologies critical in chiplet design. Its co-founder and CEO, Elad Alon, is an expert in analog and mixed-signal design. He is also the technical lead of the Bunch of Wires PHY standard.

In addition to chiplet designers, Blue Cheetah offers chiplet interconnect IP solutions to various foundries and process nodes. Earlier this year, it announced the successful tape-out of its BlueLynx D2D PHY on Samsung Foundry’s 4-nm SF4X process node.

The latest version of BlueLynx PHY supports both advanced and standard chiplet packaging with an aggregate throughput exceeding 100 Tbps. As a result, the BlueLynx subsystem IP enables chip architects to meet the bandwidth density and environmental robustness necessary to ensure successful production deployment.

Qualcomm’s acquisition of Alphawave Semi and Tenstorrent buying Blue Cheetah mark an important step in the consolidation of the chiplet ecosystem. With the acquisition of Blue Cheetah, Tenstorrent will gain in-house capabilities for advanced interconnects and other analog and mixed-signal components.

Will 2025 be the year of chiplets? Are there more chiplet acquisitions in the works? There are several chiplet upstarts, such as Baya Systems and Chipuller, and larger semiconductor outfits are likely eyeing them to acquire chiplet design capabilities.

Related Content

The post Tenstorrent’s Blue Cheetah deal a harbinger of chiplet acquisition spree appeared first on EDN.

A hands-on guide for RC snubbers and inductive load suppression

Wed, 07/02/2025 - 18:01

The other day, I was casually scrolling through Google when I stumbled upon a flood of dirt-cheap RC snubber circuit modules on various online stores. That got me thinking—it’s high time we talk about these little circuits and their real-world applications.

This post will offer some insights on RC snubber circuits along with a few handy tips for inductive load suppression. Whether you are a newbie looking to learn the ropes or an expert in need of a quick refresher, there is something in here for you. Let us dive in…

On paper, RC snubber circuits function as protective measures in switching applications, utilizing a resistor and capacitor together to mitigate voltage spikes and transient noise. But the commonly available RC snubber circuit module, sometimes referred to as an RC absorption circuit module by certain vendors, only contains a resistor, a capacitor and a varistor—just three basic components.

According to most vendors, the prewired module is suitable for AC/DC 5-400 V inductive loads (<1,000 W) to protect relay contacts and triacs. I could not find an actual schematic of it anywhere on the web, but since it’s easy enough to reconstruct through physical inspection, I drew it myself. Here is that diagram.

Figure 1 The block diagram represents the RC snubber module circuit. Source: Author

The components in the module are:

  • R = 220 Ω/2 W resistor (MFR, 1%)
  • C = 0.1 µF/630 V capacitor (CBB22, marked 104J, ±5%)
  • MOV = 10D471K metal oxide varistor (10-mm disc, 470 V ±10%)

The R-C values used in the snubber are by necessity compromises. In practice, the resistor value (R) must be large enough to limit the capacitive discharge current when the switch contacts close, but small enough to adequately limit the voltage when the switch contacts open. A larger capacitor value (C) decreases the voltage when the switch contacts open but increases the capacitive discharge energy when the switch contacts close.

Furthermore, when the switch contacts are open, a current will be flowing through the snubber network. It should be verified that this leakage current does not cause issues in the application and that the power dissipation in the snubber resistor does not exceed its power rating.

A quick design insight

The optimal approach to determining the R-C values involves using an oscilloscope to trial various R-C combinations while monitoring spike reduction (or turn-off transient reduction). Then adjust the R and C values as needed until the desired reduction is achieved. Based on my practical experience, for most relays and triacs, 100 nF + 100 Ω values provide an acceptable suppression.
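As a quick sanity check on a candidate R-C pair before reaching for the oscilloscope, the constraints described above (discharge current at contact closure, open-contact leakage, and resistor dissipation) can be estimated with a few lines of arithmetic. This sketch uses the 100 Ω + 100 nF values mentioned above; the 230-V/50-Hz line is an assumed example:

```python
import math

# Check a candidate R-C snubber against the constraints described above.
# The 230-V/50-Hz line and 100-ohm/100-nF values are illustrative assumptions.
V_RMS, FREQ = 230.0, 50.0
R, C = 100.0, 100e-9

v_peak = V_RMS * math.sqrt(2)

# 1. Capacitive discharge current when the contacts close
#    (worst case: C charged to the line peak).
i_discharge = v_peak / R

# 2. Leakage current through the snubber while the contacts are open
#    (series R-C impedance at line frequency).
z = math.hypot(R, 1 / (2 * math.pi * FREQ * C))
i_leak = V_RMS / z

# 3. Dissipation in the snubber resistor from that leakage current.
p_resistor = i_leak**2 * R

print(f"peak discharge current ~ {i_discharge:.1f} A")
print(f"open-contact leakage   ~ {i_leak * 1000:.1f} mA rms")
print(f"resistor dissipation   ~ {p_resistor * 1000:.1f} mW")
```

For this combination, the few-milliamp leakage and few-milliwatt dissipation are typically benign, while the ~3 A closure surge is what the resistor is there to limit.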

The above-mentioned RC snubber module, intended to be wired across a switching point as shown below, is a simplified resistor-capacitor snubber circuit made up of a resistor and a capacitor connected in series. Here, the resistor helps absorb the energy from the voltage spikes, while the capacitor provides short-term storage for this energy. This way, the risk of harm due to sudden changes in electrical flow is minimized.

Figure 2 The RC snubber module is wired across a switching point. Source: Author

Most snubber circuits also include a metal oxide varistor (MOV) along with the RC circuit by placing the metal oxide varistor across the input line. An MOV is a specialized type of voltage dependent resistor (VDR) that uses a metal oxide, most commonly zinc oxide, as its non-linear resistor material.

The MOV will then protect the parallel circuit and the load. The MOV will set the maximum input voltage and di/dt through the load while the RC snubber sets the maximum dv/dt and peak voltage across the switching element like a triac; di/dt and dv/dt values should be considered when handling non-resistive loads.

At this point, it’s worth noting that when a triac drives an inductive load, the mains voltage and the load current are not in phase. To limit the slope of the reapplied voltage and ensure the right triac turn-off, a snubber circuit is usually connected in parallel with the triac. The snubber circuit can also be used to improve triac immunity to fast transient voltages.

Summed up briefly, the generic RC snubber circuit module covered in this post is suitable for certain circuits with inductive loads and switching devices such as triacs, thyristors, and power relays. When used, the two input screw terminals of the module are connected to the two contacts of the relay (such as common and normally open contacts), or it’s connected in parallel with the triac/thyristor (Figure 3).

Figure 3 The above image offers application hints for RC snubber modules. Source: Author

Inductive load suppression

Inductive load suppression encompasses methods designed to mitigate the adverse effects of inductive kickback, which manifests when an inductive load—such as a solenoid or motor—is abruptly de-energized.

Moving on to additional guideposts for inductive load suppression, suppressor circuits are commonly used with inductive loads to control voltage spikes when a control output switches off. These circuits help prevent premature failure of outputs by mitigating the high-voltage transients that occur when current flow through an inductive load is interrupted.

The randomly selected sample voltage waveforms shown below illustrate this more clearly.

Figure 4 Here is a comparison between unsuppressed and snubber-suppressed voltage waveforms. Source: Paktron

In addition, suppressor circuits play a crucial role in reducing electrical noise/arc generated during the switching of inductive loads. Poorly suppressed inductive loads can make subtle noise that may interfere with the operation of delicate electronic components and circuits. The most effective way to reduce interference is to install an external suppressor circuit electrically across the load or switch element, as required by the setup, and position it in close physical proximity.

Listed below are some fine-tuned inductive load suppression application hints; the corresponding figure helps to visualize them.

  1. In most applications, a standard diode placed across the inductive load provides sufficient protection for DC or relay outputs that control DC inductive loads. However, if your application demands faster turn-off times, a properly sized Zener diode is the recommended approach.
  2. For relay outputs controlling AC inductive loads, an MOV can be paired with a parallel RC circuit. Ensure that the MOV’s working voltage is at least 20% higher than the nominal line voltage.
  3. In DC voltage applications, the RC snubber network is typically wired across the relay contacts, whereas in standard AC voltage applications, it’s placed across the load. In phase-control circuits, the RC snubber must be wired across the triac.

Figure 5 The above image offers AC/DC application hints for inductive load suppression. Source: Author
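The 20% margin in hint 2 translates into a simple lower bound on the MOV’s continuous working voltage. A minimal sketch (the line voltages here are just example values):

```python
# Minimum MOV continuous (working) voltage for an AC line, applying the
# >=20% margin guideline from hint 2 above. Line voltages are example values.
def min_mov_working_voltage(line_vrms: float, margin: float = 0.20) -> float:
    return line_vrms * (1.0 + margin)

for line in (110.0, 230.0):
    print(f"{line:.0f} Vrms line -> pick an MOV rated for at least "
          f"{min_mov_working_voltage(line):.0f} Vrms")
```

In practice, you would then round up to the next standard MOV voltage rating above that bound.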

Well, to wrap things up, RC snubbers help control voltage spikes and scale down noise in circuits, making them essential in power electronics. This quick guide provides only a glimpse into the complex topic, leaving plenty more to uncover—from diverse design configurations to their wide-ranging applications.

When dealing with power electronics systems, a thorough understanding of snubber behavior is essential for engineers and enthusiasts alike.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post A hands-on guide for RC snubbers and inductive load suppression appeared first on EDN.

The CyberPower DBH36D12V2: A UPS That Goes Old-School

Tue, 07/01/2025 - 16:22

Normally, when I cover the topic of uninterruptible power supplies (UPSs), I’m talking about devices containing rechargeable battery packs based on either sealed lead-acid (SLA) or one of several newer lithium-based charge storage technology alternatives. But what if the backup-power unit’s batteries aren’t rechargeable…and are lowly alkaline D cells (aka, IEC R20s)?

Normally, I’d probably take a pass on the editorial opportunity. But, given that this particular proposal came from my long-time colleague, mentor, and former boss Bill Schweber—a name with which many of you are already familiar from his ongoing coverage in EDN, EE Times, Planet Analog, and other AspenCore properties—I couldn’t resist. Here are some (lightly edited) excerpts from his original email to me, titled “Teardown product?”:

Would you like to do a teardown on a Verizon-supplied battery holder/power pack? It holds 12 standard D cells; when AC power fails, you manually switch it on and it powers the Fios box (you also have to remember to switch it back off after AC power comes back on – yeah, right, as if that’s going to happen).

It’s a fairly simple device; it has some LEDs to indicate battery condition, not much more. Supposedly powers the Fios box for 24-36 hours.

The unit is model DBH36D12V2, made by CyberPower Systems. It is NOT listed on their site (I assume it’s custom for Verizon), and looks like this:

but replacements are available from Verizon as a spare part for end users:

 https://www.verizon.com/home/accessories/powerreserve/?&skuParam=sku190001

 It comes with a skimpy manual showing the line crew how to install it, not much else.

So why do I have this? Verizon was here a month ago, replaced our copper drop from the street pole with fiber but left the copper landline in the house. They installed an AC (normally)-powered fiber-copper converter box, which they brought on their truck.

They mailed me its associated battery box, which they also installed, a few days before they came—except they mailed me two. No idea why they didn’t just bring it on the truck, too.

I called and emailed and wasted time trying to return it, but there is seemingly no way to do that. The local Verizon store said, “go away”. I even went over to a Verizon truck that was in the area, but the guys on the truck wouldn’t take it, either.

I enthusiastically accepted Bill’s offer. The note he included inside the shipment box was priceless and resonated with my own longstanding repair-and-reuse-or-donate aspirations:

Thanks for agreeing to take this off my hands. Whether or not you are able to do something with it, at least I won’t feel guilty leaving it in my basement for the next few years, or throwing it out to add to the electronic waste mountain.

Keep doing those great teardowns…

Aww 🥹 Let’s start with those stock photos from the Verizon product page (the device, with battery compartment door closed, has Dipert-tape-measured approx. dimensions of 10”x6”x2”):

Now, for our specific patient. I won’t bore you with photos of the light brown (save for Verizon logos on two of the sides) cardboard box that it came in, save for sharing a closeup of the product label attached to one of the other sides:

Nokia? Really?

Onward. Flip open the top flaps, remove a piece of retaining cardboard inside:

followed by several pieces of literature as usual, and as with other photos in this piece, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

And our victim comes into initial view:

Let’s tackle the literature first. I’m also not going to bore you with the original from-factory shipping slip included in the box. But there was also a wall-mounting template in there: 

along with two mounting screws:

Plus the “skimpy manual” that Bill’s initial email to me had mentioned, and which I’ve scanned for your convenience as a PDF: Skimpy UPS Manual

Now let’s get the device out of the box and out of its clear plastic protective baggie. Front view (orientation references that follow assume it’s wall-mounted per the template):

Here’s a close-up of the connector on the end of the cable coming out of the battery box, which ends up plugged into (and powering) the fiber-copper converter box:

Back:

Plus a close-up of that backside label:

Top:

Right side (note the latch, which I’ll be springing shortly):

Bottom, revealing Bill’s aforementioned power switch, plus a battery-test button and remaining-charge indicator LEDs that appropriately illuminate when the button is pressed (or not, if the D cells are drained; see the user manual for specifics):

And left side (note the hinges; I bet you can already tell which way the battery compartment door swings when opened!):

Another label closeup (again…Nokia?):

And finally, open sesame! Were you correct with your earlier door-swing-direction forecast?

Note that the stamped instructions explicitly warn against using rechargeable batteries:

And yep, a dozen will get one-time drained, not to mention irresponsibly discarded (likely, vs responsibly recycled) and added to the electronic waste mountain, each time the device is used:

On that note, by the way, Bill was spot-on (no surprise) that a web search on “CyberPower DBH36D12V2” was fruitless from a results standpoint. The outcome from dropping the “2” on the end wasn’t much better. That said, it did indirectly lead me to the scanned PDF of a user manual for a conceptually similar CyberPower product, the DTC36U12V, which dispenses with the D cells and instead embeds a conventional UPS-reminiscent SLA battery inside it.

Again, onward. At the bottom of the earlier back-view photo, you might have noticed two holes, one in each corner. Embedded within each is, unsurprisingly, a screw head. Removing them:

enables pull-out of the panel at the bottom of the device’s front side:

Underneath it, again unsurprisingly, is the humble-function PCB, intended fundamentally to regulate-then-output the electrons coming from the dozen-battery array that powers it:

All those caps you see are, I suspect, intended (among other things) to augment the batteries’ innate output power to address the fiber-copper converter box’s startup-surge current needs:

The PCB pulls right out of the enclosure without much fuss:

Once removed, and since we’re already at the side of the PCB, let’s do all four perspectives:

Shall I flip it over next? Yes, I shall. My, little PCB, what thick traces have thee!

One more PCB topside view, this time, the enclosure unencumbered. Note the three battery pack charge strength indicator LEDs and, to their right, the test switch:

More views of the front panel underside, this time with the battery spring contacts temporarily detached:

Speaking of which, here’s a close-up of the other (permanently mounted) spring contacts at the top of the battery compartment:

Here are the light pipe structures and the mechanical button that correspond to the LEDs and switch on the PCB:

And now, unlike Humpty Dumpty and all the king’s horses and men, I’ll put the DBH36D12V2 back together again:

That’s all I’ve got for you today! Bill, I hope I once again met (and even, stretch goal, exceeded?) your expectations. Reader thoughts are as-always welcomed in the comments!

p.s…anyone have a need for a disassembled-then-reassembled but functionally unused CyberPower (or is that Verizon? Or Nokia?) DBH36D12V2?

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post The CyberPower DBH36D12V2: A UPS That Goes Old-School appeared first on EDN.

Inherently DC accurate 16-bit PWM TBH DAC

Tue, 07/01/2025 - 16:19

16-bit DACs are a de facto standard for high-DC-accuracy, precision domain conversion, but surprisingly few are fully 16-bit (0.0015%) precise. Even when described as “high precision,” some have inaccuracy and integral nonlinearity (INL) that significantly exceed 1 LSB. The TBH PWM-based design detailed here, by contrast, has inherent 16-bit DC accuracy and integral linearity limited only by the quality of the voltage reference. And it achieves them without fancy, pricey, high-accuracy components (e.g., no 0.0015% resistors need apply).

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows its underlying nonlinearity-correcting Take-Back-Half (TBH) topology, as explained in: “Take back half improves PWM integral linearity and settling time.”

Figure 1 The INL is canceled by the TBH topology.

Figure 1 relies on two differential relationships that effectively subtract out (take back) integral nonlinearity and attenuate ripple.

  1. For signal frequencies less than or equal to the reciprocal of settling time = 1/Ts (including DC) Xc >> R and Z = 2(Xavg – Yavg/2).
  2. For frequencies greater than or equal to Fpwm, Xc << R and Z = Xripple – Yripple.

Because only one switch drives node Y while two in parallel drive X, INL due to switch loading at Y is twice that at X. Therefore, since Z = 2(Xavg – Yavg/2), A1’s differential RC network actively subtracts (takes back) the INL error component, resulting in (theoretically) zero net INL.

 Figure 2 illustrates how these elements can fit together in a robust 16-bit DAC circuit design. Here’s how it works.

Figure 2 The TBH principle sums two 8-bit PWM signals into one 16-bit DAC: Vout = Vref(MSBY + LSBY/256)/256. The asterisked resistors are 0.25% precision types. It is assumed that the PWM frequency (Fpwm) is ~10 kHz.

Two 8-bit resolution PWM signals with a rep rate of ~10 kHz serve as inputs, one for the most significant byte (MSBY) of the setting and the other for the least significant byte (LSBY). The MSBY signal drives R2 and R3, while the LSBY drives the R4, R5, and R7 network. The (R4+R5+4R7)/(R2+R3) = 256:1 ratio of the summing network accommodates the relative significance of the PWM signals. It also enables true 16-bit (15 ppm) conversion precision and differential nonlinearity (DNL) from only 8-bit (2500 ppm) resistor matching.
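The MSBY/LSBY split described above reduces, at DC, to a plain 16-bit ratio of the reference. A small sketch (assuming the article’s 5-V reference) shows the byte split and verifies the equivalence:

```python
VREF = 5.0  # 5-V reference, per the article

def tbh_dac_out(code16: int, vref: float = VREF) -> float:
    """Ideal DC output of the two-PWM DAC: Vref * (MSBY + LSBY/256) / 256."""
    msby = (code16 >> 8) & 0xFF  # duty value for the MSBY PWM channel
    lsby = code16 & 0xFF         # duty value for the LSBY PWM channel
    return vref * (msby + lsby / 256) / 256

# The split is exactly equivalent to a single 16-bit ratio,
# since MSBY + LSBY/256 = (256*MSBY + LSBY)/256 = code/256:
for code in (0x0000, 0x0001, 0x8000, 0xFFFF):
    assert abs(tbh_dac_out(code) - VREF * code / 65536) < 1e-12
```

This is only the ideal transfer function; in the real circuit, the 256:1 summing-network ratio and the TBH subtraction in Figure 2 are what let 8-bit-matched resistors deliver it.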

R6C3 suppresses small nanosecond duration ripple spikes on A1’s output caused by the super-fast output transitions of the U1 switches leaking past A1’s 10 MHz gain-bandwidth product.

The ultimate conversion accuracy is limited almost solely by the quality of the 5-V voltage reference, so this should be a premium component. Its job is made a little bit (pun intended) easier by the fact that the maximum current drawn by U1 is a modest 640 µA, which allows true 16-bit INL with reference impedances up to 0.11 Ω. Maximum reference loading occurs at an MSBY duty factor of 50%; the loading falls to near zero at Df = 0 and 100%.

The maximum ripple amplitude also occurs at 50%. The output ripple and DAC settling time are illustrated as the red curve in Figure 3.

Figure 3 Settling time to full precision requires ~100 PWM cycles = 10 ms for Fpwm = 10 kHz.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Inherently DC accurate 16-bit PWM TBH DAC appeared first on EDN.

Reducing manual effort in coverage closure using CCF commands

Tue, 07/01/2025 - 11:00

Ensuring the reliability and performance of complex digital systems has two fundamental aspects: functional verification and digital design. Digital design predominantly focuses on the architecture of the system, which involves logic blocks, control-flow units, and data-flow units. However, design alone is not enough.

Functional verification plays a critical role in confirming that the design (the digital system) behaves as intended under all expected conditions. It involves writing testbenches and running simulations that exercise the functionality of the design and catch bugs as early as possible. Without proper verification, even the most well-designed system can fail in real-world use.

Coverage is a set of metrics/criteria that determines how thoroughly a design has been exercised during a simulation. It identifies and checks if all required input combinations have been exercised in the design.

There are several types of coverage used in modern verification flows, the first one being code coverage, which analyzes the actual executed code and its branches in the design. Functional coverage, on the other hand, is user-defined and tests the functionality of the design based on the specification and the test plan.

Coverage closure is a crucial step in the verification cycle. This step ensures that the design is robust and has been tested thoroughly. With the increasing scale and complexity of modern SoC/IP architectures, achieving coverage closure becomes significantly more difficult, time-consuming, and resource-intensive.

Traditional verification involves a high degree of manual intervention, especially if the design is constantly evolving. This makes the verification cycle recursive, inefficient, and prone to human errors. Manual intervention in coverage closure remains a persistent challenge when dealing with complex subsystems and large SoCs.

Automation is not just a way to speed up the verification cycle; it gives us the bandwidth to focus on solving strategic design problems rather than repeating the same tasks over and over. This research is based on the same idea: it turns coverage closure from a tedious task into a focused, strategic part of the verification cycle.

This paper focuses on leveraging automation provided by the Cadence Incisive Metrics Center (IMC) tool to minimize the need for manual effort in the coverage closure process. With the help of configurable commands in the Coverage Configuration File (CCF), we can exercise fine control over coverage analysis, reducing the need for manual adjustments and making the flow dynamic.

Overview of Cadence IMC tool

IMC stands for Incisive Metrics Center, which is a coverage analysis tool designed by Cadence to help design and verification engineers evaluate the completeness of verification efforts. It works across the design and testbench during simulation to collect coverage data stored in a database. This database is later analyzed to identify the areas of design that have been tested and those which have not met the desired coverage goals.

IMC uses well defined metrics or commands for both code and functional coverage, which provide a detailed view of coverage results and identify any gaps to improve testing. The application includes the creation of a user-defined file called CCF, which includes these commands to control the type of coverage data that should be collected, excluded, or refined.

This paper covers several commands—such as “select_coverage”, “deselect_coverage”, “set_com”, “set_fsm_arc_scoring”, and “set_fsm_reset_scoring”—which handle different aspects of coverage. The “select_coverage” and “deselect_coverage” commands automate the inclusion and exclusion activity by selecting specific sections of code as per the requirement, thus eliminating the manual exclusion process.

The “set_com” command provides a simple approach to avoid the manual efforts by automatically excluding coverage for constant variables. Meanwhile, the “set_fsm_arc_scoring” and “set_fsm_reset_scoring” commands focus more on enhancement of finite state machine (FSM) coverage by identifying state and reset transitions for the FSMs present in the design.

By using this precise and command-driven approach, the techniques discussed in this paper improve productivity and coverage accuracy. That plays a crucial role in today’s fast-paced complex chip development cycles.

Selecting/deselecting modules and covergroups for coverage analysis

The RTL design is a hierarchical structure which consists of various design units like modules, packages, instances, interfaces, and program blocks. It can be a mystifying exercise to exclude a specific code coverage section (block, expr, toggle, fsm) for the various design units in IMC tool.

The exercise to select/deselect any design units for code coverage can be implemented in a clean manner by using the commands mentioned below. These commands also provide support to select/deselect any specific covergroups (inside classes).

  • select_coverage

The command can enable the code coverage type (block, expr, toggle, fsm) for the given design unit and can also enable covergroups which are present in the given class.

Syntax:

select_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 1 The above snapshot shows an example of select_coverage command. Source: eInfochips

This command is to be passed in CCF with the appropriate set of switches; <-metrics> defines the type of coverage metric like block, expr, toggle, fsm, and covergroup. According to the coverage metric, -module or -instance or -class is passed and then the list of module/instance/class is to be mentioned.

  • deselect_coverage

The command can disable the code coverage type (block, expr, toggle, fsm) for the given design unit or can disable covergroups which are present in the given class.

Syntax:

deselect_coverage <-metrics> [-module | -instance | -class] <list_of_module/instance/class>

Figure 2 This snapshot highlights how deselect_coverage command works. Source: eInfochips

The combination of these two commands can be used to control/manage several types of code coverage metrics scoring throughout the design hierarchy, as shown in Figure 4, and functional coverage (covergroup) scoring throughout the testbench environment, as shown in Figure 7.

The design has a hierarchical structure of modules, sub-modules, and instances (Figure 3). When no commands are provided in the CCF, code coverage scoring is enabled for all design units, as shown in the figure below.

Figure 3 Code coverage scoring is shown without CCF Commands. Source: eInfochips

For example, assume code coverage (block, expr, toggle) scoring is not required in the ‘ctrl_handler’ module and block coverage scoring is not required in the ‘memory_2’ instance; the deselect_coverage commands mentioned in Figure 4 are then used in the CCF. To deselect all the code coverage metrics (block, expr, fsm, toggle), the ‘-all’ option is used. Figure 4 also depicts the outcome of the commands used for disabling the assumed coverage.

Figure 4 Code coverage scoring is shown with deselect_coverage CCF commands. Source: eInfochips
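The scenario above can be sketched as CCF entries (switch spellings are assumed from the syntax line; instance paths are abbreviated):

```tcl
# Disable block, expr, and toggle coverage in the 'ctrl_handler' module
deselect_coverage -block -expr -toggle -module ctrl_handler

# Disable only block coverage for the 'memory_2' instance
deselect_coverage -block -instance memory_2
```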

In another scenario, code coverage scoring is required for the ‘design_top’ module and toggle coverage scoring for the ‘memory_3’ instance, while code coverage for the rest of the design units is not required. The whole design hierarchy is therefore de-selected, and only the two design units that require code coverage scoring are selected, as shown in Figure 5. The resulting code coverage scoring is also shown in Figure 5.

Figure 5 Code coverage scoring is shown with deselect_coverage/select_coverage CCF commands. Source: eInfochips
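A hedged sketch of the CCF for this scenario, deselecting the whole hierarchy and then re-selecting the two required units (whether deselecting the top module propagates to all sub-instances should be confirmed in the tool documentation):

```tcl
# Remove all code coverage metrics from the whole hierarchy
deselect_coverage -all -module design_top

# Re-enable code coverage for 'design_top' and toggle coverage for 'memory_3'
select_coverage -all -module design_top
select_coverage -toggle -instance memory_3
```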

The two covergroups (cg1, cg2) in the class ‘tb_func_class’ are scored when no commands are mentioned in the CCF, as shown in Figure 6. If functional coverage scoring of the ‘cg2’ covergroup is not required, the CCF command mentioned in Figure 7 is used. To de-select a specific covergroup in a class, the ‘-cg_name <covergroup name>’ option is used.

Figure 6 Functional verification is conducted without CCF command. Source: eInfochips

Figure 7 Functional verification is conducted with CCF command. Source: eInfochips
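The covergroup exclusion described above can be written in the CCF as a one-line sketch (switch spellings assumed from the syntax line):

```tcl
# Stop scoring 'cg2' inside the class 'tb_func_class'; 'cg1' remains scored
deselect_coverage -covergroup -class tb_func_class -cg_name cg2
```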

It’s important to note that the ‘select_coverage/deselect_coverage’ commands have a cumulative effect on the coverage analysis. In the <-metrics> sub-option, ‘-all’ includes all the code coverage metrics (block, expr, toggle, fsm) but does not include the -covergroup metric.

In the final analysis, the ‘select_coverage/deselect_coverage’ commands enable and disable code/functional coverage across the design hierarchy and the testbench environment directly from the CCF, which keeps the coverage flow neat. Without these commands, obtaining a similar effect requires manual exclusions across the design hierarchy and testbench environment in the IMC tool.

Smart exclusions of constants in a design

In many projects, some signals or sections of design code are never exercised during simulation. Such constant objects create unnecessary gaps in the coverage database, and manually adding exclusions for them in every module/instance of the design is an exhausting job.

Cadence IMC provides a command which smartly identifies the constant objects in the design and ignores them from the coverage database. It’s described below.

set_com

When the set_com command is enabled in the CCF, it identifies coverage items such as inactive blocks, constant signals, and constant expressions that remain unexercised throughout the simulation; it omits them from coverage analysis and marks them IGN in the generated output file.

Syntax:

set_com [-on|-off] [<coverages>] [-log | -logreuse] [-nounconnect] [-module | -instance]

To enable Constant Object Marking (COM) analysis, provide the [-on] option with the set_com command. When the COM analysis is done, IMC generates an output file named “icc.com” which captures all the objects marked as constant.

Providing the [-log] option creates the icc.com file and ensures that it is updated for every simulation. This icc.com file is created in the path “cov_work/scope/test/icc.com.” COM analysis for a specific module/instance is enabled by providing the [-module | -instance] option with the set_com command.
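Putting the options together, a CCF entry matching the setup used in the COM analysis section below (the addr_handler_instance1 instance) might be sketched as:

```tcl
# Enable Constant Object Marking, log results to icc.com,
# and restrict the analysis to one instance
set_com -on -log -instance addr_handler_instance1
```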

Figure 8 The above image depicts the design hierarchy. Source: eInfochips

Figure 9 The COM analysis command is shown as mentioned in CCF. Source: eInfochips

Consider that the “chip_da” variable of the design remains constant throughout the simulation. By enabling the set_com command as shown in Figure 9, the variable chip_da will be ignored from the coverage database, which is shown in Figure 10 and Figure 11.

Figure 10 The icc.com output file is shown in the coverage database. Source: eInfochips

Figure 11 Constant variable chip_da is ignored with set_com command enabled. Source: eInfochips

COM analysis

In the CCF, the set_com command is enabled for the addr_handler_instance1 instance.

  • Here, as the set_com command is enabled, the “chip_da” signal, which remains constant throughout the simulation, is ignored in coverage analysis for the defined instance. As shown in Figure 10, chip_da is ignored in every submodule where it is passed, because chip_da is a port signal and COM analysis follows connectivity (top-down/bottom-up).
  • Along with the port signals, internal signals that remain constant are also ignored in the coverage database. In Figure 10, the “wr” signal is an internal signal and is ignored from the coverage database (also reflected in Figure 11).
  • The signal chip_da is constant in this simulation, so it is marked IGN. If chip_da toggles in some other simulation (where it is covered/uncovered) and the two simulations are merged, then chip_da is treated as a variable (covered/uncovered) and not as an ignored constant.

It’s worth noting that when the set_com command is enabled for a module/instance and a port signal is marked IGN, the port signals of other sub-modules directly connected to that signal are also marked IGN, irrespective of whether the command is enabled for those modules/instances.

Finally, the set_com command is extremely useful for avoiding the unnecessary coverage captured for constant objects and for saving the time spent adding exclusions for them.

Detailed analysis of FSM coverage

A coverage-driven verification approach gives assurance that the design is exercised thoroughly. For FSM-based designs, several types of coverage analysis are available. FSM state and transition coverage analysis are two ways to perform coverage-driven verification of FSM designs, but they do not constitute complete verification of an FSM design.

FSM arc coverage provides a comprehensive analysis to ensure that the design is exercised thoroughly. To do that, Cadence IMC provides some CCF commands, which are described below.

set_fsm_arc_scoring

FSM arc coverage is disabled by default in ICC. It can be enabled by using the set_fsm_arc_scoring command in the CCF. The set_fsm_arc_scoring command enables scoring of FSM arcs, i.e., all the possible input conditions under which transitions take place between two FSM states.

Syntax:

set_fsm_arc_scoring [-on|-off] [ -module <modules> | -tag <tags>] [-no_delay_check]

To enable FSM arc coverage, provide the [-on] option with set_fsm_arc_scoring. Arc coverage can be enabled for all the FSMs defined in a module by providing the [-module <module_name>] option.

If FSM arc coverage needs to be captured for a specific FSM in a module, assign that FSM a tag name using the set_fsm_attribute command in the CCF. Providing the [-tag <tag_name>] option with set_fsm_arc_scoring then captures arc coverage for that FSM only.
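As a sketch, arc scoring can be enabled module-wide or per tagged FSM; the module and tag names below are hypothetical, and the set_fsm_attribute arguments for tagging should be taken from the tool documentation:

```tcl
# Score arcs for every FSM defined in a module
set_fsm_arc_scoring -on -module fsm_design_top

# Score arcs only for an FSM previously tagged via set_fsm_attribute
set_fsm_arc_scoring -on -tag my_fsm_tag
```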

set_fsm_reset_scoring

A state is considered a reset state if the transition to that state is not dependent on the current state of the FSM; for example, in the code shown below.

Figure 12 Here is an example of a reset state. Source: eInfochips

State “Zero” is a reset state because the transition to this state is independent of the current state (ongoing_state). By default, the FSM reset state and transition coverage are disabled in ICC, as shown in Figure 13. They can be enabled using the set_fsm_reset_scoring command in the CCF. This command enables scoring for all the FSM reset states and transitions leading to reset states that are defined within the design module.

Figure 13 FSM coverage is shown without set_fsm_arc_scoring command. Source: eInfochips

Syntax:

set_fsm_reset_scoring

In the design, two FSMs are defined—fsm_design_one and fsm_design_two—and we enable FSM arc, reset state, and transition coverage for fsm_design_two only. If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are not provided in the CCF, the FSM arc, FSM reset state, and transition coverage are not enabled, as shown in Figure 13.

If the set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in the CCF, as shown in Figure 14, then the FSM arc, the FSM reset state, and the transition coverage are enabled as shown in Figure 15.

Figure 14 The set_fsm_arc_scoring and set_fsm_reset_scoring commands are provided in CCF. Source: eInfochips
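In CCF form, the Figure 14 setup might be sketched as follows; the tag name assumed for fsm_design_two is hypothetical:

```tcl
# Arc coverage only for the FSM tagged as 'fsm_two_tag' (fsm_design_two)
set_fsm_arc_scoring -on -tag fsm_two_tag

# Reset state and transition coverage for FSMs defined in the design
set_fsm_reset_scoring
```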

Figure 15 FSM coverage is shown with set_fsm_arc_scoring and set_fsm_reset_scoring commands. Source: eInfochips

If the design contains FSMs, one should enable the set_fsm_arc_scoring and set_fsm_reset_scoring commands in the CCF to ensure that the FSM design is exercised thoroughly and verified with a coverage-driven approach.

Efficient coverage closure

Efficient coverage closure is essential for ensuring thorough verification of complex SoC/IP designs. This paper builds on prior work by introducing Cadence IMC commands that automate key aspects of coverage management, significantly reducing manual effort.

The use of select_coverage and deselect_coverage enables precise control over module and covergroup coverage, while set_com intelligently excludes constant objects, improving the coverage accuracy. Furthermore, set_fsm_arc_scoring and set_fsm_reset_scoring enhance the FSM verification, ensuring that all state transitions and reset conditions are thoroughly exercised.

By adopting these automation-driven techniques, verification teams can streamline the coverage closure process, enhance efficiency, and maintain high verification quality, improving productivity in modern SoC/IP development.

Rohan Zala, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.

Khushbu Nakum, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and sub-system verification for NoC design.

Jaini Patel, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips and SoC verification for signal processing design.

Dhruvesh Bhingradia, a senior verification engineer at eInfochips, has expertise in IP/SoC verification for sensor-based chips, sub-system verification for fabric-based design, and NoC systems.


The post Reducing manual effort in coverage closure using CCF commands appeared first on EDN.
