EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 3 min ago

Step-down converter trims quiescent current

Sun, 02/02/2025 - 19:02

The NEX30606 step-down converter from Nexperia delivers up to 600 mA of output current with an operating quiescent current of just 220 nA. Supporting input voltages from 1.8 V to 5.0 V, the converter offers 16 resistor-settable fixed output voltages and uses constant on-time control for fast transient response.

Ultra-low quiescent current makes the NEX30606 well-suited for consumer wearables like hearing aids, medical sensors, patches, and monitors. It can also be used in battery-powered industrial applications, including smart meters and asset trackers. The converter provides greater than 90% switching efficiency for load currents ranging from 1 mA to 400 mA. Additionally, it has only 10 mV of output voltage ripple when stepping down from 3.6 VIN to 1.8 VOUT.

Nexperia also offers the NEX40400, a step-down converter that combines high efficiency with an operating quiescent current of 60 µA typical. It provides up to 600 mA of output current from a wide 4.5-V to 40-V input voltage range. The device employs pulse frequency modulation for high efficiency at low to mid loads and spread spectrum technology to minimize EMI. Target applications include industrial distributed power systems and grid infrastructure.

Visit the NEX30606 and NEX40400 product pages to check pricing and availability.

NEX30606 product page 

NEX40400 product page 

Nexperia

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Step-down converter trims quiescent current appeared first on EDN.

Wolfspeed debuts Gen 4 MOSFET portfolio

Sun, 02/02/2025 - 19:02

Wolfspeed introduced its Gen 4 SiC MOSFET platform, supporting long-term roadmaps for high-power, application-optimized products. Gen 4 offerings include power modules, discrete components, and bare die available in 750-V, 1200-V, and 2300-V classes.

According to Wolfspeed, it is the only producer with both silicon carbide material and silicon carbide device fabrication facilities based in the U.S. This factor is becoming increasingly important under the new U.S. Administration’s increased focus on national security and investment in U.S. semiconductor production.

The Gen 4 platform was designed to improve system efficiency and prolong application life, even in the harshest environments. It is expected to deliver performance enhancements in high-power automotive, industrial, and renewable energy systems, with key benefits including: 

  • Holistic system efficiency: Delivering up to a 21% reduction in on-resistance at operating temperatures with up to 15% lower switching losses.
  • Durability: Ensuring reliable performance, including a short-circuit withstand time of up to 2.3 µs to provide additional safety margin.
  • Lower system cost: Streamlining design processes to reduce system costs and development time.

Gen 4 SiC power modules, discrete components, and bare die are available now through Wolfspeed’s distributor network.

Wolfspeed




Runtime security code embedded into IoT chip

Fri, 01/31/2025 - 14:27

A lightweight runtime security code embedded into a system-on-chip (SoC) for Internet of Things (IoT) applications: that's the outcome of a collaboration between MediaTek and Italy-based embedded IoT security firm Exein. EE Times' Editor-in-Chief Nitin Dahad spoke to Gianni Cuozzo, founder and CEO of Exein, to learn more about this collaboration, which ensures security is an integral part of the development process rather than an afterthought.

Cuozzo, who founded the company in 2018 to address the emerging mandatory cybersecurity regulations, claims it's the world's first integration between a chip manufacturer and runtime security software. He also claims it's the lightest runtime agent available, whether running at the edge or in the cloud.

Read the full story on EDN’s sister publication, EE Times.

 

Related Content



Ground-fault interruption protection—without a ground?

Thu, 01/30/2025 - 14:14

A friend who was buying an older house was concerned about electrical safety and asked for my opinion as an electrical engineer. All of the AC receptacles (also called outlets) in the house were the two-wire non-grounded type with only a hot (black) and a neutral (white) wire; there were no three-wire receptacles with separate Earth ground (green) as mandated by the National Electrical Code (NEC) in the US since the 1960s, Figure 1. (Other countries have similar requirements, but we’ll stick with the US NEC for this discussion.)

Figure 1 For many decades, home AC-line wiring used a basic two-wire receptacle with hot and neutral wires, but the code was upgraded in the 1960s to mandate a three-wire receptacle with a separate ground wire. Source: NCW Home Inspections

An electrician had told him there were two safety-improvement options: 1) rewire some, or all, of the receptacles to have a true ground, a costly and messy undertaking; or 2) install receptacles with built-in ground-fault circuit interrupter (GFCI) functions, costing about $20 each, at outlets of concern. The second option is not messy and can be done by anyone with a screwdriver and basic ability; no electrician is needed.

My friend’s questions were these: was using a GFCI on a receptacle without a true ground just a cosmetic, feel-good thing? Did it provide any protection? Full protection? Was it code approved? Most important, would it prevent user shock in case of a fault in the wiring or the load?

My answer was simple: I didn’t know. I assumed you needed a ground for proper GFCI wiring, but the electrical code is complicated with many subtleties.

If you only learned about electricity as part of "Electronics 101" but not from the perspective of the power-electrical code and safety, you're in for many surprises. There are often requirements that don't make sense at first, and you are likely to have misconceptions as well. The NEC is very good at what it does and defines, and it characterizes a world far different from simply using a qualified AC/DC supply to power your lower-voltage circuits.

I did some research and found that, contrary to my intuition, a GFCI without a formal third-wire ground does provide some protection against some types of faults, but not all. Incidentally, we are talking about a real Earth ground here, not the circuit “common” which is often referred to as “ground” even though it has nothing to do with the Earth ground—a misnomer that is not only widely used but easily leads to sloppy and sometimes dangerous assumptions. Most electronic-circuit “grounds” are not grounds at all, end of story.

A little background: The consumer GFCI was developed in the 1960s; there were earlier designs, but they were subject to false tripping and had higher tripping thresholds. The NEC has mandated the use of GFCIs since 1968, when it first allowed them as a method of protection for underwater swimming pool lights. Throughout the 1970s, GFCI installation requirements were gradually added for 120-volt receptacles in areas prone to possible water contact, including bathrooms, garages, and any receptacles located outdoors. The 1980s saw additional requirements implemented. During this period, kitchens and basements were added as areas that were required to have GFCIs, as well as boat houses, commercial garages, and indoor pools and spas. New requirements during the '90s included crawl spaces, wet bars, and rooftops.

How it works: The operating principle of the GFCI is clear, although implementation has subtleties, of course. The GFCI function is usually built into the AC receptacle and is connected across the three AC-line wires, Figure 2; it is “invisible” to the person doing the installation. Portable and external versions are also available and authorized by the NEC for some situations, but the principle is the same.

Figure 2 Wiring of a GFCI receptacle is the same as for a non-GFCI unit, as the GFCI function is embedded and invisible to the user. Source: PDH Online

In normal operation, current flows between the hot and neutral wires with the load in between the two, and there is no current flow through the ground wire. When there is a fault, such as current leaking from one of the active conductors through the load (appliance, tool, hair dryer) and possibly through a user and then to ground—a potential shock situation—the current instead goes directly to ground, as that path has far lower impedance than the path through a person. The shock risk from current flow is thus reduced to non-dangerous levels.

If there is no ground connection, or the ground wire is defective (thus, a “ground fault”), the user is at risk. The reason is that the fault current no longer has a low-impedance path to ground, and instead goes through the user, Figure 3. At the same time, the current flowing through the hot conductor is not the same as the current returning through the neutral conductor.

Figure 3 If a direct, low-impedance path to ground is absent, fault currents may instead flow through the user to ground, establishing a shock risk. Source: Pressbooks/Douglas College, Canada

This is where the GFCI comes into action: it detects this hot/neutral current imbalance and disconnects the hot and neutral lines from the load. When it senses that imbalance of current, a sensor coil within the GFCI generates a small current that is detected by a sensor circuit. If the sensed current is above a preset threshold, the sensor circuit releases a solenoid, and the current-carrying contacts open (“trip”).

How much imbalance is tolerated? The NEC dictates that residential GFCIs designed to protect people (rather than electrical infrastructure) interrupt the circuit within 25 milliseconds if the leakage current exceeds a threshold in the range of 4 to 6 milliamps. (The GFCI manufacturer chooses the exact setting.) For equipment-only receptacles, the limit is higher, at around 30 milliamps.
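The detection logic above can be illustrated with a short sketch. This is a simplified behavioral model, not actual GFCI firmware; the 5-mA personnel threshold is an assumed mid-range value from the 4-to-6-mA band described above:

```python
# Simplified model of GFCI trip logic: trip when the hot/neutral current
# imbalance (leakage) exceeds a threshold. Personnel-protection devices
# use a 4-6 mA threshold; equipment-only devices allow roughly 30 mA.

PERSONNEL_TRIP_MA = 5.0   # assumed mid-range setting (manufacturer's choice)
EQUIPMENT_TRIP_MA = 30.0

def gfci_should_trip(i_hot_ma: float, i_neutral_ma: float,
                     threshold_ma: float = PERSONNEL_TRIP_MA) -> bool:
    """Return True when the hot/neutral imbalance exceeds the trip threshold."""
    leakage_ma = abs(i_hot_ma - i_neutral_ma)
    return leakage_ma > threshold_ma

# Normal load: all hot current returns on neutral, so no trip.
assert not gfci_should_trip(8000.0, 8000.0)
# 6 mA leaking to ground (possibly through a person): trip.
assert gfci_should_trip(8000.0, 7994.0)
```

Note that the comparison uses only the hot and neutral currents, which is exactly why the function works with or without a third-wire ground.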

Note that GFCIs can’t protect against faults which do not involve an external leakage current, as when current passes directly from one side of the circuit through a victim to the neutral wire. They don’t protect against overloads or short circuits between the hot conductor and neutral.

What about non-grounded GFCIs? The NEC is an evolving document that is updated every few years to allow new technologies and configurations while disallowing others. GFCIs provide protection whether or not the house wiring is grounded—that's why they are called "ground fault" devices and not "shock-protection" ones.

Over the years the NEC has mandated use of grounded GFCIs in new installations, but also formally allowed for retrofit installation without a third-wire ground. In such cases, the three-wire GFCI receptacle or its cover plate must be marked "no equipment ground."

A GFCI will help to protect against electric shock where current flows through a person from a hot or neutral phase to Earth, but it cannot protect against electric shock where current flows through a person from phase to neutral or phase to phase. For example, if someone touches both live and neutral wires the GFCI cannot differentiate between current flows through an intended load versus flows through a person.

When you think about it, not having a third-wire ground at all is the ultimate ground fault. A GFCI does not require an equipment-grounding conductor (green wire) since the GFCI detects an imbalance between the “hot” (black) conductor and the “neutral” (white) conductor.

In short: using a GFCI on a non-grounded receptacle does, indeed, provide some level of protection, even though there is no “ground in which a fault can develop”. The GFCI doesn’t magically produce a ground; it just interrupts power when abnormal current flow is detected. Your electronic devices won’t be protected if there’s a ground fault, for example, and a standard plug-in tester won’t work on the non-grounded GFCI outlet (that can be confusing). Still, an ungrounded GFCI outlet will still shut off in the event of a current-flow fault, so it can help keep users safe.

The answer to the question of using a GFCI in a non-grounded receptacle rather than adding a ground wire is easy: do it. The GFCI provides some protection when the ground wire is faulty, and the absence of a ground wire is certainly a clear fault. It provides some level of protection against user shock under the most common wiring and load failure modes.

Dealing with power-line wiring, faults, regulations, and codes is not trivial, but the rules are based on basic and solid electrical principles. It’s easy to think you understand more about it than you actually do, when you don’t grasp the reasoning behind many of the mandates of the code. While ignorance may be bliss, here it can be dangerous, especially when based on overconfidence or misconceptions.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References



DeepSeek’s AI stunner and the future of Nvidia

Thu, 01/30/2025 - 09:36

After the release of DeepSeek’s R1, a reasoning LLM that matches the performance of OpenAI’s latest o1 model, trade media is abuzz with speculations about the future of artificial intelligence (AI). Has the AI bubble burst? Is it the end of Nvidia’s spectacular AI ride?

EE Times’ Sally Ward-Foxton takes a closer look at the engineering-centric aspects of this talk of the town, explaining how DeepSeek tinkered with AI models as well as interconnect bandwidth and memory footprint. She also provides a detailed account of the Nvidia chips used in this AI head-turner and what it means for Nvidia’s future.

Read the full story at EDN’s sister publication, EE Times.

 

Related Content



1-A, 20-V, PWM-controlled current source

Wed, 01/29/2025 - 16:41

This design idea (DI) takes an unusual path to a power-handling DAC by merging an upside-down LM337 regulator with a simple (just one generic chip) PWM circuit to make a 20-V, 1-A current source. It’s suitable for magnet driving, battery charging, and other applications that might benefit from an agile and inexpensive computer-controlled current source. It profits from the accurate internal voltage reference, overload protection, and thermal protection of that time-proven and famous Bob Pease masterpiece!

Wow the engineering world with your unique design: Design Ideas Submission Guide

Full throttle (PWM duty factor = 1) current output accuracy is entirely determined by R4’s precision and the ±2% (guaranteed, typically lots better) accuracy of the LM337 internal reference. It’s thus independent of the (sometimes dodgy) precision of logic supplies as basic PWM DACs often are not.

Figure 1 shows the circuit.

Figure 1 LM337 mates with a generic hex inverter to make an inexpensive 1-A PWM current source. (* = 1% metal film)
Iout = 1.07(DF – 0.07), Iout > 0

ACMOS inverters U1b through U1e accept a 10-kHz PWM signal to generate a -50 mV to +1.32 V “ADJ” control signal for the U2 current regulator, proportional to the PWM duty factor (DF). Of course, other PWM frequencies and resolutions can be accommodated with suitable scaling of C1 and C2. See the “K” factor arithmetic below.

DF = 0 drives ADJ > 1.25 V and causes U2 to output the 337’s minimum current (about 5 mA) as shown in Figure 1’s caption.

Iout = 1.07(DF – 0.07)

The 7% zero offset was put in to ensure that DF = 0 will solidly shut off U2 despite any possible mismatch between its internal reference and the +5-V rail. It’s always struck me as strange that a negative regulator like the 337 sometimes needs a positive control signal, but in this case it does.

U1a generates an inverse of the PWM signal, providing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.” Since the ripple filter capacitors C1 and C2 are shown sized for 8 bits and a 10-kHz PWM frequency, for this scheme to work properly with a different frequency and resolution, the capacitances will need to be multiplied by a factor K:

K = 2^(N – 8) × (10 kHz / Fpwm)
N = bits of PWM resolution
Fpwm = PWM frequency

If more current capability is wanted, the LM337 is rated at 1.5 A. That can be had by simply substituting a heavier-duty power adapter and making R4 = 0.87 ohms. Getting even higher than that limit, however, would require paralleling multiple 337s, each with its own R4 to ensure equal load sharing.

Finally, a word about heat. U2 should be adequately heatsunk as dictated by heat dissipation equal to the output current multiplied by the (24 V – Vout) differential. Up to double-digit wattage is possible, so don’t skimp on the heatsink. The 337s go into automatic thermal shutdown at junction temperatures above 150°C, so U2 will never cook itself. But make sure it will pass the wet-forefinger-sizzle “spit test” anyway, so it won’t shut off sometime when you least expect (or want) it to!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content



Host bus adapter boasts quantum-resistant network encryption

Wed, 01/29/2025 - 13:07

A new host bus adapter (HBA) secures all data moving between servers and storage by facilitating quantum-resistant network encryption and real-time ransomware detection in data centers. Broadcom’s Emulex Secure Fibre Channel HBA encrypts all data across all applications while complying with the NIST 800-193 framework, which encompasses secure boot, digitally signed drivers, T10-DIF, and more.

Figure 1 Emulex Secure Fibre Channel HBA provides in-flight encryption with quantum-resistant algorithms. Source: Broadcom

Encryption of mission-critical data is no longer a nice-to-have feature; it’s now a must-have amid the continued rise of ransomware attacks in 2024, costing $5.37 million on average per attack, according to Ponemon Institute’s “Cost of a Data Breach” report. The advent of generative AI and quantum computers further magnifies this risk if data is not encrypted at all points in the data center, including the network.

It’s important to note that data centers have the option of deploying application encryption or network encryption to protect their data. However, network encryption enables real-time ransomware detection while application-based encryption hides ransomware attacks.

Network encryption also offers several important advantages compared to application-based encryption. One is that it preserves storage array services such as dedupe and compression, which are destroyed when using application-based encryption.

Not surprisingly, therefore, IT users are seeking ways to protect themselves against crippling and expensive ransomware attacks; they also want to comply with new government regulations mandating all data be encrypted. That includes the United States’ Commercial National Security Algorithm (CNSA) 2.0, the European Union’s Network and Information Security (NIS) 2, and the Digital Operational Resilience Act (DORA).

These mandates call for enterprises to modernize their IT infrastructures with post-quantum cryptographic algorithms and zero-trust architecture. Broadcom’s Emulex Secure HBA, which secures data between host servers and storage arrays, provides a solution that, once installed, encrypts all data across all applications.

Figure 2 HBA’s session-based encryption is explained with three fundamental tasks. Source: Broadcom

Emulex Secure HBA facilitates in-flight storage area network (SAN) data encryption while complementing existing security technologies. It also supports a zero-trust platform with Security Protocol and Data Model (SPDM) cryptographic authentication of endpoints, as well as silicon root-of-trust authentication.

It runs on existing Fibre Channel infrastructure, and Emulex 32G and 64G Secure HBAs are available in 1-, 2-, and 4-port configurations. These network encryption solutions, offloaded to data center hardware, are shipping now.

Related Content



Power Tips #137: Implementing LLC current-mode control on the secondary side with a digital controller

Tue, 01/28/2025 - 14:14
Current-mode control LLC considerations

Inductor-inductor-capacitor (LLC) serial resonant circuits, as shown in Figure 1, can achieve both zero voltage switching on the primary side and zero current switching on the secondary side in order to improve efficiency and enable a higher switching frequency. In general, an LLC converter uses direct frequency control, which has only one voltage loop and stabilizes its output voltage by adjusting the switching frequency. An LLC with direct frequency control cannot achieve high bandwidth because there is a double pole in the LLC small-signal transfer function that can vary under different load conditions [1] [2]. When including all of the corner conditions, the compensator design for a direct frequency control LLC becomes tricky and complicated.

Current-mode control can eliminate the double pole with an inner control loop, achieving high bandwidth under all operating conditions with a simple compensator. Hybrid hysteretic control is a method of LLC current-mode control that combines charge control and ramp compensation [3]. This method maintains the good transient performance of charge control, but avoids the related stability issues under no- or light-load conditions by adding slope compensation. The UCC256404 LLC resonant controller from Texas Instruments proves this method’s success.

Figure 1 LLC serial resonant circuits that achieve both zero voltage switching on the primary side and zero current switching on the secondary side. Source: Texas Instruments

Principles of LLC current-mode control

Similar to pulse-width modulation (PWM) converters such as buck and boost, peak current-mode control controls the inductor current in each switching cycle and simplifies the inner control loop into a first-order system. Reference [2] proposes LLC charge control with the resonant capacitor voltage.

In an LLC converter, the resonant tank operates like a swing. The high- and low-side switches are pushing and pulling the voltage on the resonant capacitor: when the high-side switch turns on, the voltage on the resonant capacitor will swing up after the resonant current turns positive; conversely, when the low-side switch turns on, the voltage on the resonant capacitor will swing down after the resonant current turns negative.

Energy flows into the resonant converter when the high-side switch turns on. If you remove the input decoupling capacitor, the power delivered into the resonant tank equals the integration of the product of the input voltage and the input current. If you neglect the dead time, Equation 1 expresses the energy in each switching cycle.

In Equation 1, the input voltage is constant, and the input current equals the absolute value of the resonant current. So, you can modify Equation 1 into Equation 2.

Looking at the resonant capacitor, the integration of the resonant current is proportional to the voltage variation on the resonant capacitor (Equation 3).

Equation 4 deduces the energy delivered into the resonant tank.

From Equation 4, it is obvious that the energy delivered in one switching cycle is proportional to the voltage variation on the resonant capacitor when the high-side switch turns on. This is very similar to peak current control in a buck or boost converter, in which the energy is proportional to the peak current of the inductor.

LLC current-mode control controls the energy delivered in each switching cycle by controlling the voltage variation on the resonant capacitor, as shown in Figure 2.

Figure 2 The LLC current-mode control principle that manages the energy delivered in each switching cycle by controlling the voltage variation on the resonant capacitor. Source: Texas Instruments
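Under the simplifications above (constant input voltage, dead time neglected), the per-cycle energy reduces to E = Vin × Cr × ΔVCR. A quick numeric sketch illustrates the relation; the component values are illustrative placeholders, not taken from the article:

```python
def energy_per_cycle_j(v_in: float, c_res_f: float, delta_vcr: float) -> float:
    """Energy (J) delivered to the resonant tank in one switching cycle:
    E = Vin * Cr * dVCR (constant Vin, dead time neglected)."""
    return v_in * c_res_f * delta_vcr

# Example: 400-V input, 47-nF resonant capacitor, and a 100-V swing on the
# capacitor while the high-side switch is on -> about 1.88 mJ per cycle.
e = energy_per_cycle_j(400.0, 47e-9, 100.0)
assert abs(e - 1.88e-3) < 1e-9

# At a 100-kHz switching frequency that corresponds to roughly 188 W.
assert abs(e * 100e3 - 188.0) < 1e-3
```

This linearity between ΔVCR and delivered energy is what lets the inner loop behave like peak current-mode control in a buck or boost.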

LLC current-mode control with MCUs

Figure 3 shows the logic of a current-mode LLC implemented with the TMS320F280039C C2000™ 32-bit microcontroller (MCU) from Texas Instruments, which includes a hardware-based delta voltage of resonant capacitor (ΔVCR) comparison, pulse generation and maximum period limitation [4].

In LLC current-mode control, signal Vc comes from the voltage loop compensator, and signal VCR is the voltage sense of the resonant capacitor. A C2000 comparator subsystem module has an internal ramp generator that can automatically provide downsloped compensation to Vc. You just need to set the initial value of the ramp generator; the digital-to-analog converter (DAC) will provide the downsloped VCR limitation (Vc_ramp) based on the slope setting. The comparator subsystem module compares the analog signal of VCR with the sloped limitation, and generates a trigger event (COMPARE_EVT) to trigger enhanced PWM (ePWM) through the ePWM X-bar.

The action qualifier submodule in ePWM receives the compare event from the comparator subsystem and pulls low the high side of PWM (PWMH) in each switching cycle. The configurable logic block then duplicates the same pulse width to the low side of PWM (PWML) after PWMH turns low. After PWML turns low, the configurable logic block generates a synchronous pulse to reset all of the related modules and resets PWMH to high. The process repeats with a new switching cycle.

Besides the compare actions, the time base submodule limits the maximum pulse width of PWMH and PWML, which determines the minimum switching frequency of the LLC converter. If the compare event has not occurred by the time the timer reaches the maximum setting, the time base submodule will reset the AQ submodule and pull down PWMH, replacing the compare-event action from the comparator subsystem module.

This hardware logic forms the inner VCR variation control, which controls the energy delivered to the resonant tank in each switching cycle. You can then design the outer voltage loop compensator, using the traditional interrupt service routine to calculate and refresh the setting of the VCR variation amplitude to Vc.
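The outer voltage loop can be sketched as a conventional PI compensator run in the control interrupt, whose output becomes the ΔVCR command Vc. This is a hosted-Python sketch of the structure only, not C2000 firmware; the gains, sample time, and clamp limits are placeholders:

```python
class VoltageLoopPI:
    """Outer voltage-loop PI compensator: refreshes the VCR-variation
    command Vc each control interrupt (placeholder gains and limits)."""
    def __init__(self, kp=0.05, ki=2.0, ts=1e-4, vc_min=0.0, vc_max=3.0):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.vc_min, self.vc_max = vc_min, vc_max
        self.integral = 0.0

    def update(self, v_ref: float, v_out: float) -> float:
        err = v_ref - v_out
        self.integral += self.ki * err * self.ts
        # Anti-windup: clamp the integrator to the output range.
        self.integral = min(max(self.integral, self.vc_min), self.vc_max)
        vc = self.kp * err + self.integral
        return min(max(vc, self.vc_min), self.vc_max)

loop = VoltageLoopPI()
# Output below the 12-V reference: the Vc command (energy per cycle) rises.
vc1 = loop.update(12.0, 11.5)
vc2 = loop.update(12.0, 11.5)
assert vc2 > vc1 > 0.0
```

In the real implementation the hardware comparator subsystem consumes Vc (with slope compensation) cycle by cycle, so the software loop only has to run at the much slower voltage-loop rate.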

For a more detailed description of the hybrid hysteretic control logic, see Reference [1].

Figure 3 LLC current-mode control logic with a C2000 MCU where the signal Vc comes from the voltage loop compensator, and the signal VCR is the voltage sense of the resonant capacitor. Source: Texas Instruments

Experimental results

I tested the current-mode control method described here on a 1-kW half-bridge LLC platform with the TMS320F280039C MCU. Figure 4 shows the Bode plot of the voltage loop under a 400 V input and 42 A load, proving that the LLC can achieve 6 kHz of bandwidth with a 50-degree phase margin.

Figure 4 The Bode plot of a current-mode control LLC with a 400 V input and 42 A load. Source: Texas Instruments

Figure 5 compares the load transient between direct frequency control and hybrid hysteretic control with a 400-V input and a load transient from 10 A to 80 A with a 2.5 A/µs slew rate. As you can see, the hybrid hysteretic current-mode control method achieves a better load transient response than a traditional direct frequency control LLC.

For more experimental test data and waveforms, see Reference [5].

Figure 5 Load transient with direct frequency control (a) and hybrid hysteretic control (b), from 10 A to 80 A with a 2.5 A/µs slew rate under a 400 VDC input. Green is the primary current; light blue is the output voltage, with DC coupled; purple is the output voltage, with AC coupled; and dark blue is the output current. Source: Texas Instruments

Digital current-mode controlled LLC

The digital current-mode controlled LLC can achieve higher control bandwidth than direct frequency control and maintain very low voltage variation during load transients. In N+1 redundancy and parallel applications, this control method can keep the bus voltage within the regulation range during hot swapping or protection events. As a result, with its fast response and digital programmability, this control method has been widely adopted in data center and AI server power supplies.

Desheng Guo is a system engineer at Texas Instruments, where he is responsible for developing power solutions as part of the power delivery industrial segment. He has created multiple reference designs and is familiar with AC-DC power supply, digital control, and GaN products. He received a master’s degree from the Harbin Institute of Technology in power electronics in 2007, and previously worked for Huawei Technology and Delta Electronics before joining TI.

Related Content

References

  1. Hu, Zhiyuan, Yan-Fei Liu, and Paresh C. Sen. “Bang-Bang Charge Control for LLC Resonant Converters.” Published in IEEE Transactions on Power Electronics 30, no. 2, (February 2015): pp. 1093-1108. doi: 10.1109/TPEL.2014.2313130.
  2. McDonald, Brent, and Yalong Li. “A novel LLC resonant controller with best-in-class transient performance and low standby power consumption.” Published in 2018 IEEE Applied Power Electronics Conference and Exposition (APEC), San Antonio, Texas, March 4-8, 2018, pp. 489-493. doi: 10.1109/APEC.2018.8341056.
  3. “UCC25640x LLC Resonant Controller with Ultra-Low Audible Noise and Standby Power.” Texas Instruments data sheet, literature No. SLUSD90E, February 2021.
  4. Li, Aki, Desheng Guo, Peter Luong, and Chen Jiang. “Digital Control Implementation for Hybrid Hysteretic Control LLC Converter.” Texas Instruments application note, literature No. SPRADJ1A, August 2024.
  5. Texas Instruments. n.d. “1-kW, 12-V HHC LLC reference design using C2000™ real-time microcontroller.” Texas Instruments reference design No. PMP41081. Accessed Jan. 16, 2025.

 



Error assessment and mitigation of an innovative data acquisition front end

Mon, 01/27/2025 - 17:39

The recent design idea (DI) “Negative time-constant and PWM program a versatile ADC front end” disclosed an inventive programmable gain amplifier with integral sample-and-holds. The circuit schematic from the DI appears in Figure 1. Briefly, a PWM signal controls the switches shown. In the X0 positions, a differential signal connected to the inputs of op amps U1a and U1b drives a new voltage sample across capacitor C1 through switches U2a and U2b. Because switch U2c’s X-to-X1 connection is open, capacitor C2 is “holding” a version of the previous sample. This “held” version was amplified by the subcircuit consisting of U2a, U2b, U1c, C1, R1, R2, and R3 with switches in the X1 position. The U1c-based gain-of-two amplifier applies positive feedback through R1 and U2a to the load of C1 in series with the resistance of U2b. This causes the voltage across the load to increase exponentially (with a positive time constant), affording a gain which is a function of the time period that the switches are in the X1 position. The advantage of this approach is that programmable, wideband gains of 60 dB or more can be achieved because the op amps’ maximum closed-loop gain is only 6 dB; bandwidth is not sacrificed to achieve a high closed-loop gain.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 Two generic chips and five passives make a versatile and unconventional ADC front end.

As with any design, this one has errors and characteristics whose nature must be understood before compensation can be considered. These include passive component tolerances; op amp input currents (negligible) and offset voltages; switch turn-on and turn-off times; leakage currents; as well as resistances and switching-induced charge injections. A non-obvious error can also exist which might be termed a “dead zone”. At time t = 0 when the X1 positions are initially active, the sum of a low-amplitude positive VIN − (−VIN) input voltage sample and various errors can yield a negative voltage at the non-inverting input of U1c. Consequently, U1c’s output voltage would not trend positive and, assuming the analog-to-digital converter (ADC) driven by VOUT accepts only positive voltages, the circuit would not work properly. To understand how to bound this undesirable behavior and for other reasons, it’s wise to develop equations to analyze circuit behavior. Some analytic simplicity is possible when the switches are in the X0 position and operation is mostly intuitive. But operation in the X1 position will require a bit of a deeper dive.

Charge injection error

Op amp input offset voltages and switch resistances are commonly well-understood. As for switch leakage current and charge injection, there’s a reference [i] that provides an excellent discussion of each. Charge injection Q is most pernicious when the switch transitions from “on” to “off” and a switch terminal is connected to a circuit path which includes a capacitor C and is characterized by a high resistance.

This extends the time for the error voltage Q/C impressed upon the capacitor to “bleed off”. This is not a concern for U2b’s X pin at any time, because both X0 and X1 positions provide a low “on” resistance path. But it must be considered for U2a and U2c when, respectively, X0 and X1 turn off. For U2a, X1 is in series with the relatively large resistance R1. When U2c’s X1 is turned off and C2 is in hold mode (allowing an analog-to-digital conversion), C2 sees X1’s multi-megohm “off” resistance. There is no mechanism to bleed off and recover from U2c’s charge injection error; it is inherent in circuit operation until the PWM reactivates X1 for “tracking” mode, during which time conversions are precluded.
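The Q/C error and its bleed-off time are easy to put into numbers. The 10-pC charge is the switch-datasheet maximum quoted below in Table 2 and the 1-nF capacitor matches the C1 value used later in the article; the 10-MΩ “off” resistance is an assumed, illustrative figure standing in for “multi-megohm”:

```python
# Rough numbers for the charge-injection error discussed above.
Q = 10e-12        # injected charge, coulombs (datasheet maximum)
C = 1e-9          # capacitor receiving the charge, farads
R_off = 10e6      # switch "off" resistance, ohms (assumed)

dV = Q / C                 # error voltage impressed on the capacitor
tau_bleed = R_off * C      # time constant to "bleed off" through R_off

print(f"error = {dV*1e3:.0f} mV, bleed time constant = {tau_bleed*1e3:.0f} ms")
```

A 10-mV step that decays with a time constant of tens of milliseconds is effectively permanent on the microsecond timescale of a PWM cycle, which is why this error must be treated as inherent rather than transient.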

Leakage current

This same high resistance might create a real problem due to leakage current. Such currents flow continuously from U2’s power supplies through its internal ESD diodes to the switch terminals. What saves the circuit from these errors is that an analog to digital (A-to-D) conversion of VOUT can be triggered quickly after X1 turns off, before significant leakage current errors can accumulate. Leakage from terminal X of U2b can be ignored because as mentioned before, it’s always connected to a low resistance through X1 to ground or X0 to U1b’s output. Not so the X terminal of U2a with its connection to moderately high resistance R1. Here the leakage current effect must be considered.
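A quick budget shows why triggering the conversion promptly saves the circuit. The 5-nA leakage is the Table 2 worst case; the 1-nF hold capacitor and the 12-bit, 1.8-V full-scale ADC are assumptions for illustration only:

```python
# How quickly must the ADC conversion start after X1 turns off?
I_leak = 5e-9      # switch leakage current, amps (Table 2 worst case)
C_hold = 1e-9      # hold capacitor, farads (assumed)
bits, vfs = 12, 1.8  # assumed ADC resolution and full scale, volts

droop_rate = I_leak / C_hold          # droop in volts per second
half_lsb = vfs / 2**(bits + 1)        # half-LSB error budget, volts
t_budget = half_lsb / droop_rate      # time before half an LSB accrues

print(f"droop = {droop_rate:.0f} V/s, time budget = {t_budget*1e6:.0f} us")
```

At 5 V/s of droop, roughly 44 µs elapse before the leakage error reaches half an LSB under these assumptions, which is ample time to start a conversion after X1 opens.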

LTspice model and equation validation

Taking all this into account, equations can be developed with reference to the circuit seen in Figure 2. Figure 2 is an illustration of the LTspice file developed to model Figure 1 circuit operation and compare it with the equations (which are also evaluated in the file) to ensure their accuracies.

U2a charge injection is not explicitly shown in the circuit, but is incorporated by summing it with the intended input sample voltage Vin in the .param Vc0 statement. The .param statements and the circuit constitute the model. These statements and the algebraic expressions assigned to voltage sources eq_1 and eq_VOUT validate the equations by allowing direct comparisons with the performance of the model. This is accomplished by graphing simulations of the circuit and evaluations of the voltage sources and confirming that eq_1 = e_1 and that eq_Vout = Vout. Not accounted for are switch turn on and turn off times. Their effects will be addressed later.

Figure 2 The LTspice file comparing the performances of the circuit model and the equations developed of it.

Reference Figure 1 and Figure 2, particularly Figure 2’s .param statements. At time t = 0 when the X0 switches turn off and the X1’s turn on, the voltage across C1 has been initialized to Vc0 as seen in the C1 initial condition (IC) and the .param Vc0 statements. We can write that the current (w) through C1 can be seen in [1].

Therefore:

Assuming a solution of the form:    

Where t = time, [3] and [4] can be seen:

And:

Therefore, the voltage at terminal e_1 can be seen in [6].

To evaluate the voltage at terminal e_2 in the model, it is necessary to convolve the signal at e_1 with the impulse response h(t) of the rc and C2 network shown in [7].

Where the exponential time constant can be seen in [8].

The convolution is given by [9].

This evaluates to [10].

Where:              

Allowing for the U2c charge injection and U1d input offset, shown in [12].

or equivalently:              

The model and this last equation above predict the circuit output at any time t = t1 immediately after the charge injection of U2c has occurred due to the enabling of the X0 switches.

Assessments

Let’s get some worst-case error parameter values for U1 and U2. The original DI proposed a specific op amp, the TLV9164 [ii], but not a particular 74HC4053. Surprisingly, there are significant differences between parts from different 74HC4053 manufacturers. The MAX74HC4053A [iii] looks like a reasonable choice. Let’s consider operation of both ICs in the commercial temperature range of 0 to 70°C. Refer to Table 1 and Table 2.

| Supply | Temperature range | Input current, typical | Input offset voltage | Input offset voltage drift | Open-loop gain, minimum |
|---|---|---|---|---|---|
| 5–16 V | −40 to +125°C | ±10 pA | ±1.3 mV | ±0.25 µV/°C | 104 dB |

Table 1 The TLV9164 maximum parameter values.

| Supply | Temperature range | Switch resistance | Switch resistance differences | Flatness, VCOM = ±3 V, 0 V | COM current | NO current | Charge injection | Switch t(on) | Switch t(off) |
|---|---|---|---|---|---|---|---|---|---|
| ±5 V | 0 to 70°C | 125 Ω | 12 Ω | 15 Ω | ±2.5 nA | ±5 nA | ±10 pC | 250 ns | 200 ns |
| 5 V | 0 to 70°C | 280 Ω | – | – | ±5 nA | ±10 nA | ±10 pC | 275 ns | 175 ns |
| 3 V | 0 to 70°C | 700 Ω | – | – | ±5 nA | ±10 nA | ±10 pC | 700 ns | 400 ns |

Table 2 MAX74HC4053 parameters, maximum values. Note the performance degradation when powered by a single supply.

The output of U1c will not move in a positive direction if the sign of the parameter J in [5] is negative. Assuming ADCs whose most negative input value is ground, the circuit will fail to function properly. It’s unlikely that parameters Q1, Voffab, Voffc, and iLeak all take on their worst-case values in a particular circuit, but if they do, Vin will have to be more positive than 10 pC/1 nF + 2·1.2 mV + 1.2 mV + 2.5 nA · 14.3 kΩ ≈ 14 mV to avoid this “dead zone”. Of course, you’re free to use criteria other than the sum of the worst possibilities, but Caveat Designer!
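The worst-case dead-zone sum above can be checked directly; all values below are the ones quoted in the text, with R1 taken as 14.3 kΩ:

```python
# Worst-case "dead zone" threshold from the figures quoted in the text.
Q1      = 10e-12    # U2a charge injection, coulombs
C1      = 1e-9      # sampling capacitor, farads
Voff_ab = 1.2e-3    # U1a/U1b input offset contribution, volts
Voff_c  = 1.2e-3    # U1c input offset contribution, volts
I_leak  = 2.5e-9    # leakage current, amps
R1      = 14.3e3    # positive-feedback resistor, ohms

v_dead = Q1/C1 + 2*Voff_ab + Voff_c + I_leak*R1
print(f"dead-zone threshold ~ {v_dead*1e3:.1f} mV")
```

The sum works out to about 13.6 mV, i.e., roughly the 14 mV quoted; note that the charge-injection term alone contributes 10 mV of it.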

Another consideration is the circuit settling time in successive PWM periods for the sampling voltage of C1, particularly in the transit between an ADC full-scale voltage to half of its LSB (this is the most extreme case, which might not be a requirement for some applications). For a ±5-V powered MAX74HC4053A, two 125-Ω switches in series drive the 1-nF C1. With a 12-bit ADC, the required time is (2·125 Ω)·(1 nF)·ln(2^(12+1)) = 2.3 µs. Add the switch on-time of 250 ns, and the PWM should enable the X0 switches for tmin = three of its 1-µs cycles for accurate voltage sample acquisition. By comparison, 8-bit ADCs can get by with 2 µs.
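A short helper reproduces this acquisition-time arithmetic; the 125-Ω switch resistance, 250-ns turn-on time, and 1-nF C1 are the values used in the text:

```python
import math

# Acquisition time for the X0 (track) phase: two switch resistances in
# series charge C1, and settling from full scale to half an LSB takes
# ln(2^(n+1)) time constants, plus the switch turn-on time.
def acq_time(r_switch, c, bits, t_on):
    tau = 2 * r_switch * c
    return tau * math.log(2**(bits + 1)) + t_on

t12 = acq_time(125, 1e-9, 12, 250e-9)   # +/-5 V MAX74HC4053A, 12-bit ADC
t8  = acq_time(125, 1e-9, 8,  250e-9)   # same switches, 8-bit ADC

print(f"12-bit: {t12*1e6:.2f} us, 8-bit: {t8*1e6:.2f} us")
```

The 12-bit case lands at about 2.5 µs (hence three 1-µs PWM cycles), while the 8-bit case fits within 2 µs, matching the text.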

Calibration

The iLeak·R1 and the temperature-sensitive portions of U1’s input offset voltage errors are negligible in comparison with the ones caused by charge injection. However, as noted in the reference [i] and in typical curves provided in the switch datasheet [iii], the magnitudes and signs of voltages will significantly affect the sizes of the charge injections Q1 (U2a) and Q2 (U2c) and also, somewhat, the ra, rb, and rc resistances. For Q1, ra, and rb, the U1a and U1b input voltages are not determinable from A-to-D conversions. Increasing the values of capacitors C1 and C2 will reduce the Q/C charge injection-created error magnitudes but will also necessitate increases in PWM times. Avoiding such time increases by reducing the R1 value magnifies the errors due to mismatches of ra, rb and rc.

The resistances ra, rb and rc will vary with both temperature and voltage levels. Applying algebra to [13] shows, surprisingly, that if the ra, rb and rc resistances are identical, there is zero error introduced regardless of their value! (This assumes that enough time has elapsed for the e^(−t/Tc) term to be negligible.) While not identical, the slightly less than 1% maximum mismatch error can be calibrated out, but only for a given set of switch voltages and temperature. The datasheet does not provide the information required to determine the errors that could occur when the voltages and temperature are other than those present during calibration.

With the circuit as it stands, I know of no way to eliminate temperature- and voltage-sensitive errors. But there are errors insensitive to these conditions that can be calibrated out. The following procedure assumes an ADC of negligible error (its resolution and accuracy require further investigation) and conversion factor CF counts/volt, one perhaps present on the assembly line of a product incorporating this design.

For any instance of this circuit to which specific and accurately known Vin and -Vin (see Figure 1) voltages are applied at a given temperature, [13] can be thought of as a function of time and Vin: VOUT(t, Vin). VOUT(t1, Vin), VOUT(t2, Vin), and VOUT(t3, Vin) can be captured such that t3/3 = t2/2 = t1 > tmin. A t1 value of 15 µs is suitable, and Vin = 20 mV avoids the dead zone. (The reason for such a small input voltage is given later.) A t3 of 45 µs applies a gain of less than 24 and keeps things under a 1.8-V full-scale A-to-D level. It will be appreciated that e^(t3/T) = (e^(t1/T))^3 and e^(t2/T) = (e^(t1/T))^2 and that:

where each VOUT(t, Vin) is scaled by CF to a measurement made by the product line A-to-D. The difference terms cancel the constant terms in [13], and the ratio cancels the A·N term. Such cancellations are necessary if T is to be determined accurately. [14] is a quadratic equation, the desired one of whose two solutions is given by:

From this, the value of T can be obtained (note that T depends slightly on ra because of Req and so it also depends on voltage and temperature):

Further:

Allowing the solution:

N() is a function of Vin (see .param N) and is equal to S·Vin + U, where S and U are unknown and again are slightly dependent on the usual. The process of equations [14] to [18] will need to be repeated with a value Vin2 different from Vin to arrive at an A·N2 term. We could use different values of t, but we might as well keep the same ones. We can safely choose Vin2 = 25 mV (incurring a charge injection and switch resistances very close to those with Vin = 20 mV) and calculate:

From [13] it can be seen that:

so that:

Given an A-to-D conversion count and reversing [22], it can be seen that:

never forgetting that even this result is influenced by the usual.

Because of their negligible sensitivities to voltage and temperature, NP0/C0G capacitors and 25-ppm/°C or better metal film resistors are recommended. Parts with 1% or better tolerance are available at costs of around $0.01 in quantity.

Conclusions

This innovative circuit has several features to recommend it. It offers differential-to-single-ended conversion with very high CMRR and a wide common-mode operating range. It offers gains starting at 6 dB in increments of 0.6 dB, limited only by the combination of the sampled voltage and saturation at the positive supply rail. Op-amp gains are no greater than 6 dB, so there is no loss of bandwidth due to operation at high closed-loop gains. But this design has some disadvantages.

A detailed calibration scheme is needed, which requires the availability on the production line of an ADC whose requirements for accuracy and resolution have not yet been determined. Even with calibration, various errors which cannot be removed by calibration can impose an operational “dead zone” for circuit input voltages below a worst-case 14 mV. Errors due to switch charge injection and switch resistance vary with temperature and applied voltages and are difficult if not impossible to calibrate out. The MAX74HC4053A discussed here is a better ‘4053 than others, but another part may exist with smaller variations in resistance and charge injection.

I would suggest disconnecting the U2c X0 pin. This connection is of limited usefulness and does damage—it injects charge into U1c, affecting the signal fed through R1 to C1. (The effect on C2 during the “hold” mode can be neglected, being of very short duration due to the X1 rc “on” resistance in the C2 path.) If it is decided to retain this connection, please note that its effects have not been accounted for in the foregoing analysis.

Finally, I’d like to acknowledge the comments of eldercosta, a review and comments with some unique perspectives by David Lundquist, and especially the comments and contributions of Stephen Woodward, who designed the circuit discussed in this DI.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

References

[i] https://www.analog.com/media/en/training-seminars/tutorials/MT-088.pdf

[ii] https://www.ti.com/lit/ds/symlink/tlv9164.pdf?ts=1733035616360&ref_url=https%253A%252F%252Fwww.ti.com%252Fproduct%252FTLV9164

[iii] https://www.analog.com/media/en/technical-documentation/data-sheets/MAX4051-MAX4053A.pdf

Related Content


The post Error assessment and mitigation of an innovative data acquisition front end appeared first on EDN.

Arm’s Chiplet System Architecture eyes ecosystem sweet spot

Mon, 01/27/2025 - 11:46

Arm has announced the availability of the first public specification drafted around its Chiplet System Architecture (CSA), a set of system partitioning and chiplet connectivity standards harnessed in a design ecosystem supported by over 60 companies. The companies that are part of the CSA initiative include ADTechnology, Alphawave Semi, AMI, Cadence, Jaguar Micro, Kalray, Rebellions, Siemens, and Synopsys.

CSA aims to offer industry-wide standards and frameworks by facilitating the reuse of specialized chiplets, and thus address the fragmentation caused by compatibility issues in chiplet design. It provides a shared understanding of defining and connecting chiplets while developing composable system-on-chips (SoCs).

A couple of recent announcements show how semiconductor firms are employing CSA to build chiplets as part of Arm Total Design, a platform for deploying chiplet-based compute subsystems. Arm Total Design facilitates custom silicon solutions powered by Arm Neoverse Compute Subsystems (CSS), which includes processor cores, IPs, software, and design tools.

Take the case of Alphawave Semi, which has combined its proprietary I/O dies with an Arm Neoverse CSS-powered chiplet. In this chiplet design, Alphawave Semi has employed AMBA CHI chip-to-chip (C2C) architecture specification to connect artificial intelligence (AI) accelerators.

Figure 1 The block diagram shows a chiplet’s major design building blocks. Source: Alphawave Semi

ADTechnology, another member of Arm’s CSA initiative, has utilized Neoverse CSS V3 technology to create a CPU chiplet for high-performance computing (HPC) and AI/ML training and inference applications. In this chiplet, ADTechnology has incorporated Rebellions’ REBEL AI accelerator; the chiplet design is implemented on Samsung Foundry’s 2-nm gate-all-around (GAA) process node, which has been standardized in the CSA ecosystem.

Figure 2 ADTechnology’s CPU chiplet is powered by Neoverse CSS V3 technology. Source: Arm

AMI, also a member of the CSA initiative, is contributing its firmware expertise to accelerate the development of custom chiplet solutions. AMI, the first independent firmware vendor in the Arm Total Design ecosystem, offers a modular framework for separating the compute and I/O subsystems. Its pre-configured and pre-tested production-quality modules for reusable chiplets streamline the time and resources required for this stage of development.

A multitude of industry-wide chiplet initiatives—spanning from Arm Total Design to Neoverse CSS to CSA—is a testament to how Arm sees the emergence of chiplets as a major opportunity. It also highlights how Arm sees its place as both chiplet solution provider and ecosystem builder.

The ecosystem-building part is especially crucial because RISC-V companies are also making inroads in the chiplets realm.

Related Content


The post Arm’s Chiplet System Architecture eyes ecosystem sweet spot appeared first on EDN.

The SoC design: What’s next for NoCs?

Fri, 01/24/2025 - 12:28

Today’s high-end system-on-chips (SoCs) rely heavily on sophisticated network-on-chip (NoC) technology to achieve performance and scalability. As the demands of artificial intelligence (AI), high-performance computing (HPC), and other compute-intensive applications continue to evolve, designing the next generation of SoCs will require even smarter and more efficient NoC solutions to meet these challenges.

Although these advancements present exciting opportunities, they also bring significant hurdles. SoC designers face rapid expansion in architecture, time-to-market pressures, scarcity of expertise, suboptimal utilization of resources, and disparate toolchains.

Exponential growth in SoC complexity

SoC designs have reached unprecedented levels of complexity, driven by advancements in process technologies and design tools. Now, SoCs typically include between 50 and 500+ IP blocks, ranging from processor cores and memory controllers to specialized accelerators for AI and graphics.

These blocks, which once contained just tens of thousands of transistors, now house anywhere from 1 million to over 1 billion transistors each. As a result, these SoCs incorporate a staggering total of 1 billion to over 100 billion transistors, reflecting the exponential growth in both scale and sophistication, as shown in the figure below.

The above chart highlights the relationship between increasing transistor budgets and the use of SIP blocks. Source: Arteris, based on https://rb.gy/qmfcn and https://rb.gy/pgdop

This growth in IP blocks and transistor density has enabled the development of advanced architectures featuring multiple processor clusters. Each cluster typically contains up to 8 or more cores in mainstream designs, with high-performance configurations reaching 32 or more cores.

Today, these clusters are organized into arrays to provide massive parallelism. These cutting-edge designs integrate high-bandwidth memory controllers, dedicated AI accelerators, and sophisticated NoC interconnect systems to ensure seamless communication and scalability.

This unprecedented challenge is manageable by using advanced NoC interconnects, which serve as the backbone for efficient data transfer and communication within the chip. These on-chip networks enable seamless integration of numerous IP blocks. Moreover, high-end SoCs often rely on multiple NoCs, each tailored to specific tasks or subsystems to handle the diverse communication needs across different chip areas.

These NoCs may employ a variety of topologies, depending on the application requirements, such as rings for low-latency communication, trees for hierarchical organization, and meshes for scalability and flexibility.

To address these density and performance challenges, 3D stacking technologies are increasingly being adopted. These approaches integrate multiple layers of logic and memory vertically, enabling higher bandwidth and reduced latency compared to traditional 2D designs.

However, 3D stacking introduces additional complexity in NoC design, such as managing inter-layer communication and thermal constraints, which also require innovative interconnect solutions.

Additional challenges

The increasing sophistication of SoC designs has brought additional challenges driven by the rapid pace of growth in the market. As architectures become more elaborate, designers face mounting pressures to overcome these obstacles and adopt innovative solutions to try to keep pace with industry demands.

These challenges can be summarized as follows:

  • Time-to-market pressures: Modern SoC design faces immense competition, where delays can result in significant revenue loss and diminished market share. Traditional methods like manual NoC configuration are time-intensive, often consuming weeks or months, which is unsustainable in fast-paced markets.
  • Scarcity of expertise: The growing demand for specialized skills in SoC design outpaces the availability of experienced professionals. Engineering teams are often overburdened, with senior experts spending excessive time on repetitive, manual tasks rather than strategic and high-value design decisions.
  • Suboptimal utilization of resources: Manual design methods often result in inefficiencies such as excessive wire lengths, increased power consumption, and physical congestion. These inefficiencies impact the overall performance and escalate both the design complexity and production costs.
  • Disparate toolchains: Fragmented workflows in SoC development are a significant bottleneck, with disconnected tools used for floorplanning, connectivity and physical design. The lack of integration across these stages leads to inefficiencies, delays in achieving design closure, and difficulties in maintaining consistency throughout the design process.

Addressing these challenges requires adopting automated design methodologies, enhancing workforce expertise, and integrating toolchains to streamline workflows and reduce inefficiencies.

Designers require smarter NoC solutions

The pressure of this new wave of SoC design complexity is pushing design teams to their limits. An effective approach to managing these challenges is to divide the design into smaller, more manageable pieces by partitioning it into IP blocks.

While this method simplifies individual design tasks, it introduces a new challenge in ensuring seamless integration of these blocks to form a fully functional and optimized SoC. The integration process often reveals unexpected issues, such as mismatched interfaces, timing conflicts and resource contention, which can significantly impact performance and delay time-to-market.

The integration challenges become even more pronounced as SoC designs incorporate increasingly sophisticated components such as AI accelerators and advanced interconnect systems. For instance, the evolution of neural processor units (NPUs) and NoC technologies highlights how rapidly the complexity of SoC architectures has grown.

The first NPUs were typically implemented as arrays of multiply-accumulate (MAC) functions. By comparison, today’s NPUs are far more advanced and may be implemented as arrays of processing elements (PEs), all linked by their own mesh topology NoCs.

Similarly, NoC technology has significantly advanced. First-generation NoCs required manual layout and implementation, including the insertion of pipeline stages. Later generations of NoC technology introduced physical awareness, enabling automatic NoC generation and pipeline stage insertion.

The current generation of NoCs supports higher-end features such as soft tiling. This technology encompasses the automatic replication of processing units (PUs) such as processor clusters in high-level SoCs or PEs in NPUs. It also automatically generates the NoC and configures the network interface unit (NIU) associated with each PU with a unique address.
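As a rough illustration of the idea (not any vendor’s actual tool or address map), soft tiling can be pictured as replicating a processing element across a mesh and handing each copy’s NIU a distinct address window; all names and constants here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    x: int          # mesh column
    y: int          # mesh row
    niu_base: int   # unique base address for this tile's NIU

def soft_tile(rows, cols, base=0x4000_0000, window=0x0010_0000):
    """Replicate a processing element into a rows x cols mesh,
    assigning each copy's network interface unit (NIU) a unique
    address window. Base address and window size are made up."""
    return [Tile(x, y, base + (y * cols + x) * window)
            for y in range(rows) for x in range(cols)]

mesh = soft_tile(2, 2)
print([hex(t.niu_base) for t in mesh])
```

Even this toy version captures why automating the step matters: the replication, mesh coordinates, and per-NIU addressing are mechanical but error-prone when done by hand at the scale of hundreds of PEs.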

Features like physical awareness and NoC soft tiling dramatically increase productivity, reduce time to market, and mitigate risk. However, as design complexity continues to grow, additional advancements will be needed to address emerging challenges.

Preparing for the future of SoC design

Successfully realizing next-generation devices is getting harder, especially when it comes to integrating all the IPs into the full SoC. There is a clear and present need for the evolution of tools, including NoC technologies, to address the expanding requirements driven by market shifts such as:

  • Automate repetitive and time-consuming tasks, freeing up engineering resources for innovation.
  • Accelerate NoC generation without sacrificing performance, power, or quality.
  • Adapt to diverse design topologies, seamlessly accommodating both hierarchical and flat NoC structures.
  • Optimize across multiple metrics, including wire length, latency and congestion, to deliver high-performing designs that meet tight market windows.
  • Empower engineers with user-friendly interfaces and flexible workflows, enabling incremental updates and integration into existing toolchains.

When NoC tools and technologies with these capabilities become available, SoC designers will be able to address these escalating design requirements with greater efficiency and innovation.

In short, next-generation NoC solutions must be engineered to meet today’s challenges while anticipating the accelerating demands of future SoC design.

Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.

 

Related Content


The post The SoC design: What’s next for NoCs? appeared first on EDN.

MCUs target motor control and power conversion

Thu, 01/23/2025 - 19:34

Infineon’s first PSOC Control MCUs, based on the Arm Cortex-M33 processor, enable secured motor control and power conversion. Supported by ModusToolbox design tools and software, the entry-line and mainline devices offer varied performance, features, and memory options.

PSOC Control MCUs—C3M for motor control and C3P for power conversion—can be used in appliances, industrial drives, robots, light EVs, solar systems, and HVAC equipment. Their Cortex-M33 processor runs at up to 180 MHz with a DSP and FPU, while a CORDIC math coprocessor accelerates control loop calculations.

Entry-line MCUs (C3M2, C3P2) feature high-resolution, high-precision ADCs and timers, while mainline MCUs (C3M5, C3P5) add high-resolution PWMs for faster real-time response. The devices are PSA Certified Level 2/EPC2 and include Class B and SIL 2 safety libraries. A crypto accelerator, Arm TrustZone, and secure key storage enable IP protection and firmware updates.

The PSOC Control C3 entry-line and mainline lineup comprises 34 devices, all available now.

PSOC Control C3 product page

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs target motor control and power conversion appeared first on EDN.

Keysight elevates chiplet design environment

Thu, 01/23/2025 - 19:34

Chiplet PHY Designer 2025 from Keysight offers simulation capabilities for UCIe 2.0 and support for the Open Compute Project Bunch of Wires (BoW) standard. Tailored to AI and data center applications, this digital chiplet design and die-to-die (D2D) platform enables pre-silicon level validation, streamlining the path to tapeout.

The Chiplet PHY Designer aids chiplet development by ensuring interoperability with UCIe 2.0 and BoW standards, enabling seamless integration within advanced packaging ecosystems. It accelerates time-to-market by automating simulation and compliance testing setup, including Voltage Transfer Function (VTF) analysis, simplifying design workflows.

Enhancing design accuracy, the toolset provides insight into signal integrity, bit error rate (BER), and crosstalk analysis, minimizing the risk of costly silicon re-spins. It also optimizes clocking designs by supporting advanced schemes like quarter-rate data rate (QDR), ensuring precise synchronization for high-speed interconnects.

To read about what’s new in Chiplet PHY Designer 2025, click here.

Chiplet PHY Designer product page 

Keysight Technologies 



The post Keysight elevates chiplet design environment appeared first on EDN.

GaN die power custom MMICs

Thu, 01/23/2025 - 19:34

Guerrilla RF’s GRF0020D and GRF0030D GaN-on-SiC HEMT power amplifiers deliver up to 50 W of saturated power. Available as bare die, these discrete transistors are intended for wireless infrastructure, military, aerospace, and industrial heating applications, supporting integration into custom MMICs.

Each device operates from either 50-V or 28-V supply rails, covering multiple octaves of operational bandwidth for continuous wave, linear, and pulsed modulation. When using a 50-V rail, the GRF0030D delivers 50 W (PSAT) from DC to 6 GHz, with gain ranging from 13.5 dB to 23.7 dB. At 28 V, it provides up to 27.5 W of saturated output power.

The GRF0020D offers up to 30 W at 50 V and 19 W at 28 V. This lower-power HEMT supports frequencies up to 7 GHz and provides gain between 13.8 dB and 24.3 dB.

The GRF0020D and GRF0030D are 100% DC production tested to ensure known good die (KGD) compliance. They are available for order, with samples ready for distribution. Prices start at $30 each for quantities of 100 units.

GRF0020D product page 

GRF0030D product page 

Guerrilla RF 



The post GaN die power custom MMICs appeared first on EDN.

Scope option enables wideband modulated load pull

Thu, 01/23/2025 - 19:34

R&S offers a load pull test setup with wideband modulated signals using the RTP oscilloscope for non-linear device characterization. Compared to conventional vector network analyzers, this setup enables wideband modulation characterization of RF frontends across varying impedances. It allows precise validation of key performance indicators, such as error vector magnitude and adjacent channel leakage ratio, to support the development of RF components for next-generation wireless technologies.

Designed to verify power amplifier performance when connected to an antenna with dispersive impedance, the setup is based on the RTP084 oscilloscope with the wideband modulated load-pull option RTP-K98, paired with the SMW200A vector signal generator. The oscilloscope’s internal architecture ensures precise phase and time synchronization for forward and reverse wave measurements. Meanwhile, the dual-path vector signal generator provides accurate timing and phase stability between the input and tuning signal for load pull operation.

The RTP-K98 software processes the oscilloscope’s measured data, performs the necessary calculations to achieve the target impedance, and controls the signal generator. It is well-suited for verifying the performance of RF frontends, typically used across wider frequency ranges and multiple transmission bands, such as 5G or Wi-Fi.

Option RTP-K98 is available now. For more information on load pull testing, click here.

RTP product page

Rohde & Schwarz  


The post Scope option enables wideband modulated load pull appeared first on EDN.

TCXO enhances synchronization for 800G networks

Thu, 01/23/2025 - 19:34

The SiT5977 Super-TCXO from SiTime is a single-chip timing device that achieves 3X better synchronization than its predecessor and enables 800G network connectivity. Part of the Elite RF family, this differential-ended TCXO optimizes AI compute efficiency in large data centers.

With a dedicated low-phase-noise MEMS resonator driving its integrated PLL, the SiT5977 simplifies AI system architectures by replacing multiple timing components. This ultra-stable, low-jitter TCXO provides a 156.25-MHz output with 80-fs phase jitter and LVDS outputs, supporting 800G and higher links. Integrated digital control adds system-level programmability.

The SiT5977 offers ±0.1 to ±0.25 ppm frequency stability, ensuring precise timing for high-speed networks and AI systems. Designed for demanding environments, it features a ±1-ppb/°C frequency slope (dF/dT) for resilience against airflow and thermal shock. Its digitally controlled tuning allows fine frequency adjustments with a ±400-ppm pull range and 0.05-ppt (5e-14) resolution via an I2C/SPI interface, facilitating embedded control loops for real-time compensation.
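To put those fractional specs into absolute terms at the 156.25-MHz output, the ppm/ppb/ppt figures convert with a single multiplication (illustrative arithmetic based on the numbers above):

```python
F0 = 156.25e6  # SiT5977 output frequency, Hz

def offset_hz(f_hz, fractional):
    """Absolute frequency offset for a fractional (ppm/ppb/ppt) error."""
    return f_hz * fractional

# +/-0.25 ppm stability band in absolute terms
print(offset_hz(F0, 0.25e-6))   # ~39.1 Hz
# +/-1 ppb/degC frequency slope (dF/dT)
print(offset_hz(F0, 1e-9))      # ~0.156 Hz per degC
# 0.05-ppt (5e-14) tuning resolution
print(offset_hz(F0, 5e-14))     # ~7.8 microhertz per step
```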

Housed in a compact 5.0×3.5-mm package, the SiT5977 Super-TCXO is now in production, with samples available.

SiT5977 product page 

SiTime


The post TCXO enhances synchronization for 800G networks appeared first on EDN.

Calories, power dissipation, and environmental chambers

Thu, 01/23/2025 - 16:01

I was having a bottle of iced tea one day when I noticed something on the label: an excerpt referring to a personal intake of 2,000 calories per day (Figure 1).

I confirmed that number independently and learned that, while it is an average, it is an essentially correct figure for daily caloric intake.

Figure 1 Bottle label excerpt referring to a 2,000-calorie-per-day standard diet.

If we look up the word “calorie”, we find that one calorie equals 4.184 joules (Figure 2). The “calories” on a food label, however, are actually kilocalories (dietary Calories), so each one is 4,184 joules.

Figure 2 A quick word search reveals that a single calorie is the equivalent of 4.184 joules.

If we then consume 2,000 dietary Calories per day, we find the following in Figure 3.

Figure 3 The human body’s power dissipation based upon the 2,000-Calorie-per-day reference for a standard diet.

Two thousand dietary Calories per day equals 8.368 × 10⁶ joules per day, which comes to 96.85 joules per second, or just under 97 W. We’ll call that 100 W just for convenience of thought. We are dealing with approximations, of course.

If all of one’s caloric intake is ultimately dispersed as body heat, that 100 W of body heat gets imparted to the environment of whatever delicate unit under test (UUT) one happens to be working on while working diligently inside an environmental test chamber.

I once saw such an environmental test chamber in which there was a bank of 100-W lamps mounted “over there”. Those lamps were kept constantly lit, except that if one more person were to enter the room, one lamp would be extinguished. This was supposed to help hold the thermal environment of the UUT as invariant as possible, and it makes sense: each person dissipates roughly as much heat as one 100-W bulb, so the room’s total heat load stays constant.

If we say the following:

  • That each light bulb radiates its 100 W over a spherical area of 4*π*R², where R is the radial distance from the light bulb
  • That the UUT has 1 square foot (1 ft²) of presented area receiving that bulb’s thermal radiation

We can then seek the value of R at which the direct radiation from one bulb onto the UUT is down by a factor of 1,000 from its 100-W total, to a modest 100 mW.

Noting that 100 W divided by 100 mW equals 1,000, we seek the value of R at which the area of the posited sphere around the bulb comes to 1,000 ft².

Where 4*π*R² = 1,000 ft², we find that R is nominally 8.92 ft.

For all of these crude approximations, y’know what? That is indeed just about how far away those light bulbs were positioned with respect to the UUT at hand in that chamber: far enough that the direct radiation reaching the UUT from any one bulb, or from any one person, was down in the milliwatt range.
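A few lines of Python confirm the inverse-square arithmetic:

```python
import math

bulb_watts = 100.0
target_watts = 0.1                         # 100 mW presented to the UUT
attenuation = bulb_watts / target_watts    # factor of 1,000

# With 1 sq ft of presented UUT area, the sphere around the bulb
# must have a surface area of 1,000 sq ft: 4*pi*R^2 = 1000.
sphere_area_sqft = attenuation * 1.0
R_ft = math.sqrt(sphere_area_sqft / (4 * math.pi))
print(round(R_ft, 2))   # 8.92
```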

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post Calories, power dissipation, and environmental chambers appeared first on EDN.

A closer look at LLM’s hyper growth and AI parameter explosion

Thu, 01/23/2025 - 13:35

The rapid evolution of artificial intelligence (AI) has been marked by the rise of large language models (LLMs) with ever-growing numbers of parameters. From early iterations with millions of parameters to today’s tech giants boasting hundreds of billions or even trillions, the sheer scale of these models is staggering.

Table 1 outlines the number of parameters in the most popular LLMs today.

Table 1 The number of parameters in today’s most popular LLMs reaches into the billions if not trillions. Source: VSORA

To understand why leading-edge LLMs are scaling so rapidly, we must explore the relationship between parameters, performance, and the technological advancements driving this trend.

Role of parameters in language models

In neural networks, parameters are the weights and biases that the model learns and adjusts during training. They are analogous to synaptic connections in the human brain.

From a computational architecture perspective, parameters act as the model’s memory, storing the complex relationships and subtle nuances within the input data. Intuitively, an increase in the number of parameters in a language model translates to enhanced ability to understand context, generate coherent text, and even perform tasks for which they were not explicitly trained.

Today, the largest models exhibit behaviors such as advanced reasoning, creativity, and the ability to generalize across diverse domains, reinforcing the notion that scaling up is essential for pushing the boundaries of what AI can achieve.

Scaling laws and diminishing returns

Early LLMs demonstrated that increasing the size of models led to predictable improvements in performance, especially when paired with larger datasets and superior computational power. However, these improvements follow a diminishing returns curve. As models grow larger, the incremental benefits become smaller, requiring exponentially more resources to achieve significant gains.
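Empirical scaling-law studies model loss as a power law of parameter count, roughly L(N) ≈ (Nc/N)^α. A toy sketch with illustrative constants — not a fit to any real model — shows the diminishing-returns shape described above:

```python
def loss(n_params, n_c=8.8e13, alpha=0.076):
    """Illustrative power-law scaling: loss falls slowly as N grows."""
    return (n_c / n_params) ** alpha

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")

# Each 10x in parameters buys a roughly constant *multiplicative*
# improvement (10**-alpha, about 0.84x here), so the absolute gain
# from every additional 10x keeps shrinking.
```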

Despite this, the race to build bigger models persists because the returns, while diminishing, remain valuable for high-stakes applications. For instance, in areas like medical diagnostics, scientific research and autonomous systems, even marginal improvements in AI performance can have profound implications.

Drivers of parameter explosion

Modern LLMs are trained on vast and diverse datasets encompassing entire libraries of books, research papers, studies, analyses of a wide range of human endeavors, extensive software code repositories, and many more data sources. The breadth of these datasets necessitates larger models with billions of parameters to fully exploit the richness of the data.

Multimodal capabilities

Leading-edge LLMs are not limited to processing text alone; many are designed to handle multimodal inputs, integrating text, images, and other types of data. Expanding the parameter count allows these models to process and draw connections between various data types, thus enabling them to perform tasks that involve more than one type of input—such as image captioning, generating audio responses, and cross-referencing visual data with textual information.

The trend toward multimodal capabilities requires a significant increase in parameters to manage the added complexity. The added computational storage enables richer representations of different data modalities and deeper cross-modal understanding, making these models more versatile and valuable for practical applications.

Zero-shot/few-shot learning

One standout advancement in LLMs has been their proficiency in zero-shot and few-shot learning. These models can perform new tasks with minimal examples or even without explicit task-specific training. GPT-3 popularized this capability, showing that an appropriately large model could infer task instructions from just a few examples.

Achieving this level of generalization requires a massive number of parameters so that the model can encode a wide variety of linguistic and factual knowledge into its architecture. This capability is particularly useful in real-world applications where training data may not be available for every conceivable task. Expanding parameter counts helps LLMs build the necessary knowledge and contextual flexibility to adapt to various tasks with minimal guidance.

The competitive AI landscape

The competitive nature of AI research and development also fuels parameter explosion. Companies and research institutions strive to outdo each other in developing state-of-the-art models with more impressive capabilities.

The metric of “parameter count” has become a benchmark for gauging the power of an LLM. While sheer size is not the sole determinant of a model’s effectiveness, it has become an important factor in competitive positioning, marketing, and funding within the AI field.

Challenges in computational power and training infrastructure

The dramatic rise in parameter counts for AI models would not have been possible without parallel advancements in computational power and the supporting infrastructure. For decades, AI progress was hindered by the limitations of the central processing unit (CPU), the dominant computing architecture since its inception in the late 1940s. CPUs, while versatile, are inefficient at parallel processing, a critical capability for training modern AI systems.

A turning point occurred about a decade ago with the adoption of graphics processing units (GPUs) for executing deep neural networks. Unlike CPUs, GPUs are designed for efficient parallel computation, enabling rapid acceleration in AI capabilities.

Today, LLMs leverage distributed training across thousands of GPUs or specialized hardware such as tensor processing units (TPUs), combined with optimized software frameworks. Innovations in cloud computing, data parallelism, and sophisticated training algorithms have made it feasible to train models containing hundreds of billions of parameters.

Techniques like model parallelism and efficient gradient-based optimization have further advanced the field by distributing training tasks across multiple processors and clusters.
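Data parallelism, the simplest of these distribution schemes, can be caricatured in a few lines: each worker computes gradients on its own data shard, the gradients are averaged (an all-reduce), and the shared weights are updated once. This is a toy sketch of the idea, not any real framework’s API:

```python
def sgd_data_parallel_step(weights, shards, grad_fn, lr=0.1):
    """One data-parallel step: per-shard gradients, then an all-reduce (mean)."""
    per_worker = [grad_fn(weights, shard) for shard in shards]
    avg = [sum(g[i] for g in per_worker) / len(per_worker)
           for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, avg)]

# Toy objective: fit y = 2x with squared loss; gradient of (w*x - y)^2 wrt w.
def grad_fn(weights, shard):
    (w,) = weights
    return [sum(2 * (w * x - y) * x for x, y in shard) / len(shard)]

shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = [0.0]
for _ in range(50):
    w = sgd_data_parallel_step(w, shards, grad_fn, lr=0.05)
print(round(w[0], 3))   # converges toward 2.0
```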

However, while larger parameter counts unlock unprecedented potential, they also bring significant challenges, chief among them being the soaring hardware computing resource demands. These demands inflate the total cost of ownership, encompassing not only sky-high upfront hardware acquisition costs but also steep operational and maintenance expenses.

Training vs. inference

Training: A computational beast

Training involves processing massive amounts of unstructured data to achieve accurate results, regardless of how long the task takes. It’s an extremely computationally intensive process, often reaching performance levels in the ExaFLOPS range.

Achieving these results typically demands months of continuous 24/7 operation on cutting-edge hardware. Today, this is conducted on thousands of GPUs, mounted on large boards and deployed in quantities available only in the largest data centers. These setups come at enormous cost, but they are essential investments, as no viable alternative exists at present.

Inference: A different approach

Inference operates under a distinct paradigm. While high performance remains critical, whether conducted in the cloud or at the edge, inference typically handles smaller, more targeted datasets. The primary objectives are achieving fast response times (low latency), minimizing power consumption, and reducing acquisition costs. These attributes make inference a more cost-effective and efficient process compared to training.

In data centers, inference is still executed using the same hardware designed for training—an approach that is far from ideal. At the edge, a variety of solutions exist, some outperforming others, but no single offering has emerged as a definitive answer.

Rethinking inference for the future

Optimizing inference requires a paradigm shift in how we approach three interconnected challenges:

  1. Reducing hardware requirements
  2. Reducing latency
  3. Enhancing power efficiency

Each factor is critical on its own but achieving them together is the ultimate goal for driving down costs, boosting performance, and ensuring sustainable scalability.

Reducing hardware requirements

Lowering the amount of hardware needed for inference directly translates to reduced acquisition costs and a smaller physical footprint, making AI solutions more accessible and scalable. Achieving this, however, demands innovation in computing architecture.

Traditional GPUs, today’s cornerstone of high-performance computing, are reaching their limits in handling the scaling of AI models. A purpose-built architecture can significantly reduce the hardware overhead by tailoring design to the unique demands of inference workloads, delivering higher efficiency at lower costs.

Reducing latency

Inference adoption often stalls when query response times (latencies) fail to meet user expectations. High latencies can disrupt user experiences and erode trust in AI-driven systems, especially in real-time applications like autonomous driving, medical diagnostics, or financial trading.

The traditional approach to reducing latency—scaling up hardware and employing parallel processing—inevitably drives up costs, both upfront and operational. The solution lies in a new generation of architectures designed to deliver ultra-low latencies intrinsically, eliminating the need for brute-force scaling.

Enhancing power efficiency

Power efficiency is not just an operational imperative; it is an environmental one. Energy-intensive AI systems contribute to rising costs and a growing carbon footprint, particularly as models scale in size and complexity. To address this, inference architectures must prioritize energy efficiency at every level, from the processor core to the overall system design.

Breaking through the memory wall

At the core of these challenges lies a shared bottleneck: the memory wall. Even with the rapid evolution of processing power, memory bandwidth and latency remain significant constraints, preventing full utilization of available computational resources. This inefficiency is a critical obstacle to simultaneously reducing hardware, cutting latency, and improving power efficiency.
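A napkin calculation shows why the memory wall dominates inference: autoregressive generation must stream the model’s weights from memory for every token produced, so memory bandwidth, not raw FLOPS, often sets the latency floor. The numbers below are illustrative, not tied to any particular device:

```python
def min_seconds_per_token(n_params, bytes_per_param, mem_bw_bytes_per_s):
    """Lower bound on per-token latency if every weight is read once per token."""
    return (n_params * bytes_per_param) / mem_bw_bytes_per_s

# A 70B-parameter model in 16-bit weights on a 2-TB/s accelerator:
t = min_seconds_per_token(70e9, 2, 2e12)
print(f"{t * 1e3:.0f} ms/token -> at most {1 / t:.0f} tokens/s")
```

However fast the compute units are, this bound only moves if bandwidth rises or the bytes moved per token fall (quantization, caching, or architectures that keep weights closer to the processing elements).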

Transformation of AI systems

The rapid expansion of parameters in cutting-edge LLMs reflects the industry’s unyielding drive for superior performance and enhanced capabilities. While this progress has unleashed groundbreaking possibilities, it has also exposed critical limitations in current processing hardware.

Addressing these challenges holistically will open the path forward to wide adoption of inference as a seamless, scalable process that performs equally well in both cloud and edge environments.

In 2025, innovative solutions are expected to redefine the hardware landscape, paving the way for more efficient, scalable, and transformative AI systems.

Lauro Rizzatti is a business advisor to VSORA, a startup offering silicon IP solutions and chips. He is a verification consultant and industry expert on hardware emulation.

 

Related Content


The post A closer look at LLM’s hyper growth and AI parameter explosion appeared first on EDN.

Microvolts to kilovolts in milliseconds with one I/O pin

Wed, 01/22/2025 - 16:28

Figure 1’s silly-simple voltage-to-time ADC is an exercise in dynamic range. Assuming that it is used with a 10-MHz counter/timer, its resolution is roughly 10 µV per count for inputs around 0 V and 100 mV per count at 1 kV, and it never really over-ranges. The simple trick that provides this multi-decade measurement span is the inherent logarithmic behavior of RC timing networks. 

Here’s how this one works.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 U1 works with R1, R2, and C1 to logarithmically digitize 0 V to 1 kV inputs while tying up only one microcontroller I/O pin.

Between conversions, U1’s reset pin 4 is held active-low either by the connected GPIO pin or by U1 itself. Using a 555 in this self-resetting mode is unusual but very handy here. It holds C1 at or very near zero, since the on-resistance (Ron) of pin 7’s open-drain FET is typically just 15 Ω.

A conversion starts when the I/O pin is programmed for output and pulsed high, overriding U1 pin 3 (Out) and releasing the reset as sketched in Figure 2. The I/O pin is then immediately tri-stated and reprogrammed as input, gating an internal counter/timer peripheral for measurement of the conversion time T. 

Because U1’s pin 2 (Trigger) is held low, the end of reset also sets pin 3 high and releases pin 7, allowing C1 to begin to ramp positive. When C1 reaches the 2.048-V threshold voltage on pin 5, conversion completes, and the time T required to do so is the conversion result. Arrival at the threshold drives pin 3 (Out) low, and thereby both pin 4 (Reset) and the GPIO pin, the latter serving as the “conversion complete” status bit to the microprocessor. Meanwhile, pin 7 is driven low to discharge C1. This process completes in about 12 µs, readying the converter for another cycle.

Figure 2 A single general-purpose tri-state I/O pin serves to both control and measure U1’s time-out. The pin is programmed for output and pulsed positive to start conversion, then tri-stated for timer input. Conversion time (T) is 10 ms for Vin = 0, decreasing to approximately 100 µs for Vin = 1 kV.

T versus Vin is given by the following equation:

T = C1(R1||R2) × ln((0.209Vin + 3.234) / (0.209Vin + 1.1919))
   = 10.0 ms × ln((0.209Vin + 3.234) / (0.209Vin + 1.1919))

This is plotted in Figure 3 for Vin = 10 mV to 1kV.

Figure 3 The conversion time (T) in milliseconds for Vin from 0.01 V to 1000 V.

To recover Vin from a T acquisition, do this:

Vin = (3.234 – 1.1919e^(T/10.0ms)) / (0.209(e^(T/10.0ms) – 1))
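Both expressions drop straight into code, and the endpoints match the stated conversion times (about 10 ms at Vin = 0, roughly 100 µs at 1 kV). A quick Python check:

```python
import math

TAU = 10.0e-3   # C1 * (R1||R2), seconds

def t_from_vin(vin):
    """Conversion time T (s) for input voltage Vin (V)."""
    return TAU * math.log((0.209 * vin + 3.234) / (0.209 * vin + 1.1919))

def vin_from_t(t):
    """Recover Vin (V) from a measured conversion time T (s)."""
    e = math.exp(t / TAU)
    return (3.234 - 1.1919 * e) / (0.209 * (e - 1))

print(t_from_vin(0))       # ~10 ms
print(t_from_vin(1000))    # ~97 us
print(round(vin_from_t(t_from_vin(500.0)), 6))   # round-trip: 500.0
```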

Finally, here’s something in the nature of a reality check. 

You might be wondering why R1 is shown as a series connection of four 1M resistors instead of a single 4M component.  The answer is the somewhat obvious fact that 1 kV is some serious voltage and the somewhat less obvious fact that resistors, somewhat like capacitors, have voltage ratings. Resistors rated for 1 kV are not the usual breed of cat.

And speaking of cats, please remember the old story about what curiosity did to one unfortunate feline and consider that prudent and proper safety practices are literally vital when choosing to work with voltages of this magnitude.

Me–OW!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Microvolts to kilovolts in milliseconds with one I/O pin appeared first on EDN.

The rise of MCU-enabled sensor designs

Wed, 01/22/2025 - 15:29

New MCUs are addressing fundamental challenges in sensor technology by offering ample memory, a rich set of interfaces, and most importantly, the ability to run intelligent software algorithms. The key enablers include incorporation of sensing-supportive processor cores and MEMS integration with MCUs.

Read full story at EDN’s sister publication, Planet Analog.

Related Content


The post The rise of MCU-enabled sensor designs appeared first on EDN.

Pages