EDN Network

Voice of the Engineer

A single op-amp solution to stabilize laser output

Tue, 05/27/2025 - 14:42

Semiconductor laser diodes (SLDs) are often packaged with a photodiode. The output current from this photodiode can be monitored to regulate the output power intensity of the laser diode. SLDs, however, are prone to pathological behaviors, such as temperature-induced drift and mode-hopping, that can alter the output intensity. A popular approach to stabilizing the output intensity is to first convert the photodiode current to a voltage. This voltage can then be read by a microcontroller, where logic can be programmed to adjust the current supplied to the laser diode. This method is illustrated in Figure 1.

Figure 1 Using a microcontroller to regulate laser diode output power by sensing photodiode current.
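For context on the Figure 1 approach, here is a minimal sketch of the microcontroller's regulation loop. It assumes a hypothetical HAL (adc_read_mv and dac_write_ua are placeholder names, not a real API) and a simple proportional correction; a real implementation would add filtering, soft-start, and eye-safety current limits.

```python
# Hypothetical firmware loop for Figure 1: hold the photodiode monitor
# voltage at a fixed setpoint by trimming the laser diode drive current.
# adc_read_mv() and dac_write_ua() are placeholder HAL names, not a real API.

SETPOINT_MV = 1200    # target photodiode monitor voltage, mV (assumed)
KP = 0.5              # proportional gain, uA of drive per mV of error (assumed)
DRIVE_MIN_UA, DRIVE_MAX_UA = 0, 20000  # laser driver limits, uA (assumed)

drive_ua = 5000.0     # initial laser diode drive current, uA

def control_step(adc_read_mv, dac_write_ua):
    """One iteration of the regulation loop; call periodically from a timer."""
    global drive_ua
    error_mv = SETPOINT_MV - adc_read_mv()   # positive error -> laser too dim
    drive_ua += KP * error_mv                # proportional correction
    drive_ua = max(DRIVE_MIN_UA, min(DRIVE_MAX_UA, drive_ua))
    dac_write_ua(int(drive_ua))

# Example step against stub HAL functions:
control_step(lambda: 1150, lambda ua: print(f"drive -> {ua} uA"))
```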

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 2 provides an alternative implementation that uses a single operational amplifier. When the circuit is powered on, there is initially no photodiode current. The voltage at the positive input of the op-amp is pulled to VCC, and the op-amp powers the laser diode. This induces current in the photodiode, which creates a voltage drop across R1, setting the positive input of the op-amp to: VCC − Iphotodiode × R1.

Figure 2 A single op-amp solution using negative feedback to provide output power regulation.

The op-amp buffers this voltage and feeds it to the laser diode. The system stabilizes at an operating point determined by:

  1. The laser diode’s VI-intensity curve
  2. The coupling efficiency between the laser diode and photodiode
  3. The current-intensity response of the photodiode
  4. R1

Thereafter, negative feedback stabilizes any variations in output intensity. If the laser intensity increases, the photodiode responds by generating a higher current, which in turn creates a larger voltage drop across R1. This reduces the output voltage of the op-amp, subsequently decreasing the laser intensity. The opposite behavior is seen with a drop in laser output power.
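To see how that equilibrium emerges numerically, here is a toy model of the Figure 2 loop. VCC and R1 match the prototype described next, but the laser threshold voltage and the lumped photodiode-current-per-volt term are assumed values for illustration only.

```python
# Toy model of the Figure 2 loop: the op-amp buffer forces
# V_drive = VCC - I_pd * R1, while I_pd rises with laser drive.
# Device parameters below are illustrative assumptions.

VCC = 5.0     # supply, V (matches the prototype)
R1 = 68e3     # ohms (matches the prototype)
V_TH = 1.6    # assumed laser forward/threshold voltage, V
G = 20e-6     # assumed photodiode current per volt of overdrive, A/V
              # (lumps the L-I slope, coupling, and responsivity together)

def photodiode_current(v_drive):
    """Assumed monotonic laser-drive -> monitor-photodiode-current model."""
    return max(0.0, v_drive - V_TH) * G

def loop_error(v_drive):
    """Zero at equilibrium, where the buffered voltage equals the drive."""
    return (VCC - photodiode_current(v_drive) * R1) - v_drive

# Bisection: loop_error is positive at V_TH and negative at VCC.
lo, hi = V_TH, VCC
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if loop_error(mid) > 0 else (lo, mid)

v_op = 0.5 * (lo + hi)
print(f"operating point: {v_op:.3f} V, I_pd = {photodiode_current(v_op)*1e6:.1f} uA")
```

Re-running with a larger R1 settles at a lower drive voltage, consistent with the observation below that R1 sets the output power.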

This circuit was built on a breadboard using the OPV314 850-nm VCSEL and the OPA551P op-amp from Texas Instruments (Figure 3). R1 was set to 68 kΩ, and VCC was set to 5 V.

Figure 3 Components assembled on a breadboard using the OPV314 850-nm VCSEL and the OPA551P op-amp.

The oscilloscope trace captured from the positive node of the op-amp is shown in Figure 4, demonstrating the stable output from the laser (arbitrary units). R1 can be used to control the output power intensity.

Figure 4 Oscilloscope trace of positive node of op-amp (proxy for output laser intensity).

Anirban Chatterjee is a biomedical engineer with 10 years of experience in the consumer electronics industry. He holds a master’s degree in electrical engineering and is a member of IEEE. He holds a keen interest in biomedical sensors and associated signal processing techniques.


How reverse polarity protection works to safeguard car battery

Tue, 05/27/2025 - 11:01

Reverse polarity protection is essential in battery-connected automotive systems. So, while reverse polarity is risky, how do we prevent it? This article delves into reverse polarity protection circuits built around standard and Schottky diodes, as well as high-side P-channel and N-channel MOSFETs. It also offers design ideas on how to implement these protection circuits efficiently.

Read the full article at EDN’s sister publication, Planet Analog.


The truth about Wolfspeed’s bankruptcy chatter

Mon, 05/26/2025 - 17:11

At a time when the explosive growth in artificial intelligence (AI), data centers, electric vehicles (EVs), and renewable energy is triggering an unprecedented demand for high-voltage, high-frequency and high-efficiency power devices, the chatter about silicon carbide (SiC) poster child Wolfspeed’s bankruptcy has startled the semiconductors world.

Wolfspeed, which divested its LED and RF businesses to focus on SiC-based power electronics, has been considered a flagbearer in the rapidly emerging SiC semiconductors market. The company pioneered 1-inch, 2-inch, 4-inch and 6-inch SiC wafers, and it was the first outfit to open an 8-inch SiC wafer fab in Mohawk Valley in 2022.

In fact, Wolfspeed is now the only company manufacturing SiC devices on 8-inch wafers in high volume. So, what has gone wrong in Wolfspeed’s SiC fairy tale? For a start, while the word bankruptcy triggers a sense of shock for a company that’s considered the market leader, the truth is that Wolfspeed is restructuring itself to address financial woes and reinforce operational efficiency.

After all, SiC is a new market that is constantly evolving. That inevitably brings growing pains, especially when a new technology like SiC entails higher product development costs while order volumes remain small. In other words, Wolfspeed’s situation is more nuanced than that of a company in crisis.

Figure 1 SiC-based devices promise to transform power electronics in segments ranging from data centers to EVs to renewable energy. Source: Wolfspeed

Why bankruptcy

Now, let’s take a closer look at Wolfspeed’s predicament. First and foremost, a slowdown in EV demand is widely quoted as the cause of Wolfspeed’s current misfortunes. Second, while the SiC substrate business has served as the cash cow for Wolfspeed, the arrival of Chinese players has led to a steep decline in the price of SiC substrates.

According to Yole, the advent of Chinese SiC substrate suppliers has led to a significant capacity expansion and a 30% price drop in 2024. Third, and probably most important, are Wolfspeed’s financial headwinds. It’s carrying $6.5 billion in debt while its sales projections seem too optimistic amid the EV slowdown and the aggressive push from Chinese players in the SiC market.

So, this bankruptcy news looks more like a bid to establish supply chain discipline, capital flexibility, and policy alignment. The recent changing of the guard at Wolfspeed, in which Gregg Lowe bowed out to make way for Robert Feurle, is most likely about setting the stage for this critical transition.

Figure 2 It’s probably no coincidence that Feurle’s appointment precedes the bankruptcy news. Source: Wolfspeed

It’s pretty ironic that Wolfspeed, then known as Cree, made a huge bet on LEDs at a time when the LED market was about to crash. Nearly two decades later, Wolfspeed decided to transform itself into a power electronics device company. Yole calls it an exciting story of business transition.

While the Wolfspeed bankruptcy filing is most likely coming within weeks, it’s important to put things in perspective. Wolfspeed is still a market leader in SiC materials and is ranked number four in SiC power devices. That said, SiC’s technology and cost challenges leave Wolfspeed with the gigantic task of a turnaround in a market that demands high CapEx for future development.


Power Tips # 141: Tips and tricks for achieving wide operating ranges with LLC resonant converters

Mon, 05/26/2025 - 15:53

Inductor-inductor-capacitor (LLC) resonant converters have several appealing characteristics for applications requiring an isolated DC/DC converter: minimal switching losses, no reverse recovery when operating below the resonant frequency, and the ability to tolerate large leakage inductance within the transformer.

The challenge

A primary challenge when designing an LLC converter with a wide operating range is the behavior of the gain curve with respect to equivalent load resistance: as the quality factor (Qe) increases, the maximum attainable gain decreases. Conversely, as Qe decreases, the minimum attainable gain increases. This is shown in Figure 1 below.

Figure 1 LLC gain curves showing that, as Qe increases, the maximum attainable gain decreases. Source: Texas Instruments

This behavior makes it difficult to maintain reasonable root-mean-square (RMS) currents in the power stage and a reasonable switching frequency range. The inductance ratio (Ln) needs to be reduced to reduce the required frequency range; however, a lower inductance ratio increases the magnetizing current in the power stage. This article will discuss five tips for designing an LLC converter with a wide operating range.
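The trend in Figure 1 falls out of the standard first harmonic approximation (FHA) of the LLC tank. Below is a minimal sketch in the common normalized form (fn = fsw/fr, Ln = Lm/Lr, Qe = sqrt(Lr/Cr)/Re); the parameter values are illustrative, not from any specific design.

```python
import math

def llc_gain(fn, Ln, Qe):
    """Normalized LLC voltage gain from the first harmonic approximation.
    fn = fsw/fr (normalized frequency), Ln = Lm/Lr, Qe = quality factor."""
    real = 1.0 + 1.0 / Ln - 1.0 / (Ln * fn * fn)
    imag = Qe * (fn - 1.0 / fn)
    return 1.0 / math.hypot(real, imag)

# Peak attainable gain shrinks as Qe (i.e., load) increases -- Figure 1's trend.
Ln = 5.0  # illustrative inductance ratio
for Qe in (0.2, 0.5, 1.0):
    peak = max(llc_gain(f / 1000.0, Ln, Qe) for f in range(200, 2000))
    print(f"Qe = {Qe:.1f}: peak gain ~ {peak:.2f}")
```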

Using a reconfigurable rectifier

One potential way to extend an LLC converter’s operating range is to implement a reconfigurable rectifier, as shown in Figure 2.

Figure 2 An LLC converter with a reconfigurable rectifier that can operate as either a full-bridge or a voltage-doubler rectifier. Source: Texas Instruments

In this structure, you can configure the rectifier as a full-bridge or a voltage-doubler rectifier by using a comparator that monitors the output voltage to decide the mode of operation. When operating as a full-bridge rectifier, the converter delivers its baseline input-to-output transfer function; when operating as a voltage-doubler rectifier, the output voltage doubles for the same transformer turns ratio and switching frequency.
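In a commonly used first-harmonic-approximation form — assuming a full-bridge primary stage with turns ratio Np:Ns and the normalized gain M(fn) from the sketch above, and not necessarily the exact expressions in the original article — the two transfer functions are:

Vout/Vin = (Ns/Np) × M(fn)   (full-bridge rectifier)

Vout/Vin = 2 × (Ns/Np) × M(fn)   (voltage-doubler rectifier)

That factor of 2 is what lets the converter cover a 3:1 output-voltage range without an excessive switching-frequency span.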

Figure 3 shows the switching frequency versus output voltage for an LLC using the above approach to achieve a 140-V to 420-V output voltage range from a fixed 450-V input. This data is collected with an 800-mA load on the output. Notice the jump at 200 V where the comparator switches from full-bridge to voltage-doubler mode.

Figure 3 Switching frequency versus output voltage in LED driver reference design. Source: Texas Instruments

Minimizing winding and rectifier capacitance

If the operating point drops below the minimum gain curve, the LLC controller is forced to operate in burst mode to keep the output voltage in regulation. Burst mode results in higher low-frequency output ripple voltage. This is a concern for applications requiring very low output ripple at light load and at the minimum output voltage.

In such cases, the winding capacitance within the transformer and the output capacitance (Coss) or junction capacitance (Cj) of the rectifiers must be minimized. These parasitic capacitances will cause the gain curve to invert when operating above the resonant frequency. Figure 4 shows the traditional first harmonic approximation (FHA) calculation of an LLC gain curve at light load, alongside the same gain curve when accounting for the winding capacitance and the Coss of the rectifiers used in the power stage.

Figure 4 The impact of parasitic capacitance on the LLC gain curve at light load. Source: Texas Instruments

Careful attention to the winding stackup within the transformer and to the selection of the rectifier components minimizes this gain-curve inversion effect. Using wide-bandgap devices such as SiC diodes or GaN high-electron-mobility transistors (HEMTs) as the rectifier can result in considerably lower Coss compared to Si MOSFETs or diodes.

Using LLC controllers with a high-frequency skip mode

A high-frequency skip mode can achieve a lower gain compared to what is achievable with normal switching. Below is an example from a 100-W half-bridge LLC converter with an input range of 70 V to 450 V. In Figure 5, the resonant current is shown in green, and the primary side switch node is shown in blue.

On the right side, the LLC converter is operating in a high-frequency skip mode, omitting every fourth switching cycle. The switching frequency is 260 kHz, but it is sub-modulated at a 77 kHz burst frequency. 

Figure 5 The 100-W LLC converter switching behavior at 70-V and 450-V inputs, with the resonant current in green and the primary-side switch node in blue. Source: Texas Instruments

Managing auxiliary bias voltages

Generating the necessary bias voltages for the primary and secondary sides of the power supply can be done by including auxiliary windings on the LLC transformer. For LLC converters with a variable output voltage, the auxiliary winding voltages will change as the output voltage changes. This is especially true for LLC transformers using sectioned bobbins where the auxiliary windings have poor coupling to the secondary windings. When using a simple low-dropout regulator (LDO) structure to regulate the bias voltage, the efficiency will drop as the output voltage increases. It may require a larger physical package to handle the power dissipation.

In Figure 6, Naux1 and Naux2 are sized so that, at the lowest output voltage, the VCC bias voltage is provided through D1, Q1, and D4. As the output voltage increases, the voltage on C2 is limited to the breakdown voltage of Zener D3 minus the gate-source threshold voltage of Q1. As the output voltage increases further, the voltage generated by Naux2 becomes high enough to supply VCC, and Q1 is forced off as its gate-source voltage decreases below the turn-off threshold.

Figure 6 Using auxiliary windings along with an LDO structure to generate the necessary bias voltages for the primary and secondary side of the power supply. Source: Texas Instruments

This approach is more efficient than a single winding + LDO but requires two aux windings. An alternative approach that requires only one aux winding is to use a buck converter or boost converter instead of an LDO.

Managing trickle-charging for deeply discharged batteries

LLC converters used as battery chargers must safely recover deeply discharged batteries by applying a small charging current until the battery pack voltage is high enough to safely take the full charging current. LLCs cannot regulate down to a 0-V output with a small output current and therefore struggle to meet this requirement.

This can be managed by including a small constant-current circuit with a bypass FET in parallel, as shown in Figure 7. When in trickle-charge mode, the bypass FET turns off, and the output current is supplied by an LM317 configured to regulate the output current. This keeps the minimum output voltage of the LLC stage itself above 0 V even when the battery-side output is at 0 V, which allows the LLC transformer to generate the necessary bias voltages on the primary and secondary sides and avoids the need for a separate bias supply when the output voltage is 0 V. Once the battery pack voltage has risen to a high-enough level, a FET with a discrete charge-pump circuit bypasses the constant-current circuit.

Figure 7 LLC with a trickle charging circuit that can safely recover deeply discharged batteries. Source: Texas Instruments

Wide LLC operation

While achieving a wide operating range with an LLC converter may look difficult due to the nature of the LLC topology, several strategies make a wide operating range easier to achieve. The five tips and tricks listed here are analog-control friendly and do not require more complex digital-control implementations.

Ben Lough is a Systems Engineer in Texas Instrument’s Power Design Services team focused on power factor correction and isolated DC/DC conversion. He received his MS in Electrical and Computer Engineering from Ohio State University in 2016 and BS in ECE from Ohio State in 2015.

 


Warranties: Inconsistent-requirements and -results policies

Fri, 05/23/2025 - 15:31
The smartphone case study

Back in late March, at the end of my coverage of Google’s Pixel 9a smartphone launch:

I mentioned that one of my Pixel 7 phones had started swelling, indicative of a failing battery:

I teased that my fortunate upfront purchase of an extended warranty for it ended up being fortuitous and promised that the “rest of the story” would follow shortly. That time is now, and in this piece, I’ll also contrast my most recent experience with an earlier, less-positive outcome, as a means of more broadly assessing the consumer electronics warranty topic.

First, the Pixel 7. Devices containing swollen batteries can quickly transform into dangerously flammable sources, so I immediately removed the smartphone from its charger and powered it down. I then reactivated my Pixel 6a backup phone, the same one I’d temporarily pressed into service around a year earlier when my other Pixel 7’s rear camera array’s glass cover spontaneously cracked, and swapped the SIM into it. And then, I jumped online with Asurion, reported the issue, paid a (bogus, IMHO) $138.68 “service fee”, was directed to a local repair location (Asurion had bought uBreakiFix in late 2019), dropped the swollen Pixel 7 off, and ~24 hours later had a gift card sitting in my account for the original purchase price!

Let’s do the math:

  • I bought the 128 GByte version of the phone in early June 2023 for $499, promotion-bundled (at the time) with a $100 Amazon gift card, for an effective price of $399.
  • For the next 20 months (Asurion also auto-refunded my most recent month’s payment, although I had to then manually cancel the overall policy through Amazon, where it was treated as a subscription), I’d been paying $7.83 per month inclusive of tax for extended warranty coverage…a bit irritating, as the phone was redundantly covered by Google’s standard warranty for the first 12 of those months, but…for $156.60 total.
  • I paid a $138.68 (once again, ridiculous, but…) “service fee” to process the warranty claim
  • And I ended up getting a $499 gift card.

If my arithmetic is right, I ended up using the phone for nearly 2 years for a total fiscal outlay of $195.28 (plus the cost of the replacement phone, which I’ll mention next). I’m a bit surprised, honestly, that Asurion didn’t just have uBreakiFix swap in a new battery and give it back to me. That said, the display or internals might have gotten stressed by the swelling, so it was likely more straightforward for them from a long-term customer retention standpoint to just give me my money back. And to be clear, considering the burgeoning market for refurbished phones and other consumer electronics devices, they probably went ahead and swapped the battery themselves and then, after running diagnostics on the phone to make sure everything else checked out, resold it on Amazon Renewed, eBay Refurbished, or elsewhere.
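Checking that arithmetic with the figures above (all numbers from this account):

```python
# Tally of the Pixel 7 warranty episode, using only the figures cited above.
purchase = 499.00        # phone price, June 2023
amazon_gc = 100.00       # promotion-bundled Amazon gift card
warranty = 7.83 * 20     # 20 months of extended-warranty premiums
service_fee = 138.68     # Asurion claim "service fee"
payout = 499.00          # gift card received after the claim

net_outlay = (purchase - amazon_gc) + warranty + service_fee - payout
print(f"${net_outlay:.2f}")  # -> $195.28, matching the figure above
```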

Speaking of which, eBay is where I ended up picking up my replacement smartphone. I could have gone with a newer-generation Pixel device (or something else, for that matter), but I already had a bunch of extra Pixel 7-tailored cases, screen protectors and such in storage. And, thanks to Google’s recently expanded five years of software coverage for the Pixel 7 (and my Pixel 6a spare, for that matter), it was now guaranteed to get OS and security updates until October 2027 (versus the original October 2025, i.e. a few months from now as you read these words). I ended up with an eBay Certified Refurbished 128 GByte Pixel 7 in claimed excellent condition, complete with a 1-year bundled warranty, for $198.95 plus tax.

And indeed, when it arrived, it was in excellent condition (reflective of the highly and abundantly rated supplier I’d intentionally, carefully selected), cosmetically at least. It appears to have had a case and screen protector on it for its entire ownership-to-date, both of which I immediately replicated. And functionally, it also seems to be fine, albeit with one characteristic that gave me initial pause. Check out the to-date battery recharge cycle count reported for it:

At first glance, that seemed like a lot, given that Google documents that the Pixel 7 “should retain up to 80% capacity for about 800 charge cycles,” after which battery replacement is “recommended,” and particularly given that my other Pixel 7 only has 40 to-date cycles on it:

But I’m an admittedly atypical case study. I work from home, where I also have VoIP, and rarely travel, so my smartphone usage is much lower than the norm. Conversely, given that the Pixel 7 first became available on October 13, 2022, 531 cycles is entirely plausible for a phone that was recharged most days. Going forward, now that it’s in my possession, this phone’s incremental-cycle cadence should dramatically decrease. And to further extend usable life, I’ve belatedly taken the extra step of limiting the peak charge point to 80% of total capacity on both Pixel 7s.

The soundbar case study

So, all good, right? Not exactly…there’s that other case study that I mentioned upfront I wanted to share. Two years back, I told you about my Hisense HS205 soundbar:

 

which I’d recently snagged on sale at Amazon for $34.99 to replace the BÖHM B2 precursor that wouldn’t accept beyond-Red Book Audio digital input streams:

Well…about six months after I bought it, and after very little use, it quit working. It still toggled among the various audio input sources using both the side panel buttons and the remote control:

but nothing came out of the speakers from any of them (and no, it wasn’t in “mute” mode). Given its low price and compact form factor, I assume that the power amplifier fed by all of those inputs via a preamp intermediary was based on inexpensive class D circuitry and had failed.

Good news: although it was beyond the one-month Amazon return period, it was still covered by the one-year factory warranty. Bad news: that warranty was “limited”. Translation: I was responsible for the cost and effort of return shipping to Hisense, including any loss or damage en route, which meant that I’d need to both package it in a bulky/heavily padded/more expensive fashion and pay for optional insurance on it. Further translated: it’d likely cost me as much, if not more, to ship the soundbar back to them as I’d paid for it originally. And I’d probably end up with an already-used replacement, with even more “limited” warranty terms.

Eventually, after I complained long and hard enough, Hisense’s customer support folks relented and emailed me a postpaid shipping label, followed by shipping me a seemingly brand-new replacement soundbar. Candidly, I suspect that although I always try to avoid such “media special treatment,” someone there did an Internet search on my name and figured out I was a “press guy” who should get “handled with kid gloves”. Would the average consumer have accomplished the same outcome, no matter how long and hard they complained? No. Which, again, is why I always strive to maintain anonymity. Sigh.

Similar experiences, good and/or bad? Other thoughts on what I’ve discussed? Sound off in the comments, please!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Hot swap basics: Controllers, schematics, and design examples

Fri, 05/23/2025 - 12:10

How does a hot swap circuit work? What’s the role of a hot swap controller? What are the basic design considerations for selecting a hot swap controller or module? Here is a short tutorial explaining the inner functioning of a hot swap device while outlining key design challenges. It also includes hot swap circuit schematics and design examples.

Read the full article at EDN’s sister publication, Planet Analog.


Platform helps secure in-vehicle connectivity

Thu, 05/22/2025 - 21:08

NXP’s OrangeBox 2.0 automotive connectivity domain controller features an upgraded CPU and embedded AI acceleration. This second-generation development platform facilitates secure connectivity between the vehicle’s gateway and its wired and wireless systems in domain- and zonal-based architectures.

Powered by the i.MX 94 applications processor, OrangeBox 2.0 delivers 4× the CPU performance of its predecessor. The processor integrates four Arm Cortex-A55 cores, two Cortex-M7 cores, two Cortex-M33 cores, and the NXP eIQ Neutron NPU. It also adds post-quantum cryptography acceleration along with enhanced AI, safety, and security capabilities. An integrated 2.5-Gbps Ethernet switch enables software-defined networking and supports the shift to software-defined vehicles (SDVs).

OrangeBox 2.0 builds on its predecessor with integrated NXP wireless technologies, including the SAF9100 for software-defined audio and the AW693 for concurrent Wi-Fi 6E and Bluetooth 5.4 to enable secure over-the-air updates. It supports smart car access via NXP’s latest BLE/UWB technology and an automotive-grade secure element.

The OrangeBox 2.0 automotive development platform is expected to be available in the second half of 2025.

OrangeBox 2.0 product page

NXP Semiconductors 


MCU enables neuromorphic processing at the edge

Thu, 05/22/2025 - 21:08

As Innatera’s first mass-market neuromorphic MCU, Pulsar delivers intelligence at the edge by emulating the brain’s neural networks. It uses Spiking Neural Networks that process only changes in input—enabling real-time decision making with significantly reduced energy and latency. According to Innatera, Pulsar achieves up to 100× lower latency and 500× lower energy consumption compared to conventional AI processors.

The Pulsar chip combines neuromorphic computing with conventional signal processing. In addition to its Spiking Neural Networks (SNNs), it integrates a RISC-V CPU and dedicated accelerators for Convolutional Neural Networks (CNNs) and Fast Fourier Transform (FFT). By processing data intelligently at the sensor level, Pulsar reduces reliance on power-hungry edge processors or cloud infrastructure for interpreting sensor input.
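To make the “process only changes in input” idea concrete, here is a generic leaky integrate-and-fire sketch. It illustrates event-driven spiking computation in general — it is not Innatera’s proprietary Pulsar architecture, and the constants are made up.

```python
# Generic leaky integrate-and-fire (LIF) neuron, event-driven style:
# between input events, no computation happens at all.
# Constants and structure are illustrative assumptions.
import math

TAU_MS = 20.0     # membrane time constant, ms (assumed)
THRESHOLD = 1.0   # firing threshold (assumed)

class LIFNeuron:
    def __init__(self):
        self.v = 0.0          # membrane potential
        self.last_t_ms = 0.0  # time of the last input event

    def on_spike(self, t_ms, weight):
        """Process a single input event; silence costs nothing."""
        # Decay the potential analytically over the silent interval.
        self.v *= math.exp(-(t_ms - self.last_t_ms) / TAU_MS)
        self.last_t_ms = t_ms
        self.v += weight
        if self.v >= THRESHOLD:
            self.v = 0.0      # reset after firing
            return True       # emit an output spike
        return False

n = LIFNeuron()
for t, w in [(1.0, 0.6), (3.0, 0.5), (60.0, 0.6)]:  # (time ms, weight)
    print(t, n.on_spike(t, w))  # fires at t=3.0; the t=60.0 event arrives
                                # after the potential has decayed away
```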

With sub-milliwatt power consumption, Pulsar enables always-on intelligence in power-constrained devices—from sub-millisecond gesture recognition in wearables to energy-efficient object detection in smart home systems. It provides real-time responsiveness with power budgets as low as 600 µW for radar-based presence detection and 400 µW for audio scene classification.

Pulsar is available now, supported by Innatera’s Talamo SDK for neuromorphic application development.

Pulsar product page

Innatera


PSU combines GaN and SiC for hyperscale AI

Thu, 05/22/2025 - 21:08

Navitas announced a production-ready 12-kW PSU reference design that achieves 97.8% efficiency for hyperscale AI data centers with 120-kW rack densities. The design incorporates three-phase interleaved TP-PFC and FB-LLC stages, implemented using Gen-3 Fast SiC MOSFETs and 4th-generation high-power GaNSafe ICs, respectively. The GaNSafe ICs integrate control, drive, sensing, and essential protection functions, while IntelliWeave digital control enhances overall performance.

IntelliWeave uses a hybrid strategy combining Critical Conduction Mode (CrCM) and Continuous Conduction Mode (CCM) to optimize efficiency from light to full load. This approach simplifies the design, reduces component count, and lowers power losses by 30% compared to conventional CCM-only solutions.

The PSU meets Open Rack v3 (ORv3) and Open Compute Project (OCP) standards, with dimensions of 790×73.5×40 mm. It operates from 180 VAC to 305 VAC and delivers up to 50 VDC, supplying 12 kW above 207 VAC and 10 kW below. Features include active current sharing and protection against overcurrent, overvoltage, undervoltage, and overtemperature. It operates from –5°C to +45°C, provides ≥20 ms hold-up time at 12 kW, and limits inrush current to ≤3× steady-state current for <20 ms. Cooling is provided by an internal fan.

For more information about the 12-kW PSU reference design, click here.

Navitas Semiconductor 


Toshiba shrinks SiC MOSFETs with DFN package

Thu, 05/22/2025 - 21:07

Toshiba has released four 650-V third-generation SiC MOSFETs in compact 8×8-mm DFN packages. The surface-mount DFN reduces volume by over 90% compared to leaded packages such as TO-247 (3-terminal) and TO-247-4L(X) (4-terminal). It also enables smaller parasitic impedance components, helping to lower switching losses.

The package’s flat, leadless design enables a Kelvin connection for the gate-drive signal-source terminal, minimizing source wire inductance. This improves switching speed and efficiency. For example, the TW054V65C achieves about 55% lower turn-on loss and 25% lower turn-off loss compared to Toshiba’s existing products.

Well-suited for industrial applications, the devices can be used for switch-mode power supplies, EV charging stations, and photovoltaic inverters.

Toshiba has begun volume shipments of the TW031V65C, TW054V65C, TW092V65C, and TW123V65C 650-V SiC MOSFETs in the 8×8-mm DFN package.

Toshiba Electronic Devices & Storage 


PCIe card provides FPGA-based data acceleration

Thu, 05/22/2025 - 21:07

Powered by the Achronix Speedster 7t1500 FPGA, the VectorPath 815 PCIe accelerator card meets the performance demands of AI and HPC workloads. Speedster FPGAs integrate machine learning processors to deliver a massively parallel architecture, customizable data paths, and efficient processing of sparse and irregular computations.

“The VectorPath 815 card delivers greater than 2000 tokens per second with 10-ms inter-token latency (LLAMA 3.1-8B Instruct) for unmatched generative AI inferencing performance — enabling customers to accelerate bandwidth-intensive, low-latency applications with a greater than 3× total cost of ownership (TCO) advantage vs. competitive GPU solutions,” said Jansher Ashraf, director of AI Solutions Business Development at Achronix.

The Speedster 7t1500 FPGA features 2560 machine learning processors, a 2D network-on-chip, 692k LUTs, and 32 SerDes lanes supporting PCIe Gen5 ×16 and dual 400G Ethernet. The VectorPath 815 card builds on this by integrating 32 GB of GDDR6 memory for 4-Tbps bandwidth, 16 GB of DDR4 memory, dual QSFP-DD ports, and a PCIe Gen5 interface.

VectorPath 815 accelerator cards are now in volume production. 

VectorPath 815 product page

Achronix Semiconductor


TEG energy harvesting: hype or hope?

Thu, 05/22/2025 - 16:31

I like to follow energy-harvesting research developments and actual installations, as there are many creative approaches and useful applications. In many cases, harvesting has solved a power-source problem effectively and with reasonable cost versus benefit.

At the same time, however, I see energy harvesting as often being oversold at best and overhyped at worst. There’s a real glow associated with the concept of getting something for (almost) nothing, when the harsh reality is that you may be getting very little energy at a much higher cost and complexity than what was promoted. That tradeoff may be acceptable if you are desperate or have no viable alternative, but often that is not the case.

Perhaps the strangest non-conventional harvesting scheme I saw was a specialized coating that could be applied as wall paint (see References 1 and 2). That coating used humidity in the air to harvest energy, with speculative projections that maybe you could power a house using this paint. Of course, beyond the obvious issues of physical-connection wiring, there was the near-trivial actual available output. The power density of 0.0001 to 0.05 W/m² was quite modest (to be polite) in both absolute and relative terms and certainly wouldn’t power your house, or even a smartphone.

Tailpipe TEG

A good example of a more practical harvesting arrangement is a recent thermoelectric generator (TEG) story I saw in the Wall Street Journal, of all places (Reference 3). A research team at Pennsylvania State University developed a TEG that fits into the exhaust tailpipe of an internal combustion engine (ICE) vehicle and uses the exhaust waste heat to generate up to about 40 watts (Figure 1).

Figure 1 (a) 3D schematic diagram of the TEG system. The geometry of the exhaust gas pipeline can vary. (b) Power (P) and (c) power density (ω) for automobile and high-speed object conditions. Source: Pennsylvania State University

While that’s enough to power or recharge a small electronic device, it’s a fairly modest amount of power in the context of the power of the engine of a car, small airplane, or helicopter. One of their claimed innovations in this implementation is that it is optimized to work better when there is cooling airflow around the moving tailpipe, yielding a larger temperature differential and thus greater output. The design has been modeled, a prototype built and tested, and the collected data is in line with the expectations (Reference 4).
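For a sense of where tens of watts comes from, here is a hedged back-of-envelope matched-load estimate. Every parameter below is an illustrative assumption, not a figure from the Penn State paper.

```python
# Back-of-envelope TEG output at matched load: P = (S_total * dT)^2 / (4 * R_int).
# All parameters are illustrative assumptions, not the published design's values.

S_COUPLE = 400e-6  # Seebeck coefficient per thermocouple, V/K (typical Bi2Te3)
N_COUPLES = 254    # thermocouples per module (a common commercial count)
N_MODULES = 2      # modules around the tailpipe (assumed)
R_MODULE = 1.5     # internal resistance per module, ohms (assumed)
DT = 100.0         # hot-to-cold junction temperature difference, K (assumed)

s_total = S_COUPLE * N_COUPLES * N_MODULES  # total Seebeck voltage per kelvin
r_int = R_MODULE * N_MODULES                # modules in series
v_oc = s_total * DT                         # open-circuit voltage
p_matched = v_oc**2 / (4.0 * r_int)         # max power, load matched to r_int

print(f"Voc = {v_oc:.1f} V, matched-load power = {p_matched:.1f} W")
# ~34 W with these assumptions -- the same ballpark as the reported ~40 W.
```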

So far, so good. But then the storyline goes into what I call extrapolation mode, as the “free energy” and “something for almost nothing” aspects start to overtake reality. How much does this harvester cost as a single unit, or perhaps as a mass-produced item? How long will it last in the tailpipe, which is a harsh environment? What’s the effect on engine exhaust flow and back pressure? What’s the developed energy density, by weight and volume?

The WSJ reporter covering this story seemed to be a non-technical journalist who basically repeated what the researchers said—which is certainly a valid starting point—but didn’t ask any follow-up questions. That’s the problem with most energy-harvesting stories, especially the free-heat TEG ones: they are so attractive and feel-good in concept that the realities of the design and installation are not brought up in polite conversation while the benefits are touted.

I’m not saying that this TEG harvesting scheme is of no value. It may, in fact, be useful in specific and well-defined situations. There are many examples of viable waste-heat recovery installations in industrial, commercial, and residential settings to prove that point. But as with all designs, there are hard and soft costs as well as short- and long-term implications that shouldn’t be ignored.

Small-scale TEG

There are also smaller-scale TEG-type harvesting success stories out there. For example, for many decades, gas-fired home water heaters used their own always-on pilot light (no longer allowed in many places due to energy mandates) to heat an array of thermocouples. This array then provided the power to actuate the gas valve, allowing the gas to ignite and heat the water in the tank (Figure 2).

If the pilot light was out for any reason, turning on the gas valve to heat that water would be extremely dangerous. However, the gas-heated thermocouple system is self-protecting and fail-safe: in the absence of that pilot light that ignites the gas and heats the thermocouples, there is no power to actuate the valve, thus the gas flow would be shut off. As an additional benefit, no electrical wiring of any type was needed by the water heater. It was a plumbing-only (water and gas) installation with no external electricity needed.

Figure 2 This schematic of a gas-fired water heater shows the bottom thermocouple assembly whose electrical output controls fail-safe actuation for the gas-flow valve. Source: All Trades Las Vegas

Harvesting hubbub

My sense is that harvesting gets so much favorable attention because it is so relatable and appears to offer no/low-cost benefits with little downside, at least at first glance. There’s little doubt that the multifaceted appeal of TEG and other energy-harvesting approaches attracts a lot of positive attention and media coverage, as this one did. That’s a big plus for these researchers as they look for that next grant.

Engineers know that reality is usually different. When it comes to generating, capturing, and using energy and power, the old cliché that “there’s no such thing as a free lunch” usually applies. The real question is the cost of that lunch.

Have you used TEG-based harvesting in any project? What were the expected and unexpected issues and benefits? Did you stick with it, or do you have to go with another approach?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


References

  1. Nature, “Generic Air-Gen Effect in Nanoporous Materials for Sustainable Energy Harvesting from Air Humidity”
  2. Nature, Supplement to “Generic Air-Gen Effect in Nanoporous Materials for Sustainable Energy Harvesting from Air Humidity”
  3. The Wall Street Journal, April 18, 2025, “The Heat Coming Out of Your Car’s Tailpipe? Some Can Be Turned Into Electricity”
  4. ACS Applied Materials & Interfaces, January 7, 2025, “Thermoelectric Energy Harvesting for Exhaust Waste Heat Recovery: A System Design” (behind a paywall, but also posted at ResearchGate)


Smartphone design: The confrontation of comfort and intention

Thu, 05/22/2025 - 11:02

The smartphone wasn’t designed to hijack our attention—it was designed to be indispensable. But somewhere along the way, it started doing both. While they’ve become essential for navigation, coordination, and connection, smartphones also pull us into behaviors that fragment our focus, drain our energy, and steal our ability to be present.

The numbers speak for themselves: in 2023, the average U.S. smartphone user checked their phone 144 times a day and spent four and a half hours on it, facing constant interruptions that pulled them back to the screen. Our devices are brilliant at capturing attention but terrible at respecting it.

We’re meant to be productive and focused, yet apps fight for our attention, suck us into endless scrolling, and make us lose track of how we want to spend our time. These behaviors don’t just steal moments—they shape habits and distance us from our goals. Why is it so hard to take back control?

One reason lies in how devices are designed. Many apps use gamification techniques like rewards, streaks, and variable reinforcement—borrowed from game design—to keep users engaged. These mechanisms exploit our psychology, turning tools into traps. What began as a way to enhance user experience now prioritizes screen time over well-being.

Meanwhile, the “default effect”—a cognitive bias that nudges us toward baseline settings—further complicates things. While smartphones offer customization features to streamline our consumption, these rarely overcome the powerful habits formed by defaults. Together, gamification and defaults create a cycle that keeps users engaged but not necessarily in control.

Figure 1 A mere minimalist approach, stripping away functionality, would also sacrifice digital creativity. Source: frog

Some minimalist devices and feature phones have attempted to solve this by stripping away functionality. While effective at reducing distractions, they often sacrifice the digital creature comforts—payment, health tracking, or photography—that make smartphones indispensable in modern life. The challenge isn’t to do less but to do better.

Principles for intentional interactions

To address this challenge, we envision a smartphone concept that fosters focus without sacrificing our digital creature comforts. This design reimagines apps as ephemeral: a core set of essential apps are selected to remain on the phone daily, while additional apps downloaded during the day disappear by evening, resetting the device to a clean slate every morning.
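A minimal sketch of that ephemeral-app mechanism follows; the app names and the launcher structure are illustrative assumptions, not a real product design.

```python
# Sketch of the "ephemeral apps" concept: a curated core set persists,
# while anything installed during the day vanishes at the morning reset.
# App names and structure are illustrative assumptions.

CORE_APPS = {"phone", "messages", "maps", "camera", "payments"}

class EphemeralLauncher:
    def __init__(self):
        self.installed = set(CORE_APPS)

    def install(self, app):
        """Apps added during the day are available immediately..."""
        self.installed.add(app)

    def morning_reset(self):
        """...but disappear overnight, restoring the clean slate."""
        self.installed = set(CORE_APPS)

launcher = EphemeralLauncher()
launcher.install("social_feed")             # impulse install during the day
launcher.morning_reset()
print("social_feed" in launcher.installed)  # False -- reinstalling tomorrow
                                            # is the intentional friction
```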

Figure 2 New guiding principles like curation, intentionality, and resistance can give smartphone users better control. Source: frog

This approach gives people an extra lever of control over their attention and is guided by three principles rooted in behavioral science: curation, intentionality, and resistance.

  1. Curation: Thoughtful defaults for pre-commitment

Digital devices often pull us into experiences we never intended, whether it’s social media, news, or games. A redesigned smartphone encourages users to pre-select a core set of essential apps that align with their intentions.

Inspired by Daniel Kahneman’s Thinking, Fast and Slow, this approach minimizes the impulsive, reactive “fast brain” and creates space for the deliberate, reflective “slow brain.” Pre-committing to essential apps helps users focus on what truly matters and avoid distractions.

  2. Intentionality: Reducing decision fatigue

Our brains make roughly 35,000 decisions a day, and the mental energy required quickly adds up. Even deciding not to engage with an app is a choice. By automatically removing non-essential apps overnight, a reimagined smartphone eliminates visual cues that drain willpower, keeping the daily experience focused on essentials.

  3. Resistance: Friction as a feature

Introducing friction can help users break habits formed by unconscious engagement. Instead of mindlessly launching an attention-grabbing app, requiring users to reinstall it creates a moment of reflection. Behavioral research shows that even small barriers can disrupt automatic behaviors and encourage intentional decision-making.

Rethinking technological rituals

Smartphones are both marvels of utility and sources of unintended consequences. By rethinking the principles that guide their design, we can create tools that serve users’ intentions rather than exploiting their attention.

Incorporating thoughtful defaults, reducing decision fatigue, and strategically introducing friction can empower users to reclaim control of their time and focus. The future of smartphones isn’t about limiting functionality but rather giving people agency—offering experiences that align with our priorities and values while respecting our boundaries.

Technology brands that design products to help their users focus, while reducing the mental load of distractions and excessive decisions, can deliver real value to their users in the form of greater mental clarity, better use of their time, and more rewarding engagement with their devices and the world around them.

Emma Brennan is senior interaction designer at frog, part of Capgemini Invent. She is a multi-disciplinary designer passionate about translating design research into products and services aligned with user needs and behaviors, spanning both physical and digital spaces.

Inna Lobel is head of industrial design at frog, part of Capgemini Invent. Her cross-disciplinary work spans a broad range of industry verticals and product types, including consumer products, breakthrough technologies, healthcare, climate tech and sustainable design.


A two-way mirror—current mirror that is

Wed, 05/21/2025 - 16:06

One classic set you’ll see in most vintage mystery movies is an interrogation room with a “two-way” mirror on one wall. This cool gadget lets the witnesses see the suspects from another room while the suspects can see only themselves. Which direction the two-way mirror works in is determined by which side has more light. The bright side is the suspect’s mirror. The dim side is the witness’s window.

So, what does that have to do with electronics?

This simple design idea describes a two-way current mirror that, in dubious analogy to the optical kind, can mirror or transmit according to whether the input or output side is more positive. It comprises just two BJTs and one diode and is highlighted yellow in Figure 1’s 555 triangle wave VCO.

Figure 1 D1, Q3, and Q4 comprise the two-way current mirror. It passively conducts Q2’s collector current to C1 via Q3’s base-collector when OUT versus C1 is negative, but becomes an active inverting unity-gain current source when OUT goes positive and reverses the polarity.

Wow the engineering world with your unique design: Design Ideas Submission Guide

 Here’s how it works.

Let’s call the D1 node the input, and the C1 node the output. When U1’s OUT pin is low, D1 and the mirror are reverse-biased. Now, Q2’s collector current I has nowhere to go except to forward-bias Q3’s base-collector junction, as shown in Figure 2. This connects Q2 to C1, so current I linearly ramps C1 negative.

Figure 2 When OUT goes low, Q4 is off while Q3 saturates and transmits negative current I to C1. Keep Vin < 1 V so Q2 won’t saturate.

When C1 descends to U1’s trigger voltage (1.33 V), OUT transitions positive. This swaps the polarity across the mirror, forward-biasing it and pulling the emitters of Q3 and Q4 positive relative to C1. Q3 and Q4 then assume the role of a normal active unity-gain current mirror. They now mirror an inverted, positive version of I to C1, making it ramp positive, as shown in Figure 3.

Figure 3 With OUT high, the mirror goes active and inverts I into a positive current to charge C1.

When C1 charges up to U1’s threshold (3.67 V), OUT snaps negative again and a new oscillation cycle begins, finishing output of a (theoretically) symmetrical isosceles triangle waveshape. Keep Vin < 1 V.
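Because both ramps run at the same current magnitude I, the oscillation frequency follows directly from the two 555 thresholds. A quick sketch — the article doesn’t give I or C1, so the values below are assumed examples:

```python
# Triangle VCO frequency for Figure 1: C1 ramps between U1's trigger and
# threshold voltages at +/- I. The I and C1 values are assumed examples.

V_TRIG = 1.33   # 555 trigger voltage, V (from the text)
V_TH = 3.67     # 555 threshold voltage, V (from the text)
I = 100e-6      # Q2 collector current, A (assumed; set by Vin)
C1 = 10e-9      # timing capacitor, F (assumed)

dv = V_TH - V_TRIG              # 2.34 V of ramp swing
t_half = C1 * dv / I            # duration of one ramp (up or down)
f_osc = 1.0 / (2.0 * t_half)    # full cycle = one up-ramp + one down-ramp

print(f"f = {f_osc / 1e3:.1f} kHz")  # ~2.1 kHz with these values
```

Since I is set by Vin (within the Vin < 1 V constraint), the frequency tracks the control voltage — hence the VCO behavior.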

Details, including strengths and weaknesses, of the classic two-transistor current mirror can be found in numerous electronics design references.

Of course, a complementary version of the two-way mirror could also be made with NPN transistors. If a current source replaces Q2’s current sink, it will work equally well.

You might be wondering: what’s D1 for? When OUT goes low, both transistor emitters are reverse-biased, so no current should flow through the input node anyway. Therefore, D1 is superfluous, right?

Well, no, it isn’t. The reason is summed up in the term “reverse beta.” It turns out that when you reverse-bias a BJT’s base-emitter junction and forward-bias its base-collector, significant current flow is possible—even likely—from collector to emitter. The associated current gain of this upside-down configuration is always way lower than normal beta, but it’s still more than leakage and definitely too significant to ignore. We need D1.

That wraps up the whodunit of the two-way current mirror. Thanks for reading. Sorry if it’s old news to you. It was new news to me when I thought of it, and I hope to show more applications for it in the near future.

 Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 


From system design to SiP: What’s the design flow

Wed, 05/21/2025 - 08:51

Like any successful system-on-chip (SoC) effort, a multi-die system-in-package (SiP) project must start with a sound system design. But then what? Are the steps in the SiP design flow different from the stages in an SoC design? What is necessary to ensure that a 2.5D or 3D SiP will be functionally correct, within the power, performance, and cost specifications, and that it will be manufacturable?

The easiest way to answer these questions is to describe the multi-die design process we have developed at Faraday through our participation in SiP designs with our clients.

Co-design from the start

Ideally, the SiP specialists will be involved in the design from the early stages. Even when the design is just a block-level sketch on a napkin, it’s not too early to begin discussing how the IP blocks will be distributed among the dies, and what the implications will be for the completed SiP (Figure 1).

Figure 1 Collaboration on a SiP design can begin with the customer’s selection of chiplets and continue through to a production-ready design. Source: Faraday Technology Corp.

A 2.5D or 3D design adds one or two additional levels to the interconnect hierarchy between the fast, efficient, and dense on-die routing and the slow, power-hungry, and sparse board-level routing. First, advanced packaging provides a silicon interposer for interconnecting the dies.

This level of interconnect is dense, although far less dense than the lower metal layers on a die, and it’s relatively energy-efficient, low-latency, and high-bandwidth, albeit not as good as on-die metal. Stacking dies—going 3D—adds another level to the hierarchy: direct connections between dies, through-silicon vias, and microbumps or hybrid bonding. This level is better than interposer connections, but still not equal to on-die connections.

The challenge in partitioning the multi-die design is that the limitations of each level of interconnect will impose themselves on whatever signals are routed through that level. Thus, choosing which signals to carry on which level will ultimately impact system power, performance, and area. Therefore, partitioning — deciding which IP blocks to put on which dies — is a key decision in this process.

Partitioning will determine which signal paths must be routed on which levels of the routing hierarchy. Thus, partitioning decisions will influence the ease, difficulty, or impossibility of routing and timing closure on each level. If critical signals are placed on an interconnect with insufficient bandwidth or excessive latency to meet requirements, they will impact system performance.

Additionally, they will affect system power, as interconnect power consumption is not inconsequential at the system level. For these reasons, the earlier the SiP-design experts engage with the system designers, the better the resulting design quality is likely to be.
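As a concrete, if toy, illustration of why partitioning dominates these numbers, one can score a candidate partition by the energy cost of the interconnect level each bus lands on. The pJ/bit figures below are rough assumed ballparks, not measured data, and the bus list is invented for the example.

```python
# Toy partition-evaluation sketch: score a candidate partitioning by the
# energy of the interconnect level each bus is routed on.
# The pJ/bit figures are rough illustrative assumptions, not measured data.

ENERGY_PJ_PER_BIT = {
    "on_die": 0.1,      # lower metal layers (assumed ballpark)
    "3d_bond": 0.3,     # TSV/microbump or hybrid bond (assumed)
    "interposer": 0.5,  # 2.5D silicon interposer (assumed)
    "board": 10.0,      # package balls plus PCB traces (assumed)
}

# Each bus: (name, traffic in Gbit/s, interconnect level it is routed on).
buses = [
    ("cpu_to_cache", 512, "on_die"),
    ("soc_to_hbm", 1024, "interposer"),
    ("soc_to_io_die", 128, "interposer"),
    ("logic_to_logic_stack", 256, "3d_bond"),
]

def interconnect_power_mw(buses):
    """Power = traffic (bits/s) x energy/bit; with Gbit/s and pJ/bit the
    unit factors cancel neatly to milliwatts (1e9 * 1e-12 * 1e3 = 1)."""
    return sum(gbps * ENERGY_PJ_PER_BIT[level] for _, gbps, level in buses)

print(f"interconnect power ~ {interconnect_power_mw(buses):.0f} mW")
# Repartitioning so a hot bus moves from "board" (10 pJ/bit) to
# "interposer" (0.5 pJ/bit) is exactly the win early co-design can capture.
```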

Chip and SiP

There are two distinct cases to consider here. In some SiP designs, all dies that will go into the SiP are already designed. The SiP team will then decide on die placement and, possibly, the order of dies in stacks, and will route the connections between dies. However, the partitioning of functions among the dies and the locations of individual pads on each die have already been fixed. This significantly reduces the SiP design planning problem and limits the SiP designers’ freedom.

In contrast, in some projects, one or more dies are designed specifically for the SiP. One of these new dies will often be an SoC, carrying much of the system functionality and serving as the hub for connections within the SiP. In these cases, much more optimization is possible if the die design and SiP teams work together. At the very least, the die and interposer designers can cooperate on the die pad location to ease the interposer layout.

Deeper cooperation can include optimizing the die floorplan to get the pads for critical interconnect buses in the best place to minimize interconnect length and congestion. Early cooperation may influence choices of protocols and transceivers for die-to-die connections or even reconsider the partitioning.

This added freedom is valuable. The relatively long latency, limited bandwidth, and higher power consumption of interposer and package-substrate interconnect can dominate SiP performance. Therefore, minor adjustments to a die layout that allow for significant improvements in SiP routing can result in substantial gains in system-level quality of results.

Interposer and package

The result of all this planning and co-design is a list of the exact location of each pad on each die and each ball on the package substrate, together with a routing list indicating what must connect to what. An additional vital dataset contains the signal-integrity and power-integrity requirements for each connection.

These latter specifications may come from interface standards such as Universal Chiplet Interconnect Express (UCIe), Bunch of Wires (BoW), or the High-Bandwidth Memory (HBM) channel specifications. Or they may be dictated by specific pin requirements on the dies.

Now, the 2.5D/3D team must design an interposer and package substrate that satisfies the connection, signal, and power-integrity requirements. The design should also minimize overall SiP cost and ensure manufacturability. Needless to say, this is an over-constrained optimization problem—it requires excellent tools and deep design experience to get the best result.

SiP analyses

Successful routing is not the end of this story. Before the SiP design can be released, each trace must be subjected to signal-integrity or power-integrity analysis using special analysis tools, sometimes at the detailed level of multiphysics tools. The SiP design team should provide the system designers with the SiP’s thermal and electrical characteristics for complete thermal analysis. Ideally based on actual circuit activity with production software, this analysis is often vital to ensuring the SiP’s reliability in its intended environment.

This design flow has proven successful at Faraday, emphasizing early engagement among system designers, die design teams, and SiP designers. The latter group must possess the skills and experience to recognize potential issues early, before there is sufficient data for complete analysis, when a partitioning choice, die placement, or pad location may cause trouble downstream.

The SiP team must have the skills and tools to design, optimize, and analyze the interposer and package substrate. As important as this is, the organization must have close relationships with foundry, assembly, and test partners to ensure the SiP will be manufacturable in its intended supply chain (Figure 2).

Figure 2 Close collaboration between design teams, silicon foundries, and OSAT partners is essential for the successful production of a multi-die device. Source: Faraday Technology Corp.

To return to our original question: yes, additional steps, skills, and relationships are necessary to ensure the success of an SiP. These needs make choosing a design partner one of the most critical decisions management will make on a SiP project.

Next, close cooperation between the design teams, the silicon foundry, and the OSAT partners is necessary to produce a successful multi-die device.

Wang-Jin Chen leads Faraday’s Design Development Methodology team, which focuses on methodology, design flow, verification, and sign-off for advanced package design.

 


Analyzing a lightning-zapped NAS

Tue, 05/20/2025 - 16:37

As introduced last October, the summer of 2024 was once again brutal from a lightning-induced electronics-culling standpoint at the Dipert household. I’ve already covered the hot tub control board that got zapped and documented the laundry list of other now-DOA devices:

Once again, several multi-port Ethernet switches (non-coincidentally on the ends of those exterior-attached network cable spans) got fried, along with a CableCard receiver and a MoCA transceiver (both associated with exterior-routing coax). My three-bay QNAP NAS also expired, presumably the result of its connection to one of the dead multi-port Ethernet switches. All this stuff will be (morbidly) showcased in teardowns to come.

Those teardown showcases start today with the last item on the list, the QNAP TS-328, which I’d purchased on sale for $169.99 back in January 2019 and fired up for service in September 2020. Here again to start are the stock photos I shared back in December 2020, when I editorially covered the TS-328 in detail for the first time (dimensions are 5.59” (H) × 5.91” (W) × 10.24” (D), and its net weight absent HDDs is 3.62 lbs.):

The NAS (network-attached storage device) is tipped over on its left side for “show” in that second photo, if not already obvious. In normal operation, it’s the HDDs that are on their sides.

Now for some real-life snapshots of today’s patient, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. Front (various status and activity LEDs along the left side, with the power and USB copy buttons, the latter with an integrated USB 3.0 connector for a tethered external storage device, below them):

(Bland) top:

(Even more bland, at least on the outside…hold that thought) left side:

Right side:

Back, with the fan dominating the landscape at upper left, a Kensington security lock site at lower left, the “full range” system speaker (vs the PCB buzzer, which you’ll see later) to its right, and (top-to-bottom along the right side) the recessed reset switch, an undocumented-details (TS-328 user manual is here) “maintenance port”, an audio line-out jack, two RJ45 Gigabit Ethernet ports, side-by-side USB 2.0 (left) and 3.0 (right) connectors, and the power input:

And last, but not least, bottom:

Before I forget, here’s the external power supply (which still seems to be fully functional):

Those three thumbscrews you may have noticed on the NAS’s back panel are our path inside:

Voila:

For orientation purposes, what you’re looking at here is the inside of the right-side shell, with the front of the NAS to the right in the photo:

And here are the now-exposed guts of the NAS, tipped over on its left side and looking down on the primary PCB mounted to the left side’s inside:

Here’s another perspective on the internal assembly, with the NAS still sideways:

And three more views, with the NAS back in its “normal” orientation. Top:

Bottom:

and front:

As it turns out, there are two PCBs in this design, the main one we’ve already caught a glimpse of, and another associated with the three SATA connectors at the back of the HDDs’ “cage” (I’ve briefly “jumped to the future” for the following shot, showing the “cage” already detached):

Back to the present moment. Detaching the “cage” involves removing seven screws, two each at the top and bottom:

And three more that normally mate with the back panel:

With them removed, all that’s left is to detach the two halves of the connection between the two PCBs:

and the “cage” (with PCB still attached) is now free:

Four more screws to go, to detach the PCB from the cage:

Mission accomplished:

Notice what looks like corrosion on this rectangular metal region?

Sorry for ruining the surprise, but this won’t be the last time you see it. I was unable to remove any of it with my fingernail, and trust me, the NAS was never exposed to moisture, so it’s not rust. I don’t know whether it’s an artifact of being lightning-zapped, or (my suspicion) just the outcome of long-term exposure to three heat-generating fast-spinning HDDs in a tiny enclosure.

The dominant IC at the bottom is unsurprising given the PCB’s function: an ASMedia ASM1062 PCIe-to-SATA bridge and SATA controller. That said, I’m still somewhat surprised, because the ASM1062 supposedly supports only “two ports of Serial ATA 6Gbps,” but there are obviously three SATA connectors (therefore three SATA ports) in this design. Ideas, readers?

In the ASM1062’s lower left corner is a Macronix 25L4006E 4-Mbit serial interface (SPI, to be precise) NOR flash memory. Given the 25L4006E’s low capacity, not to mention its location on the other side of a PCIe interface from the host CPU (whose heatsink you may have already glimpsed in a previous photo), I’m assuming that it only houses the firmware for the ASM1062, not the entire system. And no, it isn’t an NVM cache for the HDDs’ contents, either…

The other side of this PCB is comparatively unmemorable apart from a whole lot more of what looks like corrosion. Given that this side is more directly exposed to the heat coming off the HDDs, coupled with the fact that the HDDs remained fully functional after the NAS’s demise, my working theory as to the discoloration’s root cause (high temperatures) is seemingly bolstered.

Now for the main system PCB (look, more corrosion!):

Before diving in, here are some close-ups of the front while the PCB is still installed, showing the light pipes from the LEDs to the front panel, along with the USB3 port and switch assemblies:

In that earlier overview photo, you might have glimpsed five red marker-augmented (presumably to tip off the company to warranty-voiding owner removal) screw heads. Unfortunately, removing them didn’t enable the PCB to budge. But then I noticed a bulge in the warranty sticker in one corner:

Sneaky, QNAP. Very sneaky!

Removing the sixth screw hidden beneath it further loosened the PCB, but I still couldn’t get it to fully detach from the metal bracket surrounding it, so I removed those four bracket screws too:

Getting closer:

And finally, after disconnecting it from the zip tie-clustered multi-harness morass above it (which in retrospect may have been what kept it stuck in place after the initial six-screw removal process):

The PCB was finally free:

Before diving in, a brief diversion; let’s look more closely at the inside of the back panel, rotated 90° from its “normal” position in the images that follow. That’s the system fan to the left (duh) and the mounting bracket for the speaker to the right:

Remove the two screws that normally hold the mounting bracket in place:

And there’s your transducer!

Back to the PCB. There was, you may have already noticed from the earlier overview image, nothing of particular note on the backside…unless you’re into solder points (or corrosion patches), that is. The front side, however, was more interesting. Here’s the far-right end:

Top-to-bottom along the right edge, again, are the recessed reset switch, the mysterious “maintenance port”, an audio line-out jack, two RJ45 Gigabit Ethernet ports, side-by-side USB 2.0 and 3.0 connectors, and the power input. At the bottom left is the black connector, which originally mated with the SATA PCB. And the right-most two off-white ones above it go to the speaker (two-pin) and fan (four-pin). Curiously, as you may have already noticed, the other four-pin connector, directly above the upper right corner of the Faraday cage, seems to be unused in this particular system design.

Speaking of the Faraday cage:

There’s another seemingly unused connector above it, eight-pin and black in color. And the IC in its upper left corner is where, I believe, the primary system firmware resides; it’s a Toshiba (now Kioxia) THGBMNG5D1LBAIL 4 GByte eMMC NAND flash memory module.

Shifting over once more to the left…

At far left is the earlier alluded-to PCB “buzzer”. Above it is a Weltrend Semiconductor WT61P803, which appears to be a microcontroller optimized for power management. Above that is another unpopulated four-pin connector. To the right of the buzzer is the RTC (aka, CMOS) battery (which, by the way, I confirmed post-NAS failure was still functional but swapped anyway, not that it helped…sometimes a dead battery can thwart a successful system start).

Let’s get that heatsink off, shall we? Needle-nose pliers did the trick:

The system CPU, a Realtek RTD1296 based on a 64-bit Arm Cortex-A53 four-core 1.4 GHz cluster, is now exposed to view.

And under the remainder of the Faraday cage:

are four Samsung K4A4G165WE-BCRC 4-Gbit DDR4-2400 SDRAMs, together comprising the NAS’s 2 GBytes of (non-expandable, obviously) system volatile memory:

I’ll close with a couple of PCB end shots:

and a premise, attempting to answer the fundamental question, “What caused the NAS to fail?” As I’ve mentioned in past coverage of the QNAP TS-328, this NAS doesn’t have a particularly stellar long-term reliability record; see, for example, this Reddit thread or this one, both of which are reminiscent of what I experienced. So, it may have just coincidentally chosen that point in time to expire, driven by long-term heat-induced failure of some component on the PCB, for example. But the chronological proximity to last summer’s lightning debacle is hard to ignore, given that it’d been reliably running for a few weeks shy of four years at that point.

I don’t think the DC power input was the failure point, as the PSU still seems fully functional (as I mentioned earlier). The only other thing physically connected to the NAS was its upper Gigabit Ethernet port; I’d wager that was the Achilles’ heel instead. Subsequent non-operation characteristics (a brief twitch of the system fan on each powerup attempt, for example) remind me of past experiences with a PC whose CPU has gone awry. Fundamentally, after all, what is a NAS but a tailored-function, Arm- and Linux-based (in this case) computer? Although I’m unable to find a detailed datasheet for the Realtek RTD1296 online, the overview information I’ve dug up makes repeated mention of dual-port Gigabit Ethernet support, suggesting that, at minimum, the RTD1296 integrates the MAC, thereby providing the requisite failure path.

Agree or disagree with my premise? Anything else that jumps out at you from my dissection? Sound off with your thoughts in the comments!

Related Content

The post Analyzing a lightning-zapped NAS appeared first on EDN.

Boosting RISC-V SoC performance for AI and ML applications

Tue, 05/20/2025 - 09:44

Today’s system-on-chip (SoC) designs integrate unprecedented numbers of diverse IP cores, from general-purpose CPUs to specialized hardware accelerators, including neural processing units (NPUs), tensor processors, and data processing units (DPUs). This heterogeneous approach enables designers to optimize performance, power efficiency, and cost. However, it also increases the complexity of on-chip communication, synchronization, and interoperability.

At the same time, the open and configurable RISC-V instruction set architecture (ISA) is experiencing rapid adoption across diverse markets. This growth aligns with rising SoC complexity and the widespread integration of artificial intelligence (AI), as illustrated in the figure below. Nearly half of global silicon projects now incorporate AI or machine learning (ML), spanning automotive, mobile, data center, and Internet of Things (IoT) applications. This rapid RISC-V evolution is placing increasing demands on the underlying hardware infrastructure.

The above graph shows projected growth of RISC-V-enabled SoC market share and unit shipments.

NoCs for heterogeneous SoCs

A key challenge in AI-centric SoCs is ensuring efficient communication among IP blocks from different vendors. These designs often integrate cores from various architectures, such as RISC-V CPUs, Arm processors, DPUs, and AI accelerators, which adds to the complexity of on-chip interaction. So, compatibility with a range of communication protocols, such as Arm ACE and CHI, as well as emerging RISC-V interfaces like CHI-B, is critical.

The distinction between coherent networks-on-chip (NoCs), primarily used for CPUs that require synchronized data caches, and non-coherent NoCs, typically utilized for AI accelerators, must also be carefully managed. Effectively handling both types of NoCs enables the design of flexible, high-performance systems.

NoC architectures address interoperability and scalability. This technology delivers flexible interconnectivity, seamlessly integrating the expanding variety and number of IP cores. Approximately 10% to 13% of a chip’s silicon area is typically dedicated to interconnect logic. Here, NoCs serve as the backbone infrastructure of modern SoCs, enabling efficient data flow, low latency, and flexible routing between diverse processing elements.

Advanced techniques for AI performance

The rapid rise of generative AI and large language models (LLMs) has further intensified interconnect demands; some models now exceed a trillion parameters, significantly increasing on-chip data bandwidth requirements. Conventional bus architectures can no longer efficiently manage these massive data flows.

Designers are now implementing advanced techniques like data interleaving, multicast communication, and multiline reorder buffers. These methods enable widened data buses with thousands of bits for sustained high-throughput and low-latency communication.

In addition to addressing bandwidth demands, new architectural approaches optimize system performance. One technique is AI tiling, where multiple smaller compute units or tiles are interconnected to form scalable compute clusters.

These architectures allow designers to scale CPU or AI-specific processing clusters from dozens to thousands of cores. The NoC infrastructure manages data movement and communication among these tiles, ensuring maximum performance and efficiency.
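To make tile-to-tile communication concrete, here is a minimal C sketch of dimension-order (XY) routing, one common scheme a mesh NoC can use to steer a packet between tiles. The types and function names are illustrative rather than taken from any particular NoC product.

#include <stdio.h>
#include <stdlib.h>

/* Tile coordinates on a 2D mesh NoC (illustrative types, not from any product) */
typedef struct { int x, y; } tile_t;

typedef enum { DIR_LOCAL, DIR_EAST, DIR_WEST, DIR_NORTH, DIR_SOUTH } dir_t;

/* Dimension-order (XY) routing: move along X until the column matches,
 * then along Y. Deterministic and deadlock-free on a mesh. */
dir_t xy_route(tile_t cur, tile_t dst) {
    if (cur.x < dst.x) return DIR_EAST;
    if (cur.x > dst.x) return DIR_WEST;
    if (cur.y < dst.y) return DIR_NORTH;
    if (cur.y > dst.y) return DIR_SOUTH;
    return DIR_LOCAL;  /* arrived: deliver to the local tile */
}

/* Hop count gives a first-order latency estimate between two tiles */
int xy_hops(tile_t a, tile_t b) {
    return abs(a.x - b.x) + abs(a.y - b.y);
}

int main(void) {
    tile_t cpu = {0, 0}, npu = {3, 2};
    printf("Hops from CPU tile to NPU tile: %d\n", xy_hops(cpu, npu));
    return 0;
}

Deterministic XY routing is attractive in tiled designs because each router's decision logic is trivial, though production NoC fabrics layer virtual channels, quality-of-service arbitration, and adaptive routing on top of such a baseline.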

Beyond tiling, physical and back-end design challenges intensify at advanced nodes. Below 10 nanometers, routing and layout constraints significantly impact chip performance, power consumption, and reliability. Physically aware NoCs optimize placement and timing for successful silicon realization. Early consideration of these physical factors minimizes silicon respin risk and supports efficiency goals in AI applications at 5 nm and 3 nm.

Reliability and flexibility

Hardware-software integration, including RISC-V register management and memory mapping, streamlines validation, reduces software overhead, and boosts system reliability. This approach manages coherent design complexity, meeting performance and safety standards.

Next, safety certifications have become paramount as RISC-V-based designs enter safety-critical domains such as autonomous automotive systems. Interconnect solutions must deliver high-bandwidth, low-latency communication while meeting rigorous safety standards such as ISO 26262 up to ASIL D. Certified NoC architectures incorporate fault-tolerant features to enable reliability in AI platforms.

Modularity and interoperability across vendors and interfaces have also become essential to keep pace with the dynamic demands of AI-driven RISC-V systems. Many real-world designs no longer follow a monolithic approach.

Instead, they evolve over multiple iterations and often replace processing subsystems mid-development to improve efficiency or time to market. Such flexibility is achievable when the interconnect fabric supports diverse protocols, topologies, and evolving standards.

Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.


Related Content

The post Boosting RISC-V SoC performance for AI and ML applications appeared first on EDN.

Adaptive resolution for ADCs

Mon, 05/19/2025 - 16:26

Impact of ADC resolution and its reference

Engineers always want to get all they can from their circuits; this holds for analog-to-digital converters (ADCs). To maximize performance from an ADC, resolution is perhaps the primary spec to focus on. However, the sad fact is that we have defined the maximum single-read resolution once we pick the ADC and its reference. For example, let’s say we use a 10-bit ADC with a 5-V reference. This means that our reading resolution is 5 V / 1023, or 4.888 mV per digital step. But what if we had this ADC system and had to apply it to a sensor that had an output of 0 to 1 V? The ADC resolution stays at 4.888 mV, but that means there are only 1 V / 4.888 mV, or ~205, usable steps, so in essence, we have lowered the sensor’s resolution to 1 part in 205.

What if we were designing a device to measure the voltage across an inductor when a DC step voltage is applied through a series resistor? You can see in the curve below (Figure 1) that in the first couple of seconds we would probably get decent data-point readings, but after that, many of the data points would have the same value since the slope is shallow. In other words, the relative error would be high.

Figure 1 Sample inductor voltage versus time curve for a circuit measuring the voltage across an inductor when a DC step voltage is applied through a series resistor. Note the flat slope after 3 seconds, which will increase the relative error of the measurement.

Wow the engineering world with your unique design: Design Ideas Submission Guide

There are two basic ways to change this:

  • Change the ADC to one with more bits (such as 12, 14, or 16 bits) or,
  • Change the ADC’s reference voltage.

(There are also more exotic ways to change resolution, such as using delta-sigma converters.) Changing the number of bits would mean an external ADC or a different microcontroller. So, what if we designed a system with an adjustable reference? This resolution could change automatically as needed—let’s call this adaptive resolution.

Adaptive resolution demo

Let’s look at an easy method first. The Microchip ATMEGA328P has three settings for the internal ADC reference: the Vcc voltage, an internal 1.1-V reference, and an external reference pin (REF). So, for demonstration, the simplest setup is to use an Arduino Nano, which uses the ATMEGA328P.

The demonstration uses one of the analog inputs, which connects to the 10-bit ADC, to measure a voltage or sensor output. The trick here is to set the reference to Vcc (+5 V in this design) and take an ADC reading of the analog input.

If the reading, after being converted to a voltage, is greater than 1.1 V, use that value as your measurement. If it is not greater than 1.1 V, change the reference to the internal 1.1-V band-gap voltage and retake the reading. Now, assuming your sensor or measured voltage is slow-moving relative to your sample rate, you have a higher-resolution reading than you would have had with the 5-V reference.

Referring to our inductor example, Figure 2 illustrates how this adaptive resolution will change as the voltage drops.

Figure 2 Change in adaptive resolution using the Microchip ATMEGA328P’s internal ADC with either reference Vcc voltage of 5 V or internal reference of 1.1 V.

Adaptive resolution code

The following is a piece of C code to demonstrate the concept of adaptive resolution.

[An aside: As a test, I used Microsoft Copilot AI to write the basic code, and it did a surprisingly good job with good variable names, comments, and offered a clean layout. It also converted the ADC digital to the analog voltage correctly. As I was trying to get Copilot to add some logic changes, it got messier, so at that point, I hand-coded the modifications and cleanup.]

// Define the analog input pin
const int analogPin = A0;

// Variables to store the reference voltages (in mV)
const float referenceVoltage5V = 4753.0;   // Enter the actual mV value here
const float referenceVoltage1p1V = 1099.0; // Enter the actual mV value here

// Type and variable to track reference state
enum ReferenceState {Ref_5, Ref_1p1};
ReferenceState reference = Ref_5;

void setup() {
  // Initialize serial communication at 9600 bits per second
  Serial.begin(9600);
  // Set the analog reference to 5V (default)
  analogReference(DEFAULT);
  reference = Ref_5; // Set reference state
}

void loop() {
  int sensorValue = 0;
  int junk = 0;
  float voltage = 0;

  sensorValue = analogRead(analogPin); // Take a reading using the current reference

  if (reference == Ref_5) {
    voltage = (sensorValue / 1023.0) * referenceVoltage5V; // Convert reading
    if (voltage < 1100) { // Check if the voltage is less than 1.1V
      // Change the reference to 1.1V and take a new reading
      analogReference(INTERNAL); // Set the analog reference to 1.1V (internal)
      reference = Ref_1p1;       // Set reference state
      junk = analogRead(analogPin);        // Take a throw-away read after the reference change
      sensorValue = analogRead(analogPin); // Take a new reading using the 1.1V reference
      voltage = (sensorValue / 1023.0) * referenceVoltage1p1V; // Convert reading
    }
  } else { // Reference is currently set to 1.1V
    voltage = (sensorValue / 1023.0) * referenceVoltage1p1V; // Convert reading
  }

  if (sensorValue == 1023) { // Check if the ADC is at maximum (>= 1.1V)
    // Voltage is 1.1 volts or greater, so change to the 5V reference and reread
    analogReference(DEFAULT); // Set the analog reference to 5V (default)
    reference = Ref_5;        // Set reference state
    junk = analogRead(analogPin);        // Take a throw-away read after the reference change
    sensorValue = analogRead(analogPin); // Take a reading using the 5V reference
    voltage = (sensorValue / 1023.0) * referenceVoltage5V; // Convert reading
  }

  // Print the analog value and voltage to the serial monitor
  if (reference == Ref_5)
    Serial.print("Analog value with 5V reference: ");
  else
    Serial.print("Analog value with 1.1V reference: ");
  Serial.print(sensorValue);
  Serial.print(", Voltage: ");
  Serial.println(voltage / 1000, 4);

  // Delay for a moment before the next loop
  delay(1000);
}

This code continuously reads the ADC connected to analog pin A0. It starts by using Vcc (~5 V) as the ADC reference. If the reading is less than 1.1 V, the reference is switched to the internal 1.1-V band gap. That reference remains in use until the ADC returns its maximum binary value of 1023, which means the A0 voltage must be 1.1 V or greater; in that case, the reference is switched back to Vcc. After taking a valid voltage reading, the code prints the reference used along with the reading.

The 5 V and 1.1 V references must be calibrated before use to get accurate readings. This should be done using a reasonably good voltmeter (I used a calibrated 5½ digit meter). These measured voltages can then be entered into the code.

Note that towards the top of the code, the 5-V reference voltage variable (“referenceVoltage5V”) is set to the actual voltage as measured on the REF pin of the Arduino Nano, when the input on A0 is greater than 1.1 V. The 1.1-V reference voltage variable (“referenceVoltage1p1V”) should also be set by measuring the voltage on the REF pin when the A0 voltage is less than 1.1 V. Figure 3 below illustrates this concept.

Figure 3 This code continuously reads the ADC connected to analog pin A0. If A0 voltage < 1.1 V, the ADC reference is switched to 1.1 V. If A0 > 1.1 V, the ADC reference is switched to Vcc.

Relative error between 1.1 V and 5 V references

A few pieces of data showing the improvement from this adaptive resolution: Around 1.1 V, the 5-V-referenced reading can have a resolution error of up to 0.41%, while the 1.1-V-referenced reading can have up to a 0.10% error. At 100 mV, a reading that references 5 V could have up to a 4.6% error, while a 1.1-V-referenced reading may have up to a 1.1% error. When we reach a 10-mV input signal, the 5-V reference may err by 46%, while the 1.1-V-referenced reading will be off by 10.7% or less.
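These figures are simply one ADC step (1 LSB), computed from the calibrated reference value, divided by the input voltage. The short C program below reproduces them to within rounding, using the same calibrated reference values as in the sketch above:

#include <stdio.h>

/* Worst-case relative error of one ADC step (1 LSB) at a given input,
 * using the calibrated reference values from the sketch above (in mV). */
static double lsb_error_pct(double vref_mv, double vin_mv) {
    double lsb = vref_mv / 1023.0; /* 10-bit ADC step size */
    return 100.0 * lsb / vin_mv;
}

int main(void) {
    double inputs[] = {1100.0, 100.0, 10.0}; /* test inputs, mV */
    for (int i = 0; i < 3; i++)
        printf("%6.0f mV: 5V ref %5.2f%%, 1.1V ref %5.2f%%\n",
               inputs[i],
               lsb_error_pct(4753.0, inputs[i]),
               lsb_error_pct(1099.0, inputs[i]));
    return 0;
}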

Expanding reference levels

If needed, this concept could be expanded to add more levels of reference, although I wouldn’t go beyond 3 or 4 levels on a 10-bit ADC due to diminishing returns. The following are a few examples of how this could be done.

External DAC method

The first uses a DAC whose output is tied to the Nano’s REF pin. The DAC, controlled by the Nano, could then be adjusted to give any number of reference values. An example of such a DAC is the I2C-controlled MAX5362 (although its internal reference is 0.9xVcc, so the max reading would be roughly 4.5 V). In this design, the Nano’s analog reference would be set to “EXTERNAL.” See Figure 4 below for more clarity.

Figure 4 Using an external DAC (MAX5362) controlled by the Arduino Nano to provide more reference levels.
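A minimal Arduino sketch of the control side follows. The DAC’s I2C address is a placeholder, and the single-byte, 6-bit code format is my assumption from reading about the MAX5362 family; verify both against the datasheet before use.

#include <Wire.h>

const uint8_t DAC_ADDR = 0x30; // Placeholder 7-bit I2C address; check the MAX5362 datasheet

// Write a new code to the DAC, whose output drives the Nano's REF pin
void setReferenceDAC(uint8_t code) {
  Wire.beginTransmission(DAC_ADDR);
  Wire.write(code & 0x3F);       // Assumed 6-bit code (0-63); full scale ~0.9 x Vcc
  Wire.endTransmission();
  analogRead(A0);                // Throw-away read after the reference change
}

void setup() {
  Wire.begin();
  analogReference(EXTERNAL);     // ADC reference now comes from the REF pin
  setReferenceDAC(63);           // Start at the DAC's full-scale output (~4.5 V)
}

void loop() {
  // Pick the highest reference that still exceeds the measured input,
  // using the same compare-and-switch logic as the two-level code above
}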

Nano’s PWM output method

Another way to create multiple reference voltages could be by using the Arduino Nano’s PWM output. This would require using a high-frequency PWM and very good filtering to obtain a nice flat DC signal, which is proportional to the 5-V reference value. You would want a ripple voltage of about 1 mV (-74 dB down) or less to get a clean, usable output. The outputs would also need to be measured to calibrate it in the code. This technique would require minimal parts but would give you many different levels of reference voltages to use. Figure 5 shows a block diagram of this concept.

Figure 5 Using the Arduino Nano’s PWM output and a lowpass filter to obtain the desired DC signal to use as a voltage reference.
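Here is a minimal sketch of the digital side, assuming the filtered signal from pin 9 (Timer1’s OC1A output) drives the REF pin; the external RC filter and the settling delay shown are design-dependent. Timer1 is configured for 8-bit fast PWM with no prescaler, which yields a roughly 62.5-kHz carrier on a 16-MHz Nano and makes the filtering job far easier than the default ~490-Hz analogWrite() output would.

const int pwmPin = 9; // OC1A output; its filtered DC level drives the REF pin

void setup() {
  pinMode(pwmPin, OUTPUT);
  // Timer1: 8-bit fast PWM, non-inverting on OC1A, no prescale
  // PWM frequency = 16 MHz / 256 = 62.5 kHz
  TCCR1A = _BV(COM1A1) | _BV(WGM10);
  TCCR1B = _BV(WGM12) | _BV(CS10);
  analogReference(EXTERNAL);  // ADC reference comes from the filtered PWM
}

// Set the reference as a fraction of 5 V: roughly duty/255 * 5 V (calibrate in practice)
void setReferencePWM(uint8_t duty) {
  OCR1A = duty;
  delay(50);        // Allow the RC filter to settle (depends on filter design)
  analogRead(A0);   // Throw-away read after the reference change
}

void loop() {
}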

Resistor ladder method

Another possibility for an adjustable reference is using a resistor ladder and an analog switch to select different nodes in the ladder. Something like the TI TMUX1204 may be appropriate for this concept. The resistor ladder values can be selected to meet your reference requirements. Figure 6 shows that two digital outputs from the Nano are also used to select the appropriate position in the resistor ladder.

Figure 6 Using a resistor ladder and an analog switch, e.g., TI TMUX1204, to select different nodes on the ladder to generate tailored voltage reference values.
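The selection logic is then just a lookup table plus two digital outputs. The pin assignments and tap voltages below are illustrative, and each tap must be calibrated as described earlier.

const int selPinA0 = 2; // to TMUX1204 select input A0 (illustrative pin choice)
const int selPinA1 = 3; // to TMUX1204 select input A1

// Measured voltage (mV) at each ladder tap; calibrate with a good voltmeter
const float refTable[4] = {5000.0, 2500.0, 1100.0, 500.0};

float currentRef = refTable[0];

void selectReference(uint8_t channel) { // channel: 0..3
  digitalWrite(selPinA0, channel & 0x01);
  digitalWrite(selPinA1, (channel >> 1) & 0x01);
  currentRef = refTable[channel];
  analogRead(A0);                       // Throw-away read after the change
}

void setup() {
  pinMode(selPinA0, OUTPUT);
  pinMode(selPinA1, OUTPUT);
  analogReference(EXTERNAL);            // REF pin is driven by the mux output
  selectReference(0);
}

void loop() {
  float voltage = (analogRead(A0) / 1023.0) * currentRef; // Result in mV
}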

You get the idea

There are other ways to construct the reference voltages, but you get the idea. The bigger picture here is using multiple references to improve the resolution of your voltage readings. This may be a solution looking for a problem, but isn’t that what engineers do—match up problems with solutions?

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.

Related Content

The post Adaptive resolution for ADCs appeared first on EDN.

What you need to know about firmware security in chips

Mon, 05/19/2025 - 10:55

The rapid advancement of semiconductor technologies has transformed industries across the globe, from data centers to consumer devices, and even critical infrastructure. With the ever-growing reliance on interconnected devices, robust security systems are paramount. Among the unsung heroes in ensuring this security are the firmware systems that power these devices, particularly security firmware embedded within semiconductor components.

Secure firmware, especially in devices like self-encrypting drives (SEDs), is crucial in safeguarding sensitive data. As data breaches and cyberattacks become more sophisticated, ensuring that the foundation of technology—semiconductors—remains secure is critical. The secure firmware embedded in these systems enables the encryption and decryption of data in real time, ensuring that sensitive information remains protected without compromising performance.

Figure 1 SEDs provide hardware-based encryption for robust data protection. Source: Micron

While often invisible to the end user, this technology has a far-reaching impact. It secures everything from financial transactions to personal health data, laying the groundwork for secure, scalable, and efficient systems that are vital for industries in both the public and private sectors. In this context, the evolution of secure firmware can be seen as an essential pillar of digital safety, contributing to national and global security priorities.

Role of security protocols and standards

In the world of secure firmware, several protocols and standards ensure that systems remain resilient against evolving threats. These include advanced cryptographic algorithms, trusted platform modules (TPMs), and the implementation of standards set by organizations like the National Institute of Standards and Technology (NIST) and the Trusted Computing Group (TCG). These technical frameworks serve to safeguard sensitive data and build trusted systems from the hardware level up.

  1. Security Protocol and Data Model (SPDM)

SPDM is a protocol developed by the Distributed Management Task Force (DMTF) to provide a standardized framework for secure communication between devices, especially in scenarios involving trusted hardware such as TPMs and secure boot mechanisms.

Figure 2 The security standard enables system hardware components such as PCIe cards to have their identity authenticated and their integrity verified. Source: OCP Security

It ensures secure data exchange by supporting authentication, integrity checks, and confidentiality for devices in a distributed environment. By embedding SPDM into security firmware, semiconductor systems can ensure end-to-end security from device initialization to secure communication with other networked devices.
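For a feel of what this looks like on the wire, below is a rough C sketch of the common four-byte SPDM message header and the GET_VERSION request that begins an exchange, based on DMTF’s published DSP0274 layout; the transport function is a stand-in, since SPDM can ride over MCTP, PCIe Data Object Exchange, and other bindings.

#include <stdint.h>
#include <stdio.h>

/* Common 4-byte SPDM message header (per DMTF DSP0274) */
typedef struct {
    uint8_t spdm_version; /* 0x10 = SPDM 1.0 */
    uint8_t request_code; /* 0x84 = GET_VERSION request */
    uint8_t param1;
    uint8_t param2;
} spdm_header_t;

/* Stand-in for the real transport (MCTP, PCIe DOE, ...): just dump the bytes */
static int transport_send(const uint8_t *buf, int len) {
    for (int i = 0; i < len; i++) printf("%02X ", buf[i]);
    printf("\n");
    return 0;
}

int main(void) {
    /* First step of an SPDM exchange: ask which versions the responder speaks */
    spdm_header_t get_version = {0x10, 0x84, 0x00, 0x00};
    return transport_send((const uint8_t *)&get_version, sizeof get_version);
}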

  2. NIST Cybersecurity Framework

NIST provides a comprehensive set of standards and guidelines that address the security requirements for information systems in various industries. The NIST Cybersecurity Framework, along with specific guidelines like NIST SP 800-53 and NIST SP 800-171, defines best practices for managing cybersecurity risks and ensuring system integrity.

Figure 3 The cybersecurity framework provides a structured approach to cybersecurity risk management, incorporating best practices and guidance. Source: NIST

These standards heavily influence the design and implementation of secure firmware within semiconductor systems, helping organizations meet regulatory compliance and industry standards. With strong encryption, secure boot processes, and robust key management, firmware embedded in semiconductor chips must comply with NIST standards to ensure that systems are protected against evolving threats.

  3. Trusted Computing Group (TCG)

TCG defines industry standards for hardware-based security technologies, including TPMs, which are widely used in semiconductor systems for secure authentication and encryption. TCG’s specifications, such as the TPM 2.0 standard, enable the creation of a hardware-based root of trust within a device.

This ensures that even if the operating system is compromised, the underlying hardware remains secure. The integration of TCG standards into firmware helps strengthen the security posture of semiconductor devices, making them resilient to physical and remote attacks.
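Conceptually, that root of trust anchors a chain in which each boot stage measures and verifies the next before handing over control. The toy C sketch below illustrates only the chain logic; the checksum is a stand-in for the SHA-256 hashing and signature verification against a fused vendor key that a real TPM-backed boot ROM performs.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Toy "measurement" for illustration only; a real root of trust uses a
 * ROM hash engine plus a signature check against a fused vendor key. */
static uint32_t toy_measure(const uint8_t *image, size_t len) {
    uint32_t d = 0;
    for (size_t i = 0; i < len; i++) d = (d << 5) + d + image[i];
    return d;
}

/* Each stage verifies the next before jumping to it; if any link fails,
 * boot halts, so a compromised OS can't subvert the layers beneath it. */
static int verify_next_stage(const uint8_t *image, size_t len,
                             uint32_t expected_digest) {
    return toy_measure(image, len) == expected_digest;
}

int main(void) {
    const uint8_t bootloader[] = "bootloader image bytes...";
    /* Stand-in for the "golden" value provisioned at manufacturing time */
    uint32_t golden = toy_measure(bootloader, sizeof bootloader);

    if (!verify_next_stage(bootloader, sizeof bootloader, golden)) {
        puts("Measurement mismatch: halting boot");
        return 1;
    }
    puts("Bootloader verified: handing off");
    return 0;
}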

Impact of firmware security on different industries

Secure firmware embedded in semiconductors is crucial in advancing various sectors, ensuring the protection of data and systems at a foundational level. Here’s how it’s benefiting key segments:

  1. Financial sector

Secure firmware is essential in safeguarding financial transactions and sensitive data, particularly in banks, payment systems, and online platforms. Self-encrypting drives and hardware-based encryption ensure that financial data remains encrypted, even when stored on physical drives.

Implementing security protocols such as Secure Hash Algorithm (SHA), Advanced Encryption Standard (AES), and public-key cryptography standards ensures that financial data is protected against cyber threats, reducing the risk of data breaches and fraud.

  2. Healthcare

The healthcare sector is increasingly relying on digital technologies to manage patient data. Secure firmware is critical in ensuring that health information remains protected across devices, from medical records to diagnostic machines.

By using encrypted semiconductor solutions and ensuring compliance with standards like Health Insurance Portability and Accountability Act (HIPAA), patient data is safeguarded from unauthorized access. The integration of secure boot processes and data encryption protocols, such as AES-256 and RSA, prevents data leakage and ensures that sensitive health records remain confidential.

  3. Government and national security

Government agencies rely heavily on secure hardware solutions to protect sensitive national data. Secure firmware within semiconductor devices used by government systems ensures that classified information, defense data, and communications remain secure.

Through the implementation of NIST-approved cryptographic algorithms and TCG’s trusted hardware standards, government systems can resist both local and remote threats. This security infrastructure supports national defense capabilities, from intelligence gathering to military operations, indirectly enhancing national security.

  4. Critical infrastructure

The protection of critical infrastructure, such as power grids, transportation systems, and communications networks, is paramount for the functioning of society. Secure firmware in semiconductors enables these systems to operate securely, preventing cyberattacks that could compromise national safety.

Protocols such as SPDM help ensure that all components of critical infrastructure can communicate securely, while hardware-backed encryption ensures that even if systems are breached, data integrity is maintained.

  5. Manufacturing and industrial control systems

In manufacturing environments, industrial control systems that manage production lines, robotics, and automated systems need to be protected from cyber threats. Secure firmware embedded in the semiconductor chips that control these systems helps prevent cyberattacks targeting production processes, which could lead to significant financial losses or safety issues.

For instance, the use of TCG’s TPM technology enables secure authentication and encryption of communication between devices, ensuring that industrial systems remain operational and tamper-free.

  6. Defense and aerospace

In the defense and aerospace sectors, secure firmware is indispensable for the integrity of both commercial and military technologies. From satellites to weapon systems, semiconductor-based firmware security ensures the protection of classified military data and technologies from cyber espionage and attacks.

With the growing adoption of TPMs and other hardware-based security solutions, defense technologies become more resilient to attacks, ensuring the protection of national interests.

Implications for national and global security

As industries become more digitally interconnected, the need for secure hardware has never been more pressing. Secure firmware plays an essential role in protecting data at the hardware level, preventing unauthorized access and ensuring the integrity of information even in the event of physical tampering. This level of protection is vital not only for corporations but also for government institutions and critical sectors that rely on unbreachable security measures.

The ongoing development and refinement of firmware security in semiconductors align with broader global priorities surrounding cybersecurity. Through cutting-edge technologies like self-encrypting drives, secure firmware helps mitigate the risks associated with cyberattacks, such as data theft or system compromise, providing a layer of defense that supports global digital infrastructure.

The semiconductor industry is constantly evolving, pushing the boundaries of what is possible in terms of speed, efficiency, and security. As part of this progress, companies in the semiconductor industry are investing heavily in the development of advanced security measures embedded in their hardware solutions. This innovation is not only crucial for the companies themselves but has far-reaching implications for industries that rely on secure technology, from finance to healthcare, education, and government.

This innovation is not only beneficial for the industries adopting these technologies but also plays a significant role in driving broader policy and technological advancements. As organizations continue to develop and deploy secure semiconductor systems, they contribute to a more resilient and trustworthy digital ecosystem, indirectly bolstering national interests and global technological leadership.

The ongoing development of firmware security in semiconductor systems represents a critical effort in the fight against cyber threats and data breaches. While often unnoticed, the impact of these technologies is profound, helping to secure the digital infrastructure that underpins modern society.

As the semiconductor industry continues to innovate in this space, it raises the bar for security standards across the board, helping to build a more secure digital world.

Karan Puniani is a staff test engineer at Micron Technology.

Related Content

The post What you need to know about firmware security in chips appeared first on EDN.

Another PWM controls a switching voltage regulator

Fri, 05/16/2025 - 16:52

A recent Design Idea, “Three discretes suffice to interface PWM to switching regulators,” demonstrated one way to use PWM to control the output of a typical switching voltage regulator. There were some discussions in the comments section about circuit behavior, which influenced modifications to the design. Here’s a low-cost design that evolved in light of those discussions. A logic gate IC, an op-amp, and a few resistors and capacitors buffer a PWM signal and supply it to the regulator’s feedback pin, Figure 1.

Figure 1 A microprocessor produces a 12-bit PWM signal clocked at 20 MHz that controls a switching voltage regulator, using an inverter IC, an op-amp, resistors, and capacitors to buffer the signal for the regulator’s feedback pin.

Wow the engineering world with your unique design: Design Ideas Submission Guide

For various reasons, it’s difficult, if not impossible, to control a regulator’s output voltage beyond a certain level of accuracy. This design proceeds with a PWM having 12 bits of resolution in mind, operating at a frequency of approximately 4900 Hz.

It’s easy these days to find microprocessors (µPs) that can produce a PWM clocked at 20 MHz. However, the supply currents running through that IC’s power supply bonding wires can cause voltage drops. This means that the PWM signals don’t quite swing to the rails. Worse yet, if the currents vary significantly with the µP’s tasks, the voltage drops can change, and it might not be possible to calibrate the errors out. A simple solution is to buffer the µP with logic gates (typically inverters), which draw negligible current except during switching transients. The gates can be powered from a clean, accurate voltage source, the same as or close in value to that which powers the µP.

The inverter in Figure 1 is a 74LVC2G14GW,125 IC whose paralleled outputs drive an op-amp-based low-pass filter whose passive components are of sufficiently high impedances to negligibly load those outputs. When powered from 3 V or more, this dual inverter has an output resistance of less than 15 Ω from -40°C to +85°C. (If you need to operate the µP at 1.8 V, parallel the 6 inverters of a 74LVC14AD,118 for a less than 19 Ω result.)

The TLV333IDBVR op-amp has a maximum input offset voltage of 15 µV (the maximum offset is specified for a 5-V supply; an unknown increase can be expected if the supply voltage is lower).

Its typical input currents (maximums are not specified) are 150 pA from -40°C to +85°C, contributing an offset of 115 µV through R1, R2, and R3. At 1.8 V, ½ LSb for a 12-bit signal is 220 µV (370 µV with a 3.0-V supply). The filter settles to much less than that 12-bit ½-LSb voltage in 10 ms and has a peak (not peak-to-peak) ripple of less than 50 µV.

R4 and R5 should be chosen so that the intended most positive op-amp output voltage multiplied by R4 / (R4 + R5) is at most slightly greater than Vfb. This allows a regulator output of Vfb. This ratio could be smaller if the minimum desired regulator output is larger than Vfb. The resistors’ parallel combination should be the value specified by the regulator for the single resistor connected between Vfb and ground, typically 10 kΩ. R6 should be set in accordance with the desired range of output voltages.
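To make that selection concrete: with the divider ratio k = Vfb / Vout(max) and the required parallel resistance Rp (typically 10 kΩ), the algebra gives R4 = Rp / (1 - k) and R5 = Rp / k. Here is a short C sketch with assumed example values (Vfb = 0.8 V and a 2.5-V maximum op-amp output; substitute your regulator’s numbers):

#include <stdio.h>

int main(void) {
    /* Assumed values for illustration; use your regulator's datasheet numbers */
    const double vfb  = 0.8;   /* regulator feedback voltage, V */
    const double vmax = 2.5;   /* most positive op-amp output, V */
    const double rp   = 10e3;  /* required R4 || R5 per the datasheet, ohms */

    double k  = vfb / vmax;        /* divider ratio R4 / (R4 + R5) */
    double r4 = rp / (1.0 - k);
    double r5 = rp / k;

    printf("k = %.3f, R4 = %.2f kohm, R5 = %.2f kohm\n",
           k, r4 / 1e3, r5 / 1e3);
    printf("Check: R4||R5 = %.2f kohm, ratio = %.3f\n",
           (r4 * r5) / (r4 + r5) / 1e3, r4 / (r4 + r5));
    return 0;
}

With these example numbers, the nearest standard values (say, 14.7 kΩ and 31.6 kΩ) would then be checked against both the ratio and the parallel-resistance requirements.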

The allowed range of PWM duty cycles should exclude extremes for at least two reasons. First, the op-amp output is only guaranteed to swing within 70 mV of each rail (with a 10-kΩ load connected to half of the supply voltage). Second, the processor’s GPIO in particular (but also the gate to some extent) likely has unequal rise and fall times and delays. Although these differences are small, they have their greatest effects on accuracy at duty-cycle extremes, which therefore should be avoided. Fortunately, accommodating these limitations has a negligible effect on functionality.

In this design, the output voltage is linear with the duty cycle. The regulator’s loop gain is unchanged from that of standard operation. With the regulator disabled until the PWM filter output settles, there are no startup issues. Finally, there is negligible inherent injection of noise into the feedback pin from an external supply.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Another PWM controls a switching voltage regulator appeared first on EDN.
