EDN Network

Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 33 min ago

A precision digital rheostat

Wed, 02/19/2025 - 16:47
Rheostats

Rheostats are simple and ubiquitous circuit elements, usually comprising a potentiometer connected as an adjustable two-terminal resistor. The availability of manual pots with resistances spanning ohms to megohms makes the optimum choice of nominal resistance easy. But when an application calls for a digital potentiometer (Dpot), the problem can be challenging.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Dpots are only available in a resistance range that’s narrow compared to manual pots. They also typically suffer from problematically high wiper resistance and resistance tolerance. These limitations conspire to make Dpots a difficult medium for implementing precision rheostats. Recent EDN design idea (DI) articles have addressed these issues with a variety of strategies and topologies:

While each of these designs corrects one or more complaints on the lengthy list of digital rheostat shortcomings, none fixes them all and some introduce complications of their own. Examples include crossover distortion, unreduced sensitivity to resistance tolerances, resolution-reducing nonlinearity of the programmed resistance, and just plain old complexity.

The design

Figure 1’s circuit isn’t a perfect solution either. But it does synthesize an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1’s Rbw/Rab digital setting (the ratio between the terminal B to wiper resistance and total element resistance).

Figure 1 A precision digital rheostat that synthesizes an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1’s Rbw/Rab.

Here’s how it works.

R = (Va – Vb)/Ia
R = R1/(Raw/Rbw + 1) = R1 Rbw/Rab
Rab = Raw + Rbw (typically 5 kΩ to 10 kΩ)

Where R is the programmed synthetic resistance, R1 is the reference resistor, Raw is the resistance between terminal A and wiper terminal, Rbw is the resistance between B and wiper terminal, and Rab is the total element resistance.

U1 works in “voltage divider” (pot) mode to set the gain of inverting amplifier A2. Pot mode makes gain insensitive to both U1’s wiper resistance (Rw) and Rab. They really don’t matter much—see Figure 4-4 in the Microchip MCP41XXX/42XXX datasheet.

Turning the crank on Figure 1’s design equation math, we get:

Ga2 = Raw/Rbw

Where Ga2 is A2’s gain. Further,

Voltage across R1 = (Va – Vb) + Ga2(Va – Vb) = (Raw/Rbw + 1)(Va – Vb) = (Rab/Rbw)(Va – Vb)
Current through R1 = Ia = (Rab/Rbw)(Va – Vb)/R1

Then, since R = (Va – Vb)/Ia:

R = R1*Rbw/Rab
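As a quick numerical check of these relationships, here is a minimal Python sketch that tabulates the synthesized resistance R and the gain A2 must supply versus the Dpot wiper code. The 256-step, 10-kΩ Dpot and the 10-kΩ R1 are illustrative assumptions (an MCP41010-class part), not values taken from Figure 1.

# Illustrative sketch: synthesized resistance R and A2 gain vs. Dpot wiper code.
# Assumed values (not from Figure 1): 256-step Dpot, Rab = 10 kOhm, R1 = 10 kOhm.
RAB = 10_000.0   # total Dpot element resistance, ohms (assumed)
R1 = 10_000.0    # reference resistor, ohms (assumed)
STEPS = 256      # 8-bit MCP41xxx-style Dpot (assumed)

def rheostat(code):
    """Return (R, Ga2) for wiper code 1..255; code 0 would demand infinite gain."""
    rbw = RAB * code / STEPS   # terminal B-to-wiper resistance
    raw = RAB - rbw            # terminal A-to-wiper resistance
    r = R1 * rbw / RAB         # synthesized resistance, R = R1*Rbw/Rab
    ga2 = raw / rbw            # required gain of inverting amplifier A2
    return r, ga2

for code in (16, 64, 128, 192, 255):
    r, ga2 = rheostat(code)
    print(f"code={code:3d}  R={r:7.1f} ohm  Ga2={ga2:5.2f}")

Running it at code = 64 reproduces the operating point discussed below (R = R1/4 with Ga2 = 3).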

Va is lightly loaded by A1’s ~10 picoamp (pA) input bias, so R1 can range from hundreds of ohms up to multiple megohms as the application may dictate. It should be a precision part, certainly 1% or better; from there, programming and the math above take over.

Figure 2 plots the linear relationship between R and Rbw.

Figure 2 Linear relationship between R and Rbw showing the circuit synthesizes an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1’s Rbw/Rab.

A compensation capacitor (C1) probably isn’t necessary for the parts selection shown in Figure 1 for A2 and U1. But if a faster amplifier or a higher resistance Dpot is chosen, then 10 pF to 20 pF would probably be prudent.

Meanwhile, I think it would be fair to say this design looks competitive with its peers. But earlier I described it as imperfect. Besides being a single-terminal topology (like two others on the list), where else does it fall short of being a complete solution to the ideal digital rheostat (Digistat) problem?

Shortcomings

Here’s where: As Figure 3 shows, when the programmed value for R goes down, A2’s gain (Ga2) must go up. Reading the graph from right to left, we see gain rising moderately as R declines by 75% from R1 to R1/4, where Rbw/Rab = 64/256 and gain = 3, but then it takes off. This tends to exaggerate errors like input offset, finite GBW, and other op-amp nonidealities while creating the possibility of early A2 saturation at relatively low signal levels.

Figure 3 Graphs for Ga2 (red) and R/R1 (black) versus Rbw/Rab on the x-axis. When the programmed value for R goes down, Ga2 must go up.

The severity of the impact of these effects on the utility of the design, whether mild, serious, or fatal, will depend on how low you need to go in R/R1 and other specifics of the application. So, it’s certainly not perfect, but maybe it’s still useful somewhere.

Two-terminal design

And about that single terminal problem. If you have an application that absolutely requires a two-terminal programmable resistance, you might consider Figure 4. Depending on the external circuitry, it might not oscillate.

Figure 4 Duplicate and cross-connect Figure 1’s circuitry to get a two-terminal programmable resistance.

In closing…

Thanks to frequent contributor Christopher R. Paul for his clever innovations and stimulating discussions on this topic; I would likely never have come up with this design without his help. More thanks go to editor Aalyia Shaukat for her clever creation of this DI section that makes fun teamwork like this possible in the first place. This article would definitely never have happened without her help.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post A precision digital rheostat appeared first on EDN.

SoC interconnect automates processes, reduces wire length

Wed, 02/19/2025 - 13:58

A new network-on-chip (NoC) IP aims to dramatically accelerate chip development by introducing artificial intelligence (AI)-driven automation and reducing wire length to lower power use in system-on-chip (SoC) interconnect design. Arteris, which calls its newly introduced FlexGen interconnect IP a smart NoC, claims to deliver a 10x productivity boost, shortening design iterations from weeks to days.

Modern chips—connected by billions of wires—keep growing in size and complexity. Modern SoCs have 5 to 20+ unique NoC instances, and each instance can require 5-10 iterations. As a result, SoC design complexity has surpassed manual human capabilities, which calls for smarter NoC automation.

“In SoC interconnect, while technology has advanced to new levels, a lot of work is still done in manual mode,” said Michal Siwinski, CMO of Arteris. FlexGen accelerates chip design by shortening iterations from weeks to days and reducing how many are needed, for greater efficiency.

“While FlexGen is still using the tried-and-tested NoC IP technology as basic building blocks, it automates the existing infrastructure by employing AI technology,” said Andy Nightingale, VP of product management and marketing at Arteris. “With FlexGen, we automate the NoC IP generation to reduce the manual work while opening high-quality configurations that rival or surpass the manual designs.”

Figure 1 A FlexNoC manual interconnect (above) is shown for an ADAS chip, while an automated FlexGen interconnect (below) accelerates this chip design by up to 10x. Source: Arteris

According to Nightingale, it enhances engineering efficiency by 3x while delivering expert-quality results with optimized routing and reduced congestion. Dream Chip Technologies, a supplier of advanced driver assistance systems (ADAS) silicon solutions, acknowledges reducing design iterations from weeks to days while using FlexGen in its Zukimo 1.1 automotive ADAS chip design.

“FlexGen’s automated NoC IP generation allows us to create floorplan adaptive topologies with complex automotive traffic requirements within minutes,” said Jens Benndorf, GM at Dream Chip Technologies. “That enabled rapid experimentation to find design sweet spots and to respond quickly to floorplan changes with almost push-button timing closure.”

Shorter wire length

With AI comes a compute performance explosion, and as a result, interconnect complexity in SoC designs is growing exponentially, leading to a huge increase in the number of wires. FlexGen claims to reduce wire length by up to 30% to improve chip or chiplet power efficiency.

“We are also tackling the big problem of wire length in modern SoC designs,” said Nightingale. “As the gate count size reduces, it inevitably leads to dynamic power issues due to massive data traffic across wires.” By reducing wire length, FlexGen interconnect IP can reduce overall system power and thus mitigate the heating problems caused by the energy density of moving massive amounts of data across SoC interconnects.

Figure 2 FlexNoC manual interconnect (above) is shown with the best performance, while automated FlexGen (below) significantly reduces the interconnect wire length. Source: Arteris

Siwinski added that the number of gates doesn’t matter at smaller nodes. “Power from wire length kills you, so we reduce wire length to reduce overall power, performance, and area (PPA) in SoC designs.” That’s crucial as SoCs scale and become more powerful to meet the demands of applications like AI, autonomous driving, and cloud computing.

FlexGen is processor agnostic and supports Arm, RISC-V, and x86 processors. Moreover, its IP generation is highly repeatable to facilitate incremental design.

Related Content


The post SoC interconnect automates processes, reduces wire length appeared first on EDN.

The Google Chromecast Gen 3: Gluey and screwy

Tue, 02/18/2025 - 17:03

In my recent 2nd generation Google Chromecast teardown, “The Google Chromecast Gen 2 (2015): A Form Factor Redesign with Beefier Wi-Fi, Too,” I noted that I subsequently planned on tearing down the Chromecast Ultra, followed by the 3rd generation Chromecast, chronologically ordered per their respective introduction dates (2016 and 2018).

I’ve subsequently decided to flip-flop that ordering, tackling the 3rd generation Chromecast first, in the interest of grouping together devices with output resolution commonality. All three Chromecast generations, including 2013’s original version, top out at 1080p video output, although the 3rd generation model also bumped up the frame rate from 30 fps to 60 fps; the Ultra variant you’ll see in the near future conversely did 4K. If you’re wondering why I’m referring to them all in the past tense, by the way, it’s because none of them are in production any longer, although everything but the first-generation Chromecast still gets software updates.

Google also claimed at intro that the Chromecast 3 not only came in new color options:

but was also 15% faster than its predecessor (along with adding support for Dolby Digital Plus and fully integrated Chromecast with Nest smart speakers), although the company was never specific about what “15% faster” meant. Was it only in reference to the already mentioned 1080p60 smoother video-playback option? One might deduce that it also referred to more general UI responsiveness, but if true was this due to faster innate processing—which all users would conceivably experience—or higher wireless network performance, only for those with advanced LANs? Or both? I hope this teardown will help answer these and other questions (like why does Wikipedia list no CPU or memory details for this generation?) to at least a degree.

Generally speaking, I try whenever possible to avoid teardowns of perfectly good hardware that end up being destructive, i.e., leaving the hardware in degraded condition that precludes me from putting it back together afterwards and donating it to someone else. In such cases, I instead strive to focus my attention on already nonfunctional “for parts only” devices sourced from eBay and elsewhere. This time, however, all I could find were still-working eBay options:

although the one I picked was not only inexpensive ($10 plus shipping and sales tax, $16.63 total) but was absent its original packaging:

Here’s a closeup of the micro-USB connector—a legacy approach that’s rapidly being obsoleted by the USB-C successor—at the other end of the USB-A power cable:

And here’s a top view of our patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Followed by a closeup of the top of the main body:

Same goes for the underside:

That printing on the bottom is quite scuffed after what appears to be long-term use, although I suspect it was already faint from the get-go. In the center are “UL Listed” and HDMI logos, with the phrase “ITE E258392” in-between them. And here’s what it says around the circumference:

Google Model NC2-685
1600 Amphitheater Parkway
Mountain View, CA 94043
CAN ICES-3
(B)NMB-3(B)
IC 10395A-NC26A5
FCC ID A4RNC2-6A5B
Made in Thailand
06041HFADVM445

…whatever (some of, I already get the rest of) that means. And phew!

Here’s the HDMI connector on one end of the cable:

And jumping ahead in time a bit, the other end, entering the partially disassembled body:

Back to where we were before in time, the opposite side of the body showcases, left to right, a hardware reset button (you can also reset via software, if the Chromecast is already activated and mobile device-connected), the aforementioned micro-USB power input and a status LED:

Speaking of sides, you probably already noticed the seam around the circumference of the main body. Also speaking of sides, let’s see if it gets us inside. First off, I used the warmed-up iOpener introduced previously in the Chromecast 2 teardown to heat up the presumed glue holding the two halves together at the seam:

Then I set to work with its iFixit-companion Jimmy:

which got me partly, albeit not completely, to my desired end result, complete with collateral damage in the process:

I suspected that the diminutive dollop of under-case thermal paste I’d encountered with the 2nd-generation Chromecast was more substantially applied this time, to counteract the higher heat output associated with the 3rd-generation unit’s “15% faster” boast. So, I reheated the iOpener, reoriented it on the device, waited some more:

and tried, tried again. Yep, there’s paste inside:

Veni, vidi, vici (albeit, in this case, not particularly swiftly):

My, that’s a lot of (sloppily applied, to boot) glue:

The corresponding paste repository on the inside of the upper-case half is an odd spongy donut-shaped thing. I’ve also scraped away some of the obscuring black paint to reveal the metallic layer beneath it, which presumably acts as a secondary heat sink:

Some rubbing alcohol and a tissue cleaned the blue-green goop off the Faraday cage:

Although subsequently removing the retaining screws on either side of the HDMI cable did nothing to get the cage itself off:

Resigning me to just ripping it off (behavior that, as you’ll soon see, wasn’t a one-time event):

Followed by (most of) the black tape underneath it:

I never actually got the HDMI cable detached from the lower-case half, but with the screws removed, I was at least able to disconnect it from the PCB:

enabling me to remove the PCB from the remainder of the case…at least theoretically:

This time around, there’s an abundance of thermal paste on both sides:

Even after jamming the Jimmy in the gap in attempting to cut the offending paste in half, I still wasn’t able to separate the PCB from the case, specifically down at the bottom where the micro-USB connector was. The ambient light in the room was starting to get dim and I needed to leave for Mass soon, so I—umm—just gave the PCB a yank, ripping it out of the case:

and quickly snapped the remainder of the photos you’ll see, including the first glimpse of the bottom of the PCB:

When I got back home and reviewed the shots, I was first flummoxed, then horrified, and finally (to this very day, in fact) mortified and embarrassed. And I bet that at least a few of you eagle-eyed readers already know where I’m going with this. What’s that in the bottom left-ish edge of the inside of the back half of the case (with the reset button rubber “spring” to its left and the light guide for the activity LED to its right)? Is that…a still-attached screw? Surrounded by a chunk of PCB?

Yes…yes it is. This dissected device is destined solely for the landfill, “thanks” to my rushed ham-handedness. Sigh:

Guess I might as well get this Faraday cage off too:

And clean off the additional inside paste:

The IC in the upper left is Marvell’s Avastar 88W8887 quad wireless transceiver, supporting 1×1 802.11ac, Bluetooth 4.1, NFC, and FM, only some of these functions actually implemented in this design. It’s the same chip used in the 2nd generation Chromecast, so the basis for the “15% faster” claim seemingly doesn’t source here. Next to it on the right is an SK Hynix H5TC4G63EFR-RDA 4 Gbit LPDDR3-1866 SDRAM. Note too the LED in the lower left corner, and the PCB-embedded antennae on both sides. And, since this PCB is “toast” anyway (yes, note the chunk out of it in the lower right, too), I went ahead and lifted the upper right corner of the cage frame to assure myself (and you) that nothing meaningful was hiding underneath:

Back to the previously seen front side of the PCB:

At far left (with the hardware reset switch below it in the lower left corner…and did I mention yet the missing chunk of PCB to the right of it?), peeking out from the cage frame, is a small, obviously Marvell-sourced, IC labeled as follows (ID, anyone?):

MRVL
G868
952GAX

which I suspect has the same (unknown) function(s) as a similarly (but not identically) labeled chip I’d found in the 2nd-generation Chromecast:

MRVL
G868
524GBD

To its left is the much larger Synaptics MM3006, formerly known as the Marvell 88DE3006 (Synaptics having acquired Marvell’s Multimedia Solutions Business in mid-2017). Again, it’s the same IC as in the 2nd generation Chromecast. And finally, at far right is a Toshiba TC58NVG2S0 4 Gbit NAND flash memory. Same flash memory supplier as before. Same flash memory technology as before. But hey, twice the capacity as before (presumably to provide headroom for the added firmware “support for Dolby Digital Plus and fully integrated Chromecast with Nest smart speakers” mentioned earlier)! So, there’s that…

Aside from a bigger flash memory chip (and, ok, getting rid of the magnet integrated into the Chromecast 2’s HDMI connector), what’s different between the 2nd and 3rd generation Chromecasts, and where does Google’s “15% faster” claim come from? The difference, I suspect, originates with the DRAM. I hadn’t specifically mentioned this in the previous teardown, but the Samsung DRAM found there, while also LPDDR3 in technology and 4 Gbit in capacity, was a “K0” variant reflective of a 1600 MHz speed bin. This time, conversely and as already noted, the DRAM runs at 1866 MHz. My guess is that this uptick also corresponds to a slightly faster speed bin for the Marvell-now-Synaptics application processor. And therein lies, between the two, the modest overall system performance boost.

Agree or disagree, readers? Any other thoughts? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post The Google Chromecast Gen 3: Gluey and screwy appeared first on EDN.

DRAM basics and its quest for thermal stability by optimizing peripheral transistors

Tue, 02/18/2025 - 07:02

For decades, compute architectures have relied on dynamic random-access memory (DRAM) as their main memory, providing temporary storage from which processing units retrieve data and program code. The high-speed operation, large integration density, cost-effectiveness, and excellent reliability have contributed to the widespread adoption of DRAM technology in many electronic devices.

The DRAM bit cell—the element that stores one bit of information—has a very basic structure. It consists of one capacitor (1C) and one transistor (1T) integrated close to the capacitor. While the capacitor’s role is to store a charge, the transistor is used to access the capacitor, either to read how much charge is stored or to store a new charge.

The 1T-1C bit cells are arranged in arrays containing word and bit lines, and the word line is connected to the transistors’ gate, which controls access to the capacitor. The memory state can be read by sensing the stored charge on the capacitor via the bit line.

Over the years, the memory community introduced subsequent generations of DRAM technology, enabled by continuous bit-cell density scaling. Current DRAM chips belong to the ’10-nm class’ (denoted as D1x, D1y, D1z, D1a…), where the half pitches of the active area in the memory cell array range from 19 nm down to 10 nm. However, the AI-driven demand for better performing and larger capacity DRAM is propelling R&D into beyond 10-nm generations.

This requires innovations in capacitors, access transistors, and bit cell architectures. Examples of such innovations are high-aspect ratio pillar capacitors, the move from saddle-shaped (FinFET-based) access transistors to vertical-gate architectures, and the transition from 6F2 to 4F2 cell designs—F being the minimum feature size for a given technology node.

A closer look inside a planar 1T-1C DRAM chip: The peripheral circuit

To enable full functionality of the DRAM chip, several other transistors are needed besides the access transistors. These additional transistors play a role in, for example, the address decoder, sense amplifier, or output buffer function. They are called DRAM peripheral transistors and are traditionally fabricated next to the DRAM memory array area.

Figure 1 The 1T-1C-based DRAM memory array and DRAM peripheral area are shown inside a DRAM chip. Source: imec

DRAM peripheral transistors can be grouped into three main categories. The first category is regular logic transistors: digital switches that are repeatedly turned on and off. The second category is sense amplifiers—analog types of transistors that sense the difference in charge between two bit cells. A small positive difference is amplified into a high voltage (representing a logic 1) and a small negative difference into zero voltage (representing a logic 0).

These logical values are then stored in a structure of latches called the row buffer. The sense amplifiers typically reside close to the memory array, consuming a significant area of the DRAM chip. The third category is row decoders: transistors that pass a relatively high bias (typically around 3 V) to the memory element to support the write operation.

To keep pace with the node-to-node improvement of the memory array, the DRAM periphery evolves accordingly in terms of area reduction and performance enhancement. In the longer term, more disruptive solutions may be envisioned that break the traditional ‘2D’ DRAM chip architecture. One option is to fabricate the DRAM periphery on a separate wafer, and bond it to the wafer that contains the memory array, following an approach introduced in 3D NAND.

Toward a single and thermally stable platform optimized for peripheral transistors

The three groups of peripheral transistors come with their own requirements. The regular logic transistors must have good short channel control, high on current (Ion), and low off current (Ioff). With these characteristics, they closely resemble the logic transistors that are part of typical systems-on-chips (SoCs). They also need to enable multiple threshold voltages (Vth) to satisfy different design requirements.

The other two categories have more dissimilar characteristics and do not exist in typical logic SoCs. The analog sense amplifier requires good amplification, benefitting from a low threshold voltage (Vth). In addition, since signals are amplified, the mismatch between two neighboring sense amplifiers must be as low as possible. The ideal sense amplifier, therefore, is a very repeatable transistor with good analog functionality.

Finally, the row decoder is a digital transistor that needs an exceptionally thick gate oxide—compared to an advanced logic node—to sustain the higher bias. This makes the transistor inherently more reliable at the expense of being slower in operation.

Figure 2 Here are the main steps needed to fabricate a transistor for DRAM peripheral applications; the critical modules requiring specific developments are underlined. Source: PSS

In addition to these specific requirements, there are a number of constraints that apply to all peripheral transistors. One critical issue is the thermal stability. In current DRAM process flows with DRAM memory arrays sitting next to the periphery, peripheral transistors are fabricated before DRAM memory elements. The periphery is thus subjected to several thermal treatments imposed by the fabrication of the storage capacitor, access transistor, and memory back-end-of-line.

Peripheral transistors must, therefore, be able to withstand ‘DRAM memory anneal’ temperatures as high as 550-600°C for several hours. Next, the cost-effectiveness of DRAM chips must be preserved, driving the integration choices toward simpler process solutions than what logic flows are generally using.

To keep costs down, the memory industry also favors a single technology platform for various peripheral transistors—despite their individual needs. Additionally, there is a more aggressive requirement for low leakage and low power consumption, which benefits multiple DRAM use cases, especially mobile ones.

The combination of all these specifications makes a direct copy of the standard logic process flow impossible. It requires optimization of specific modules, including the transistors’ gate stack, source/drain junctions, and source/drain metal contacts.

Editor’s Note: This is the first part of an article series about the latest advancements in DRAM designs. This part focuses on DRAM basics, peripheral circuits, and the journey toward a single, cost-effective, and thermally stable technology platform optimized for peripheral transistors. The second part will provide a detailed account of DRAM periphery advancements.

Alessio Spessot, technical account director, has been involved in developing advanced CMOS, DRAM, NAND, emerging memory array, and periphery during his stints at Micron, Numonyx, and STMicroelectronics.

Naoto Horiguchi, director of CMOS device technology at imec, has worked at Fujitsu and the University of California Santa Barbara while being involved in advanced CMOS device R&D.

Related Content


The post DRAM basics and its quest for thermal stability by optimizing peripheral transistors appeared first on EDN.

DIY custom Tektronix 576 & 577 curve tracer adapters

Mon, 02/17/2025 - 18:00
A blast from the past

Older folks may recall the famous Tektronix 576 and 577 curve tracers from half a century ago. A few of these have survived the decades and ended up in some lucky engineer’s home/work lab.

Wow the engineering world with your unique design: Design Ideas Submission Guide

We were able to acquire a Tek 577 curve tracer with a Tek 177 standard test fixture from a local surplus house; it had been used at Sandia Labs and was not functional. Even non-functional, these old relics still command a high price, which set us back $500!! With the help of an online service manual and some detailed troubleshooting, the old 577 was revived by replacing all the power supply’s electrolytic capacitors and a few op-amps, plus removing the modifications Sandia had installed.

Once operational we went looking for the various Tek device under test (DUT) adapters for the Tek 177 standard test fixture, these adapters are indeed rare and likewise expensive which sent us on the DIY journey!

The DIY journey

Following a similar path to Jay_Diddy_B1 [1], we set out to develop a DIY adapter to replace the rare and expensive Tek versions. The popular T7 multi-function component tester employs a small ZIF socket for leaded-component attachment that works very well, so we decided to do the same for the custom Tek adapter, using the right and left sides of the ZIF socket to facilitate DUT comparisons with the Tek 177 standard test fixture. This can be seen in Figure 1.

Figure 1 Custom ZIF-based Tek 577 adapter where the right and left sides of the ZIF socket facilitate DUT comparisons with the Tek 177 standard test fixture.

As shown in Figure 2, a low-cost PCB was developed with SMD ferrites added to the nominal “Collector” and “Base” 577 terminals to help suppress any parasitic DUT oscillations. Connectors were also added to allow for external cables if desired (something we have never used). The general idea was to use the 6 banana jacks as support for holding the PCB in place with the ZIF on top of the PCB where one can directly attach various DUTs.

This approach has worked well and allows easy attachment of various leaded components, including TO-126 and TO-220 power devices.

Figure 2 The custom adapter installed on the Tek 177 standard test fixture.

Applying the curve tracer to an SMD DUT

However, this still left SMD parts without a simple means of connection to the Tek 577 curve tracer and its 177 fixture, so we set out to investigate this.

After studying the methods Tektronix utilized, we discovered some low-cost toggle clamps (found on AliExpress and elsewhere), which are used for clamping planar objects to a surface for machining. Figure 3 shows the custom toggle clamps used on a custom SMD DUT along with the custom adapter installed on the Tek 177 standard test fixture.

Figure 3 Custom toggle-type SMD adapter for the Tek 577 where using the pair of toggle arms allows both the right and left sides of the Tek 177 fixture to be utilized for direct SMD component comparisons.

These clamps could be repurposed to hold an SMD DUT in place, which resulted in a custom PCB being developed to mount directly on top of the ZIF-based PCB previously discussed (Figure 4).

Figure 4 The custom SMD PCB that can be used with toggle clamps. This can be installed on the Tek 177 fixture for the Tek 577.

The toggle arms allow slight pressure to be applied to the SMD DUT where the leads make contact with the PCB’s exposed surfaces. Using a pair of toggle arms allows both the right and left sides of the Tek 177 fixture to be utilized for direct SMD component comparisons.

A connector on the rear of the PCB is mounted on the bottom side and mates with another connector on the ZIF type PCB, which allows connection to the 6 banana jacks that plug into the Tek 177 Fixture. Four nylon standoffs provide mechanical support and hold the two PCBs together. This setup allows easy SMD component installation and removal with little effort. Figure 5 shows both adapters for the Tek 577 with 177 standard test fixture.

Figure 5 Both adapters for the Tek 577 with 177 standard test fixture.

Both the ZIF and the SMD adapters have served well and allow most components to be easily evaluated with the old Tektronix 576 and 577 curve tracers. Figure 6 shows the custom toggle-type SMD adapter in action with a pair of DUTs.

Figure 6 Custom toggle-type SMD adapter in operation with a pair of DUTs.

A word of caution

Just a word of caution when using these and any adapters, fixtures, or leads with the Tek 576 and 577 curve tracers: these instruments can produce lethal voltages across the exposed terminals. The Tek 177 standard test fixtures were originally supplied with plastic protective covers and sensor switches which removed the DUT stimulus when the plastic cover was open. In the old Tek service manuals, there was a modification method to defeat the Tek 177 sensor switch which many engineers employed, and many also removed the plastic protective covers.

Anyway, I hope some folks lucky enough to have an old Tek 576 or 577 curve tracer with Tek 177 standard test fixture find these custom DIY adapters useful if they don’t already have the old Tek OEM elusive adapters!

Michael A Wyatt is a life member of the IEEE and has continued to enjoy electronics ever since his childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and is now (semi) retired with Wyatt Labs. During his career he accumulated 32 US patents and has published several EDN articles, including a Best Idea of the Year in 1989.

References

  1. “SMD Test Fixture for the Tektronix 576 Curve Tracer,” EEVblog Electronics Community Forum, www.eevblog.com/forum/projects/smd-test-fixture-for-the-tektronix-576-curve-tracer/.

Related Content


The post DIY custom Tektronix 576 & 577 curve tracer adapters appeared first on EDN.

Intel: The disintegration of a semiconductor giant?

Mon, 02/17/2025 - 12:00

What’s going on at Intel, the largest beneficiary of the U.S. push to onshore chip manufacturing? While the semiconductor industry was still reeling from a Bloomberg report about TSMC in talks for a controlling stake in Intel Foundry at the Trump administration’s request, The Wall Street Journal threw another stunner about Broadcom considering a bid for Intel’s CPU business.

Figure 1 Despite its financial woes, Intel has the largest and most advanced chip manufacturing operation owned by a U.S. company.

Intel, once a paragon of semiconductor technology excellence, has been on a losing streak for nearly a decade. Pat Gelsinger, the company’s overly ambitious former CEO, made an expensive bid to take Intel into the chip contract manufacturing business, which eventually became a liability for the Santa Clara, California-based semiconductor giant.

Meanwhile, it continued to lose market share in its bread-and-butter CPU business to archrival AMD and largely ceded the artificial intelligence (AI) chips boom to Nvidia. In this backdrop, according to Bloomberg, the previous U.S. administration considered Intel Foundry’s merger with GlobalFoundries (GF), which produces older generation chips and abandoned cutting-edge process nodes years ago.

While that was a non-starter, the present U.S. administration seems to have taken a more pragmatic approach by engaging TSMC to take partial ownership of Intel’s fabs, thus throwing a financial lifeline to money-losing Intel. Moreover, TSMC, in full control of its chip manufacturing operations, is expected to bring stability with its highly successful fabrication process recipes.

However, as the Bloomberg report points out, these talks are at an early stage, and it’s not clear what’s in it for TSMC. While the Taiwanese super fab expressed its lack of interest in Intel’s foundries a few months ago, its about-face on this matter seems to be linked to the current geopolitical turmoil. More details on this matter are expected to emerge in the coming days.

Broadcom eying Intel’s CPU business

The case for Broadcom potentially acquiring Intel’s CPU and related design businesses is less mysterious. The WSJ story claims that Broadcom is studying the possibility of acquiring Intel’s chip design business. If this matures alongside TSMC’s potential takeover of Intel Foundry, it’ll be the end of the road for the Intel brand as we know it.

However, the report clarifies that, like the TSMC matter, Broadcom’s talks regarding Intel are preliminary and largely informal. Furthermore, Broadcom will only proceed if Intel finds a manufacturing partner; here, it’s important to note that Broadcom and TSMC are working separately.

Figure 2 Despite losing market share to AMD and Nvidia, Intel owns a rich array of semiconductor design resources and patents.

The semiconductor industry rumor mill is in full swing, and we are likely witnessing the fall of an American corporate icon in real time. This is a stark reminder of the semiconductor industry’s hypercompetitive nature, which doesn’t spare the missteps of even storied companies like Intel.

Intel’s woes are clearly beyond the reflection phase, and the damage done during Gelsinger’s tenure seems irreparable. However, the U.S. administration also sees Intel as an entity critical to national security. Will that be a blessing in disguise or a catalyst for its quick demise? Time will tell.

Related Content


The post Intel: The disintegration of a semiconductor giant? appeared first on EDN.

Infineon expands package choices for SiC MOSFETs

Fri, 02/14/2025 - 16:33

Infineon has introduced Q-DPAK and TOLL package options to its lineup of 650-V CoolSiC Generation 2 (G2) MOSFETs. Leveraging G2 technology, these devices enable faster switching and lower power losses in high- and medium-power switched-mode power supplies for AI servers, EV chargers, and renewable energy equipment.

With thermal cycling onboard, the TOLL package reduces PCB footprint, enabling compact system designs. In SMPS applications, it can also help lower system-level manufacturing costs.

The Q-DPAK expands Infineon’s topside-cooled product family, which includes CoolSiC, CoolMOS, CoolGaN, and OptiMOS devices. Designed for maximum power density and efficiency, these devices achieve 95% direct heat dissipation, allowing both sides of the PCB to be used for improved space management and reduced parasitic effects.

The 650-V CoolSiC G2 MOSFETs in TOLL packages are available with on-resistance values ranging from 10 mΩ to 60 mΩ. Q-DPAK variants are available with on-resistance values of 7 mΩ, 10 mΩ, 15 mΩ, and 20 mΩ.

CoolSiC G2 product page

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Infineon expands package choices for SiC MOSFETs appeared first on EDN.

Diodes afford ESD protection for automotive networks

Fri, 02/14/2025 - 16:33

Nexperia’s PESD1ETH10 diodes protect sensitive electronics in 10Base-T1S in-vehicle networks from ESD damage. With a maximum capacitance of 0.4 pF, they also support higher-speed 100Base-T1 and 1000Base-T1 automotive networks while maintaining signal integrity. Additionally, the devices comply with Open Alliance requirements, ensuring EMC performance and robustness for 10Base-T1S networks.

PESD1ETH10 diodes deliver single-line ESD protection up to 18 kV (IEC 61000-4-2) and up to 15 kV for 1000 discharges (Open Alliance). These diodes cover the full range of automotive board net voltages, including 12 V for cars, 24 V for trucks and commercial vehicles, and 48 V for hybrid and electric vehicles.

High-bandwidth 100Base-T1 and 1000Base-T1 networks drive automotive connectivity, but many legacy systems still use older standards like CAN and LIN. Replacing these with 10Base-T1S simplifies integration, offering a unified network for all automotive applications. The PESD1ETH10 diodes provide ESD protection across all automotive Ethernet networks, streamlining board designs and supply chains.

The PESD1ETH10L-Q is offered in a 1.0×0.6×0.48-mm DFN1006-2 package, while the PESD1ETH10LS-Q comes in a 1.0×0.6×0.37-mm DFN1006BD-2 package with side-wettable flanks for automated optical inspection.

PESD1ETH10 product page

Nexperia

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Diodes afford ESD protection for automotive networks appeared first on EDN.

Software enables seamless IoT device management

Fri, 02/14/2025 - 16:32

congatec’s aReady.IOT software building blocks offer secure IoT connectivity from the company’s aReady.COM computer-on-modules to the cloud. With aReady.IOT, users can focus on their core competencies while congatec simplifies application development, enabling seamless communication and data transfer between systems and devices.

aReady.IOT allows users to remotely monitor, control, and manage their aReady.COM-based applications, connected peripherals, and sensors. These preconfigured blocks support communication via protocols such as OPC UA, MQTT, and REST. Acquired data can be used for maintenance, management, and predictive maintenance. Additionally, data can be processed at the edge for storage and visualization.

Preconfigured modules in aReady.IOT offer a range of scalable services across both application hardware and software layers. The COM Manager, Application Manager, and Fleet Manager each provide unique capabilities to optimize different aspects of the application. Additionally, congatec can offer bi-directional cloud connectivity via the Cloud Connector, supporting services like AWS, Azure, or Telekom Cloud.

aReady.IOT product page 

aReady.COM product page 

congatec

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Software enables seamless IoT device management appeared first on EDN.

QuickLogic enhances eFPGA design tool

Fri, 02/14/2025 - 16:32

Version 2.9 of the Aurora embedded FPGA tool suite from QuickLogic enables seamless integration of block RAM (BRAM) and DSP functions. Along with its new BRAM and DSP IP configurator, the software’s place and route tools improve runtime by up to 2 times.

Other upgrades in Aurora 2.9 include custom function support, which enables the instantiation of lookup table (LUT) macros to create custom functions. The release also introduces interactive path analysis within the new GUI, allowing users to debug design timing in greater detail by highlighting critical path routing. This visibility helps users make informed adjustments to improve timing performance.

Aurora’s inferencing feature streamlines the implementation of reconfigurable computing algorithms by automatically adapting BRAM read/write widths, eliminating the need for manual RTL design modifications.

The Aurora eFPGA development tool suite is now available for Windows 10/11 and major Linux distributions, including CentOS, RedHat, and Ubuntu, via a unified Linux installer. The Aurora Pro version supports Synopsys Synplify for logic synthesis.

Aurora product page

QuickLogic

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post QuickLogic enhances eFPGA design tool appeared first on EDN.

u-blox grows Bluetooth LE module portfolio

Fri, 02/14/2025 - 16:32

New variants in the u-blox NORA-B2 Bluetooth LE 6.0 module family integrate Nordic Semiconductor’s entire nRF54L series of ultra-low power wireless SoCs. Offering a choice of antennas and chipsets, these modules consume up to 50% less current than previous-generation devices while doubling processing capacity.

The NORA-B2 series comprises four variants that differ in memory size, design architecture, and price level. Each variant comes with either an antenna pin or embedded antenna.

  • NORA-B20 features an nRF54L15 SoC and integrates a 128-MHz Arm Cortex-M33 processor, a RISC-V coprocessor, and an ultra-low power multiprotocol 2.4-GHz radio. It comes with 1.5 MB of NVM and 256 KB RAM.
  • NORA-B21, based on an nRF54L10 SoC, is designed for mid-range applications. It has 1.0 MB of NVM and 192 KB of RAM and handles multiple wireless protocols simultaneously, including Bluetooth LE, Bluetooth Mesh, Thread, Matter, Zigbee, and Amazon Sidewalk.
  • NORA-B22 employs an nRF54L05 SoC. It is intended for cost-sensitive applications but still provides access to up to 31 GPIOs. It includes 0.5 MB of NVM and 96 KB of RAM.
  • NORA-B26, based on an nRF54L10, is designed for customers using the network coprocessor architecture. It comes pre-flashed with the u-blox u-connectXpress software, allowing customers to easily integrate Bluetooth connectivity into their products with no prior knowledge of Bluetooth LE or wireless security.

All NORA-B2 modules are designed for PSA Certified Level 3 security and meet the Bluetooth Core 6.0 specification, including channel sounding for accurate ranging. They also carry global certification, enabling manufacturers to launch products worldwide with minimal effort.

NORA-B20 samples are available now, while NORA-B21 and B22 are in limited evaluation. A pre-release of u-connectXpress for NORA-B26 is available for early adopters.

NORA-B2 product page 

u-blox 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post u-blox grows Bluetooth LE module portfolio appeared first on EDN.

Why RISC-V is a viable option for safety-critical applications

Fri, 02/14/2025 - 16:30
An intro to RISC-V

As safety-critical systems become increasingly complex, the choice of processor architecture plays an important role in ensuring functional safety and system reliability. Consider an automotive brake-by-wire system, where sensors detect the pedal position, software interprets the driver’s intent, and electronic controls activate the braking system. Or commercial aircraft relying on flight control computers to interpret pilot inputs and maintain stable flight. Processing latencies or failures in these systems could result in unintended behaviors and degraded modes, potentially leading to fatal accidents.

The RISC-V architecture’s inherent characteristics—modularity, simplicity, and extensibility—align with the demands of functional safety standards like ISO 26262 for automotive applications and DO-178C for aviation software. Unlike proprietary processor architectures, RISC-V is an open standard instruction set architecture (ISA) developed by the University of California, Berkeley, in 2011. The architecture follows reduced instruction set computing (RISC) principles, emphasizing performance and modularity in processor design.

RISC-V is set apart by its open, royalty-free nature combined with a clean-slate design that eliminates the legacy compatibility constraints of traditional architectures. The ISA is structured as a small base integer set with optional extensions, allowing processor designers to implement only the features needed for their specific applications.

This article examines the technical advantages and considerations of implementing RISC-V in safety-critical environments.

Benefits for safety-critical industries

Traditional proprietary architectures, such as Arm, have served safety-critical industries well, but challenges around supplier diversity, customization needs, and safety certification requirements have driven interest in RISC-V.

The following sections describe characteristics of RISC-V that make it a viable option for safety-critical development teams.

Architectural independence

One fundamental challenge in developing safety-critical systems is mitigating supply chain risks. Traditional processor architectures require licensing agreements and create vendor lock-in, which impacts long-term system maintainability and cost.

RISC-V’s open model provides several advantages. The ability to work with multiple silicon vendors reduces single-point-of-failure risks in the supply chain. This is particularly important for long-lifecycle applications in aerospace and automotive, where systems may need to be maintained and supported for decades. When using RISC-V, manufacturers expand their options for semiconductor suppliers and development tool ecosystems, providing flexibility in responding to supply chain issues.

Customization to meet safety-critical requirements

RISC-V’s modular design philosophy allows silicon vendors and system architects to implement custom features at the hardware level. This capability helps address specific safety requirements across mission-specific applications and certification standards, such as:

  • Custom error detection and correction.
  • Hardware-level monitoring and diagnostic capabilities.
  • Low-latency, deterministic execution features for real-time requirements.

Additionally, RISC-V silicon vendors have products supporting harsh environments, such as processors with radiation hardening and electromagnetic pulse (EMP) protection for space applications.

Memory management

One of RISC-V’s distinguishing features is its approach to cache memory management, helping developers of safety-critical applications requiring deterministic behavior. The ability to implement level 2 cache memory mapping as RAM gives developers greater control over system latency, a crucial factor in real-time safety-critical applications.

This capability addresses challenges covered in aviation safety guidelines like EASA AMC 20-193 and FAA AC 20-193. By providing better solutions for cache contention mitigation than traditional architectures, RISC-V supports more predictable execution timing—a critical requirement for safety certification.

Dissimilar redundancy

Safety-critical systems requiring design assurance level A (DAL-A) certification under DO-178C often implement redundancy to protect against common mode failures. RISC-V’s open architecture provides advantages in implementing dissimilar redundancy strategies:

  • Implementation of different processor configurations within the same system.
  • Diverse redundancy schemes using different vendor solutions.
  • Using different architectures in mixed-criticality systems with varying levels of safety requirements.

Performance considerations

While RISC-V may not always match the raw performance metrics of modern Arm implementations, its architecture provides several advantages specific to safety-critical applications. The ability to implement custom instructions and hardware features allows optimization for specific safety requirements without compromising overall system performance.

Key performance-related features include:

  • Deterministic execution paths for real-time applications.
  • Custom instructions for safety monitoring.
  • Efficient context switching for mixed-criticality systems.
  • Configurable memory protection units to minimize stack and data corruption.

RISC-V’s development tool ecosystem

Over the years, the maturation of development tools and verification environments for RISC-V has expanded to cover the entire software lifecycle. For example, LDRA’s target license package (TLP) for RISC-V architectures supports development and on-target testing with multi-core code coverage analysis, worst-case execution time (WCET) measurement for AMC 20-193 compliance, requirements traceability, and integration with major RISC-V development platforms. This TLP makes RISC-V ready for safety and security.

Additionally, LDRA is highly integrated with RISC-V environments, supporting dynamic testing with hardware and commercial and open-source simulation environments, including silicon-level simulation. These environments support comprehensive hardware-accurate testing and verification to develop and test software as the hardware is developed.

Industry momentum around RISC-V

A growing number of safety-certified RISC-V IP cores offer designers pre-verified components that meet stringent safety requirements. Microchip, SiFive, CAST, and other vendors have released specialized RISC-V implementations with integrated safety features, fault detection mechanisms, and redundancy capabilities tailored for automotive and aerospace applications. Vendors such as Frontgrade Gaisler add to this with radiation-hardened microprocessors and IP cores for space-based systems.

The mix of industry support, technical guidelines, and certification tools creates a positive feedback loop that accelerates RISC-V adoption in safety-critical systems, making it increasingly attractive for organizations developing next-generation applications.

Jay Thomas is technical development manager for LDRA Technology, San Bruno, Calif., and has worked on embedded controls simulation, processor simulation, mission- and safety-critical flight software, and communications applications in the aerospace industry. His focus on embedded verification implementation ensures that LDRA clients in aerospace, medical, and industrial sectors are well grounded in safety-, mission-, and security-critical processes. For more information about LDRA, visit http://www.ldra.com.

 Related Content


The post Why RISC-V is a viable option for safety-critical applications appeared first on EDN.

Tracking preregulator boosts efficiency of PWM power DAC

Thu, 02/13/2025 - 16:35

This design idea revisits another: “PWM power DAC incorporates an LM317.” Like the earlier circuit, this one implements a power DAC by integrating an LM317 positive regulator into a mostly passive PWM topology. It exploits the built-in features of that time-proven Bob Pease masterpiece so that its output is proportional to the guaranteed 2% precision of the LM317 internal voltage reference and is inherently protected from overloading and overheating.

Wow the engineering world with your unique design: Design Ideas Submission Guide

However, unlike the earlier design idea, which requires a separate 15v DC power input, this remake (shown in Figure 1) adds a switching boost preregulator at the input so it can run from a 5v logic rail. The previous linear design also has limited power efficiency, which actually drops to single-digit percentages when driving low voltage loads. The preregulator fixes that by tracking the input-output voltage differential across the LM317 and maintaining it at a constant 3v. This is just adequate headroom to keep the LM317 out of dropout while minimizing wasted power.

Here’s how it works.

Figure 1 LM317 and HC4053 combine to make a PWM power DAC while Q1 forces preregulator U3 to track and maintain a constant 3v U2 I/O headroom differential to improve efficiency.

As described in the earlier DI, switches U1b and U1c accept a 10-kHz PWM signal to generate a 0v to 11.25v “ADJ” control signal for the U2 regulator via feedback networks R1, R2, and R3. The incoming PWM signal is AC coupled so that U1 can “float” on U2’s output. U1c provides a balanced inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.”

Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy. This feedback arrangement makes U2’s output voltage follow this function of PWM duty factor (DF): 

Vout = 1.25 / (1 – DF(1 – R1/(R1 + R2))) = 1.25 / (1 – 0.9 DF),

as graphed in Figure 2.

Figure 2 Vout (1.25v to 12.5v) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.9 DF).

Figure 3 plots the inverse of  Figure 2, yielding the PWM DF required for any given Vout.

Figure 3 The inverse of Figure 2 or, the PWM DF required for any given Vout, where PWM DF = (1.111 – 1.389/Vout).
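If you are generating the PWM in firmware, these two curves translate directly into a pair of helper functions. The short Python sketch below is just that, a minimal sketch of the equations above (the 1.25-V reference, the 0.9 factor, and the 1.111/1.389 inverse coefficients come straight from them); the 10-bit timer resolution in the example is an assumption, not part of the design.

# Minimal sketch of the Vout/duty-factor equations above.
VREF = 1.25   # LM317 reference voltage, volts
K = 0.9       # 1 - R1/(R1 + R2), per the design equation

def vout_from_df(df):
    """Vout = 1.25/(1 - 0.9*DF), valid for DF in 0..1."""
    return VREF / (1.0 - K * df)

def df_for_vout(vout):
    """Inverse relation: DF needed for a target Vout (1.25v to 12.5v)."""
    return (1.0 - VREF / vout) / K   # equals 1.111 - 1.389/Vout

target = 5.0                          # example target output, volts
df = df_for_vout(target)
print(f"DF = {df:.4f}, check: Vout = {vout_from_df(df):.3f} V")
print(f"10-bit timer compare value (assumed resolution): {round(df * 1023)}")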

About that tracking preregulator thing: Control of U3 to maintain the 3v of headroom required to hold U2 safe from dropout relies on Q1 acting as a simple (but adequate) differential amplifier. Q1 drives U3’s Vfb voltage feedback pin to maintain Vfb = 1.245v. Therefore (where Vbe = Q1’s emitter-base bias):

Vfb/R7 = ((U2in – U2out) – Vbe)/R6
1.245v = (U2in – U2out – 0.6v)/(5100/2700)
U2in – U2out = 1.89 * 1.245v + 0.6v = 3v

Meanwhile, deducing what Q2 does is left as an exercise for the astute reader. Hint: It saves about a third of a watt over the original DI at Vout = 12v.

Note, if you want to use this circuit with a different preregulator with a different Vfb, just adjust:

R7 = R6 Vfb/2.4
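As a worked example of that scaling rule, the short sketch below computes R7 from the article’s R6 = 5.1 kΩ; the 0.8-V feedback voltage is only an assumed example for a hypothetical alternative preregulator.

# Sketch: scale R7 for a preregulator with a different feedback voltage.
# R6 = 5.1 kOhm per the article; the 0.8 V Vfb case is an assumed example.
R6 = 5100.0

def r7_for_vfb(vfb):
    """R7 = R6 * Vfb / 2.4, preserving the ~3v LM317 headroom tracking point."""
    return R6 * vfb / 2.4

print(f"Vfb = 1.245 V -> R7 = {r7_for_vfb(1.245):.0f} ohm (close to the 2.7 kOhm used)")
print(f"Vfb = 0.800 V -> R7 = {r7_for_vfb(0.800):.0f} ohm")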

In closing…

Thanks must go to reader Ashutosh for his clever suggestion to improve power DAC efficiency with a tracking regulator, also (and especially) to editor Aalyia for her creation of a Design Idea environment that encourages such free and friendly cooperation!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Tracking preregulator boosts efficiency of PWM power DAC appeared first on EDN.

The future of cybersecurity and the “living label”

Wed, 02/12/2025 - 17:02

New security standards for IoT devices are being released consistently, showing that security is no longer an afterthought in the design of embedded products. Last month, the White House launched the Cyber Trust Mark, a major step toward IoT device security built around the more robust concept of the “living label,” acknowledging the dynamic nature of security over time. The standard essentially requires qualifying devices to carry a QR code that can be scanned for security information, such as whether the device will receive ongoing software support, including security patches. Vendors of IoT products are now meant to partner with an “accredited and FCC-recognized CyberLAB to ensure it meets the program’s cybersecurity requirements,” according to the FCC.

In a conversation with Silicon Labs’ Chief Security Officer Sharon Hagi, EDN learned a bit more about this new standard, its history, and the potential future security application of this new QR code labelling scheme. 

IoT mania

In the IoT “boom” of the early 2000s that lasted well into the 2010s, companies were anxious to wirelessly enable practically every device and, when paired with the right MCU, the applications seemed endless. Use cases from home automation and smart cities to agritech and industrial automation were all explored, with supporting industry-specific or open protocols that could vary in spectrum (licensed or unlicensed), modulation technique, topology, transmit power, maximum payload size, broadcast schedule, number of devices, etc. With the growing hype and litany of hardware/protocol options, network security was still mostly discussed on the sidelines, leaving some pretty major holes for bad actors to exploit.

Cybersecurity history

With time and experience, it has become abundantly clear that IoT security is, in fact, pretty important. Undesirable outcomes like the Mirai botnet showed how many IoT devices can be infected with malware at once, enabling larger-scale attacks such as distributed denial of service (DDoS). Moreover, a common vulnerabilities and exposures (CVE) entry that lands a high score on the common vulnerability scoring system (CVSS) can potentially involve the US government’s Cybersecurity and Infrastructure Security Agency (CISA) and, if it’s not resolved, lead to fines. This just adds insult to the reputational injury a company might experience from an exploited security issue. Sharon Hagi expanded on IoT-device vulnerabilities: “These devices are in the field, so they’re subjected to different kinds of attack. There’s software-based attacks, remote attacks over the network, and physical attacks like side-channel attacks, glitching, and fault injection,” speaking to how Silicon Labs included countermeasures for many of these attacks. The company’s initial developments in security centered on its “Secure Vault” technology, a dedicated security core with cryptographic functionality encapsulated within it. The core manages the root of trust (RoT) of the device, manages keys, and governs access to critical interfaces, such as the ability to lock/unlock the debug port.

Hagi went on to describe the background of the US cybersecurity standards that led to the more recent regulatory frameworks, citing the NIST 8259 specification as the foundational set of cybersecurity requirements for manufacturers to be aware of (Figure 1). Another baseline standard is the ETSI European standard (EN) 303 645 for consumer IoT devices.

Figure 1 NIST 8259A and 8259B technical capabilities and non-technical support activities for manufacturers to consider in their products. Source: NIST

Hagi expanded more on the history of the Cyber Trust Mark: “The history of the Cyber Trust Mark kind of followed right after [the establishment of NIST 8259] in 2021 during the Biden administration with Executive Order 14028,” which had to do with security measures for critical software, “and that executive order basically directed NIST to work with other federal agencies to further develop the requirements and standards around IoT cybersecurity.” He mentioned how this order specified the need for a labeling program to help consumers identify and judge the security of embedded products (Figure 2).

Figure 2 NIST labeling considerations for IoT device manufacturers where NIST recommends a binary label that is coupled with a layered approach using either a QR code or a URL that leads consumers to additional details online. Source: NIST

“After this executive order, the FCC took the lead and started implementing what we now know as the Cyber Trust Mark program,” said Hagi, mentioning that Underwriters Laboratories (UL) was the de facto certification and testing lab for compliance with the US Cyber Trust Mark program as well as the requirements of the Connectivity Standards Alliance (CSA) and its product security working group (PSWG).

Evolving security standards

In fact, the PSWG consists of over 200 companies, with promoters that include tech giants like Google, Amazon, and Apple as well as chipmakers such as Infineon, NXP Semiconductors, TI, STMicroelectronics, Nordic Semiconductor, and Silicon Labs. The aim of the PSWG is to unite the disparate emerging regional security requirements, including but not limited to the US Cyber Trust Mark, the EU’s Cyber Resilience Act (CRA) with its “CE marking,” and the Singapore Cybersecurity Labelling Scheme (CLS).

Many of the companies within the PSWG have formulated their own security measures within their chips. NXP, for instance, has its EdgeLock Assurance program, and ST has its STM32Trust security framework. TI has a dedicated product security incident response team (PSIRT) that responds to reports of security vulnerabilities in TI products, while Infineon created a Cyber Defense Center (CDC) with corresponding Computer Security Incident Response Team (CSIRT/CERT) and PSIRT teams for the same purpose. Hagi stated that Silicon Labs set itself apart by implementing security “down to the silicon level” in product design early on in the IoT development game.

These wireless SoCs and MCUs are the keystone of the IoT system, providing the intelligent compute, connectivity, and security of the product. Using more secure SoCs will inevitably ease the process of meeting the ever-changing security compliance standards. Engineers can choose to enable features such as secure boot, secure firmware updates, digitally signed updates with strong cryptographic keys, and anti-tampering, to ultimately enhance the security of their end product. 

Living label use cases

Perhaps the most interesting aspect of the interview was the potential applications of these labeling schemes and how to make them more user-friendly. “The labeling scheme could be compared to a food label,” said Hagi. “You go to the supermarket, take the product off the shelf, and it shows you the ingredients and nutritional value, and you make a decision on whether or not this is something you want to buy.” In the future, a more objectively secure product could be a pricier option than a more basic alternative; however, it would be up to the consumer to decide. The analogy only goes so far, though: while the label lists the security “ingredients” built into the product, the Cyber Trust Mark is not meant to be static, since vulnerabilities can still be discovered well after the product is manufactured.

“You might be able to see the software bill of materials (SBOM) where maybe there is a certain open source library that the product is using and there is a vulnerability that has been reported against it. And maybe, when you get home, you need to update the product with new software to make sure that the vulnerability is patched,” said Hagi as he discussed potential use cases for the label.  

The hardware BOM (HBOM) may also be very relevant in terms of security, bringing into light the entire supply chain that is involved in assembling the end product. The overall goal of the label is to incentivize companies to foster trust and accountability with transparency on both the SBOM and HBOM. 

Hagi continued down the checklist of security measures the label might include: “What is the origin and development history of the product’s security measures? Can it perform authentication? If so, what kind of authentication? What kind of cryptography does it have? Is this cryptography certified? Does the manufacturer include any guarantees? At what point will the manufacturer stop issuing security updates for the product? Does the product contain measures that would comply with people in specific jurisdictions?” These regional regulations on security do vary between, for instance, the EU’s General Data Protection Regulation (GDPR) and, of course, the US Cyber Trust Mark.

ML brings another dimension of security considerations to these devices: “The questions would then be, what sort of data does the model collect? How secure are these ML models in the device? Are they locked? Are they unlocked? Can they be modified? Can they be tampered with?” The many attributes of the models bring additional security considerations with them, along with new avenues of attack.

The future of the labeling scheme

Ultimately, putting this amount of information on a box is impossible; even more pertinent is how users are meant to interpret such a volume of information. Consumers are unlikely to really understand all the information on a robust security label, even if it were human-readable. “Another angle is providing some sort of API so that an automated system can actually interrogate this stuff,” said Hagi.

He mentioned one example of securely connecting devices from different ecosystems: “Imagine an Amazon device connecting to an Apple device. With this API, security information is fetched automatically, letting users know if it is a good idea to connect the device to the ecosystem.”

As it stands, the labelling scheme is meant to protect the consumer in a somewhat abstract sense; however, it might be difficult for the consumer to accurately understand the security measures put into the product. In order to make full use of a system like this, “it is likely that a bit of automation is necessary for consumers to make appropriate decisions just in time.” This could eventually enable consumers to make informed decisions on product purchasing, replacement, upgrades, connection to a network, and the security risks of throwing out an item that could contain private information in its memory.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology and has published works in major EE journals as well as trade publications.

Related Content

 


The post The future of cybersecurity and the “living label” appeared first on EDN.

A class of programmable rheostats

Tue, 02/11/2025 - 17:18
Basic programmable rheostats

For many variable resistor (rheostat) applications, one of the device’s terminals is connected to a voltage source VS. Such a source might be a reference DC voltage, an op-amp output carrying an AC-plus-DC signal, or even ground. If freed from the constraint of (programmable) “floating” rheostats satisfied by the recently disclosed solutions in “Synthesize precision Dpot resistances that aren’t in the catalog” and “Synthesize precision bipolar Dpot rheostats,” there is a compelling alternative approach. Yes, it’s slightly simpler in that it avoids MOSFETs and that the +5-V supply for the digital potentiometer is the only supply needed (especially if rail-to-rail input and output op amps are employed). But more importantly, it’s distinct in that it exhibits no crossover distortion when the sign of an AC signal between terminals A and VS changes.

Wow the engineering world with your unique design: Design Ideas Submission Guide

As seen in Figure 1, I’m shamelessly appropriating the same digital pot used in those other solutions. (Note the limited operating voltage range of potentiometer U2.)

Figure 1 A basic programmable rheostat leveraging the same digital pot used in other solutions.

The resistance between terminals A and voltage source VS looking into terminal A is res = R1/(1 – αa·α2·αb), where the alphas are the gains of U1a, U2, and U1b respectively. αa and αb are slightly less than unity at DC, falling in value with loop gain as frequency increases. α2 is equal to one of the numerator integers 0, 1, 2, …, 255 divided by a denominator of 256, as determined by the programming of U2.

By changing the numerator from 0 to 255, it would appear that resistor value ratios of 1:256 could be achieved. Unfortunately, U2’s integral non-linearity (INL) is specified as ±1 LSB. Strictly following this spec, operation with a numerator of 255 could drive the value of res close to infinity at DC and so should be avoided. But that’s not the only concern. For an α2 numerator value “num”, a resistance error factor EF of roughly ±1/(256 – num) can be encountered because of the ±1 LSB accuracy. To minimize uncertainty, num should be held below some maximum value (the solutions in “Synthesize precision Dpot resistances that aren’t in the catalog” and “Synthesize precision bipolar Dpot rheostats” have similar problems for small values of “num”). Another reason for such a limit is that resistance resolution is much better at lower values of “num” than at higher ones. For instance, the ratio of resistor values with numerators of 10 and 11 is 1.004. But the values of 240 and 241 yield a ratio of 1.07, and those of 250 and 251, 1.2.
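To make those numbers easy to reproduce, here is a minimal sketch that assumes the idealized DC expression res = R1/(1 − num/256) and the ±1-LSB INL bound quoted above:

```python
# Programmed resistance, worst-case error factor, and step-to-step resolution
# versus the pot code "num" (idealized: unity op-amp gains, denominator of 256).

def res(num: int, r1: float = 1e3) -> float:
    return r1 / (1.0 - num / 256.0)

def error_factor(num: int) -> float:
    return 1.0 / (256 - num)          # roughly +/- this fraction for +/-1 LSB INL

for num in (10, 192, 240, 250):
    step_ratio = res(num + 1) / res(num)    # resistance ratio of adjacent codes
    print(f"num={num:3d}  res={res(num):8.1f} ohm  EF=+/-{error_factor(num):.1%}  "
          f"next-step ratio={step_ratio:.3f}")
```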

Enhanced programmable rheostat

The simple addition of U3 and R2 in the Figure 2 circuit mitigates these problems by reducing the required maximum value of “num”. With R2 greater than R1, resistances between R1 and R2 should be implemented by having analog switch U3 select R1 rather than R2. For larger resistances, R2 should be selected.

Figure 2 Enhanced programmable rheostat that mitigates the uncertainty problems of the basic programmable rheostat by reducing the required maximum value of “num”.

To see why Figure 2 offers an enhancement, consider a requirement to provide resistance over the range of 1k to 16k. In both the Figure 1 and Figure 2 circuits, R1 would be 1k. To produce a value of 1k, “num” would be 0. For 16k, “num” in Figure 1 would be 240, yielding a maximum EF of ±1/(256 – 240), or approximately 6.3%. But in Figure 2, resistance values of 4k and above would be derived by having U3 select a 4k R2 in place of R1. The maximum required value of “num” would then be 192, and EF would be reduced by a factor of 4, to 1.6%. It will also be seen that the Figure 2 circuit significantly relaxes the op-amp performance required to limit the errors due to finite open-loop gains. To see this, some analysis is necessary. Given the maximum allowed fractional resistance error (OAerr) introduced by the op-amp pair, the expression for res above can be solved for the minimum closed-loop gains αa and αb that hold the error within OAerr.

At DC, the op-amp voltage-follower closed-loop gain α is 1/(1 + 1/a0L), where a0L is the op-amp open-loop DC gain, so the closed-loop gain requirement translates directly into a minimum acceptable a0L.
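As a rough numerical cross-check of that DC requirement, here is a minimal sketch assuming both followers have the same closed-loop gain α = 1/(1 + 1/a0L); the printed 80-dB cases line up with the DC-error column of Table 1 below:

```python
# DC resistance error caused by finite op-amp open-loop gain (a0L), assuming
# res = R_ref/(1 - alpha^2 * num/256) with both followers at alpha = 1/(1 + 1/a0L).

def dc_error(a0l_db: float, num: int, r_ref: float) -> float:
    a0l = 10 ** (a0l_db / 20.0)
    alpha = 1.0 / (1.0 + 1.0 / a0l)
    k = num / 256.0
    res_actual = r_ref / (1.0 - alpha * alpha * k)
    res_ideal = r_ref / (1.0 - k)
    return (res_ideal - res_actual) / res_ideal

print(f"Figure 1 (num=240, R1=1k), a0L=80 dB: {dc_error(80, 240, 1e3):.3%}")   # ~0.299%
print(f"Figure 2 (num=192, R2=4k), a0L=80 dB: {dc_error(80, 192, 4e3):.3%}")   # ~0.060%
```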

Enhanced programmable rheostat with AC signals

Matters are more complicated with AC signals. At a frequency f Hz, the voltage follower open loop gain HOLG(j·f) is 1 / (1/A0 + j·f/GBW), where GBW is the part’s gain-bandwidth product and j = √-1.

The closed loop gain HCLG(j·f) is 1/(1 + 1/HOLG(j·f)). Substituting HCLG(j·f) for αa and αb in the expression for res yields a fourth-order polynomial due to the real and imaginary terms of HCLG(j·f). It’s easier to solve the problem with a simulation in LTspice than to solve it algebraically.
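That said, a single frequency point can be sanity-checked numerically from the same expressions before reaching for the simulator. This sketch assumes identical op amps and the Figure 1 case (num = 240, R1 = 1 kΩ); it lands close to the 20-kHz error and phase entries in the first column of Table 1:

```python
# Complex programmed resistance at one frequency, using
# H_OLG(jf) = 1/(1/A0 + j*f/GBW) and H_CLG = 1/(1 + 1/H_OLG).
import cmath
import math

def res_at_f(f_hz: float, a0l_db: float, gbw_hz: float, num: int, r_ref: float) -> complex:
    a0 = 10 ** (a0l_db / 20.0)
    h_olg = 1.0 / (1.0 / a0 + 1j * f_hz / gbw_hz)
    h_clg = 1.0 / (1.0 + 1.0 / h_olg)            # closed-loop follower gain
    k = num / 256.0
    return r_ref / (1.0 - h_clg * h_clg * k)     # complex "resistance"

r = res_at_f(20e3, 69, 1e6, 240, 1e3)
r_ideal = 1e3 / (1.0 - 240 / 256.0)
print(f"|R| error at 20 kHz: {(r_ideal - abs(r)) / r_ideal:.1%}")        # roughly 16%
print(f"phase at 20 kHz: {math.degrees(cmath.phase(r)):.1f} degrees")    # roughly -30
```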

LTspice offers a user-specifiable op-amp called…well, “opamp”. It can be configured for user-selected values of a0L and GBW. The tool is configured as shown in Figure 3 to solve this problem.

Figure 3 LTspice can be used to determine op-amp requirements for an AC signal application.

The a0L value required for AC signals will be larger than the value required at DC. It’s suggested to start with the default a0L value of 100,000 (100 dB) and try different values of GBW. Use the results to select an op amp for the actual circuit and either simulate it if a model exists or at least update the simulation with the minimum specified values of a0L and GBW for the selected op amp.

Table 1 shows some examples of the behaviors of the circuit with different idealized op-amps. It’s clear that DC performance in either circuit is not a challenge for almost any op-amp. But it’s also evident that the AC performance of a given op-amp is notably better in the Figure 2 circuit than in that of Figure 1, and that a given error can be achieved with a lower performance and less costly op-amp in the Figure 2 circuit.

                                                  Figure 1, R1 = 1k                      Figure 2, R2 = 4k enabled
num                                               240                                    192
a0L, dB                                           69      80      80      100     100    55      80      80      100     100
GBW, MHz                                          1       10      50      10      50     1       10      50      10      50
DC resistance error due to op-amp pairs, %        1.000   0.299   0.299   0.030   0.030  0.999   0.060   0.060   0.006   0.006
20-kHz resistance error due to op-amp pairs, %    15.952  0.495   0.307   0.227   0.038  2.024   0.071   0.060   0.017   0.006
20-kHz phase shift, degrees                       -30.22  -3.42   -0.69   -3.43   -0.69  -6.71   -0.69   -0.14   -0.69   -0.14
Equivalent parallel capacitance at 20 kHz, pF     84.3    9.5     1.9     9.5     1.9    18.5    1.9     0.4     1.9     0.4

Table 1 Examples of the circuits’ behavior producing 16kΩ with various op-amp parameters.

Note: The cascade of the two op-amps with their AC phase shifts means that there is an effective capacitance in parallel with the resistance R created by the circuits. Because the two op-amps create a second-order system, there is no equivalent broadband capacitance. However, a capacitance C at a spot frequency f Hz can be calculated from the phase shift Φ radians at that frequency: C = tan(Φ)/(2·π·f·R). Simulations have shown that over the full range of resistances and operating frequencies of the examples listed in the table, phase shift magnitudes are less than 70 degrees.

The approach taken in Figure 2 can be generalized by supporting not just two but four or more different resistors. Doing so further minimizes both op-amp performance requirements and worst-case errors by reducing the maximum required value of “num”. It also extends the range of resistor values achievable for a given error budget.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content


The post A class of programmable rheostats appeared first on EDN.

How to measure PSRR of PMICs

Tue, 02/11/2025 - 02:12

Ensuring stable power management is crucial in modern electronics, and the power supply rejection ratio (PSRR) plays a key role in achieving this. This article serves as a practical guide to measuring PSRR for power management ICs (PMICs), offering clear and comprehensive instructions.

PSRR reflects a circuit’s ability to reject fluctuations in its power supply voltage, directly impacting performance and reliability. By understanding and accurately measuring this parameter, engineers can design more robust systems that maintain consistent operation even under varying power conditions.

Figure 1 Here is the general methodology to measure PSRR. Source: Renesas

PSRR is a vital parameter that assesses an LDO’s capability to maintain a consistent output voltage amidst variations in the input power supply. Achieving high PSRR is crucial in scenarios in which the input power supply experiences fluctuations, thereby ensuring the dependability of the output voltage. Figure 1 below illustrates the general methodology for measuring PSRR.

The mathematical expression to calculate the PSRR value is:

PSRR = 20·log10(VIN/VOUT)

Where VIN and VOUT are the AC ripple amplitudes of the input and output voltages, respectively.
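As a trivial illustration of that formula, here is a short helper that converts measured input and output ripple amplitudes into a PSRR figure; the variable names and example values are only illustrative:

```python
import math

def psrr_db(vin_ripple: float, vout_ripple: float) -> float:
    """PSRR = 20*log10(Vin_ripple / Vout_ripple); both in the same units."""
    return 20.0 * math.log10(vin_ripple / vout_ripple)

# 200 mVpp of injected input ripple and 2 mVpp measured at the output -> 40 dB
print(psrr_db(200e-3, 2e-3))
```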

Equipment and setup

To ensure an accurate measurement of the PSRR, it’s essential to set up the test environment with precision. The following design outlines the use of the listed equipment to establish a robust and reliable test configuration.

First, connect the power supply—in our case it’s a Keithley 2460—to the input of the Picotest J2120A line injector. The power supply should be configured to generate a stable DC voltage while the AC ripple component is provided by a Bode 100 network analyzer output using the J2120A line injector to simulate power supply variations.

Note that the J2120A line injector includes an internally biased N-channel MOSFET. This means that there is a voltage drop between the J2120A input and output. The voltage drop is non-linear, and its dependence on load current is shown in Figure 2. As a result, each time the load current is adjusted, the source power supply must also be adjusted to maintain a constant DC output voltage at the J2120A terminals.

Figure 2 J2120A’s resistance and voltage drop is shown versus output current. Source: Renesas

For example, to get 1.2 V at the input of the LDO regulator, it might be necessary, depending on the load current, to set the voltage at the input of the line injector anywhere from 2.5 V to 3.5 V. The MOSFET operates open loop so as not to become unstable when connected to the external regulator.

Next, a digital multimeter is used to monitor both the input and output voltages of the PMIC. Ensure that proper grounding is used, and minimal interference is present in the connections to maintain measurement integrity.

Finally, a Bode 100 from Omicron Lab is used to record and analyze the measurements. This data can be used to compute the PSRR values and evaluate the PMIC’s ability to maintain a stable output despite variations in the input supply.

By carefully following this setup, one can ensure accurate and reliable PSRR measurements, contributing to the development of high-performance and dependable electronic systems.

Table 1: Here is an outline of the instruments used in PSRR measurements. Source: Renesas

Table 2 See the test conditions for LDOs. Source: Renesas

Settings for PSRR bench measurements setup

Figure 3 Block diagram shows the key building blocks of PSRR bench measurement. Source: Renesas

The PSRR measurement is performed with the Bode 100. The Gain/Phase measurement type should be chosen in the Bode Analyzer Suite software, as shown in Figure 4.

Figure 4 Start menu is shown in the Bode Analyzer Suite software. Source: Renesas

Set the Trace 1 format to Magnitude (dB).

Figure 5 This is how to set Trace 1. Source: Renesas

To get the target PSRR measurement, choose the following settings in the “Hardware Setup”:

  1. Frequency: Change the Start frequency to “10 Hz” and Stop frequency to “10 MHz”.
  2. Source mode: Choose between Auto off or Always on. In Auto off mode, the source will be automatically turned off whenever it’s not used (when a measurement is stopped). In Always on mode, the signal source stays on after the measurement has finished. This means that the last frequency point in a sweep measurement defines the signal source frequency and level.
  3. Source level: Set the constant source level to “-16 dBm” or higher for the output level. The unit can be changed in the options; by default, the Bode 100 uses dBm as the output level unit. 1 dBm equals 1 mW at a 50 Ω load. “Vpp” can be chosen to display the output voltage in peak-to-peak volts (see the conversion sketch after this list). Note that the internal source voltage is two times higher than the displayed value, which is valid when a 50 Ω load is connected to the output.
  4. Attenuator: Set the input attenuators 20 dB for Receiver 1 (Channel 1) and 0 dB for Receiver 2 (Channel 2).
  5. Receiver bandwidth: Select the receiver bandwidth used for the measurement. Higher receiver bandwidth increases the measurement speed. Reduce the receiver bandwidth to reduce noise and to catch narrow-band resonances.
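Because the source level is specified in dBm into 50 Ω, a quick conversion to peak-to-peak voltage can help when judging the injected ripple. This sketch assumes only P = 10^(dBm/10) mW and Vpp = 2·√2·√(P·R) for a sine into a matched 50 Ω load:

```python
import math

def dbm_to_vpp(level_dbm: float, load_ohms: float = 50.0) -> float:
    """Peak-to-peak sine voltage delivered to the load for a given dBm level."""
    p_watts = 1e-3 * 10 ** (level_dbm / 10.0)
    v_rms = math.sqrt(p_watts * load_ohms)
    return 2.0 * math.sqrt(2.0) * v_rms

print(f"{dbm_to_vpp(-16):.3f} Vpp")   # about 0.1 Vpp into 50 ohms
```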

Figure 6 The above diagram shows hardware setup in Gain/Phase Measurement mode and measurement configuration. Source: Renesas

Before starting the measurement, the Bode 100 needs to be calibrated. This will ensure the accuracy of the measurements. Press the “Full Range Calibration” button as shown in Figure 7. To achieve maximum accuracy, do not change the attenuators after external calibration is performed.

Figure 7 Press the “Full Range Calibration” button to ensure measurement accuracy. Source: Renesas

Figure 8 The Full Range Calibration window. Source: Renesas

Connect OUTPUT, CH1, and CH2 as shown below and perform the calibration by pressing the Start button.

Figure 9 In the calibration setup, connect OUTPUT, CH1, and CH2, and press the Start button. Source: Renesas

Figure 10 The calibration window after calibration has been performed. Source: Renesas

For all LDOs:

  1. The input capacitor will filter out some of the signals injected into the LDO, so it’s best to remove the input capacitors for the tested LDO or keep one as small as possible.
  2. Configure the network analyzer; use the power supply to power the line injector and connect the output of the network analyzer to the oscillator (OSC) input of the line injector.
  3. Power up the device under test (DUT) and configure the tested LDO’s output voltage. To prevent damage to the PMIC, the LDO’s input voltage should be less than or equal to the max input voltage. It’s highly recommended to power up the LDO without a resistive load, then apply the load and adjust the input voltage.
  4. Configure the LDO VOUT as specified in Table 2.
  5. Enable the LDO under test and use a voltmeter to check the output voltage.
  6. To ensure that the start-up current limit does not prevent the LDO from starting correctly, connect the resistive load to the LDO once the VOUT voltage has reached its max level.
  7. Adjust the voltage at the J2120A OUT terminals to their target VIN.
  8. Connect the first channel (CH1) of the network analyzer to the input of the LDO under test using a short coaxial cable.
  9. Connect the second channel (CH2) of the network analyzer to the output of the LDO under test using a short coaxial cable.
  10. Monitor the output voltage of the line injector on an oscilloscope. Perform a frequency sweep and check that the minimum input voltage and an appropriate peak to peak level for test are achieved. Make sure that the AC component is 200 mVpp or lower.

Figure 11 This simplified example shows headroom impact on the ripple magnitude. Source: Renesas

Note that headroom for the PSRR is not the same as the dropout voltage parameter (Vdo) specified in the datasheets (see Figure 11). Headroom in the context of PSRR refers to the additional voltage margin above the output voltage that an LDO requires to effectively reject variations in the input voltage.

Essentially, it ensures that the LDO can maintain a stable output despite fluctuations in the input power supply. Dropout voltage (Vdo), on the other hand, is a specific parameter defined in the datasheets of LDOs.

It’s the minimum difference between the input voltage (VIN) and the output voltage (VOUT) at which the LDO can still regulate the output voltage correctly under static DC conditions. When the input voltage drops below this minimum threshold, the LDO can no longer maintain the specified output voltage, leading to potential performance issues.

Figure 12 Example highlights applied ripple and its magnitude with DC offset for LDO’s input. Source: Renesas

  11. Set up the network analyzer by using cursors to measure the PSRR at each required frequency (1 kHz, 100 kHz, and 1 MHz). Add more cursors if needed to measure peaks, as shown in Figure 13.

Figure 13 This is how design engineers can work with cursors. Source: Renesas

  12. Capture images for each measured condition.

Figure 14 Example shows captured PSRR graph for the SLG51003 LDO. Source: Renesas

Figure 15 Bench measurement setup is shown for the SLG51003 PSRR.

Clear and precise PSRR measurement

This methodology provides a clear and precise approach for measuring the PSRR for the SLG5100X family of PMICs using the Omicron Lab Bode 100 and Picotest J2120A. Accurate PSRR measurements in the 10 Hz to 10 MHz frequency range are crucial for validating LDO performance and ensuring robust power management.

The accompanying figures serve as a valuable reference for setup and interpretation, while strict adherence to these guidelines enhances measurement reliability. By following this framework, engineers can achieve high-quality PSRR assessments, ultimately contributing to more efficient and reliable power management solutions.

Oleh Yakymchuk is applications engineer at Renesas Electronics’ office in Lviv, Ukraine.

Related Content


The post How to measure PSRR of PMICs appeared first on EDN.

LED headlights: Thank goodness for the bright(nes)s

Mon, 02/10/2025 - 16:57

My wife’s 2018 Land Rover Discovery looks something like this:

with at least one important difference, related (albeit not directly) to the topic of this writeup: hers doesn’t have fog lights. They’re the tiny shiny things at the upper corners of the front bumper of the “stock” photo, just below the air intake “scoops”. In her case, bumper-colored plastic pieces take their places (and the on/off control switch normally at the steering wheel doesn’t exist either, of course, nor apparently does the intermediary wiring harness).

More generally, the from-factory headlights were ridiculously dim, yellow-color-temperature things, halogen-based and H7 in form factor. This vehicle, unlike most (I think), uses two identical pairs of H7 bulbs, albeit aimed differently: one for the “low” (i.e., “dipped” or “driving”) set and the other for the “high” (i.e., “full” or “bright”) set. Land Rover didn’t switch to LED-based headlights until 2021, but the halogens were apparently so bad that at least one older-generation owner contracted with a shop to update them with the newer illumination sets, both front and rear.

On a hunch, I purchased a set of Auxito LED-based replacement bulbs from Amazon for ~$30, figuring them to be a fiscally rationalizable experiment regardless of whether the outcome was a success. These were the fanless 26-W, 800-lumen variant found on the manufacturer’s website:

Here’s an accompanying “stock” video:

Auxito also sells a brighter (1000 lumens), more power-demanding (30W) variant with a nifty-looking integrated cooling fan:

When they arrived, they slipped right into where the halogens had been; the removal-and-replacement process was a bit tedious but not at all difficult. I’d been pre-warned from my preparatory research (upfront in the manufacturer’s product page documentation both on its and Amazon’s websites, in fact, which was refreshing) that dropping in LEDs in place of halogens can cause various issues, resulting from their ongoing connections to the vehicle’s CAN bus communication network system, for example:

LED upgrade lights are great. They’re rugged, they last far longer than conventional bulbs, and they offer brilliant illumination. But in some vehicles, they can also trigger a false bulb failure warning. Some cars use the vehicle’s computer network (CANbus) system to verify the functioning of the vehicle’s lights. Because LED bulbs have a lower wattage and draw much less power than conventional bulbs, when the system runs a check, the electrical resistance of an LED may be too low to be detected. This creates a false warning that one of the lights has failed.

Here’s the other common problem:

A lot of auto manufacturers use PWM (or pulse width modulation) to precisely control the voltage to a bulb. One of the benefits of doing this is to improve bulb life. These quick, voltage pulses (PWM) do not give a bulb filament time to cool down and dim, so for halogen bulbs the pulses are not noticeable. However, with an LED bulb, these pulses are enough to turn the LEDs off and on very quickly, which results in a flashing of the light.

Philips sells LED CANbus adapters which claim to fix both issues. Auxito also says that it will ship free adapters to customers who encounter problems, albeit noting (in charming broken English):

Built-in upgraded CANBUS decoder, AUXITO H7 bulbs is perfectly compatible with 98% of vehicles. A few extremely sensitive vehicles may require an additional decoder.

I’m delighted to be able to say—hopefully not jinxing myself in the process—that I’m apparently one of those 98%. The LED replacement bulbs fired up glitch-free and have remained problem-free for the multiple months that we’ve used them so far. The color temperature mismatch between them (6500K) and the still-present halogen high beams, which we sometimes still need to use and which I’m guessing are closer to 3000K, results in a merged illumination pattern beyond the hood that admittedly looks a bit odd, but I’ve bought a second Auxito LED H7 headlight set that I plan to install in the high-beam bulb sockets soon (I promise, honey…).

I’ve also bought a third set, actually, one bulb for use as a spare and the other for future-teardown purposes. In visual sneak-peek preparation, here are some photos of an original halogen bulb, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

and the LED-based successor, first boxed (I’m only showing the meaningful-info box sides):

and then standalone:

For above-and-beyond (if I do say so myself) reader-service purposes, I also scanned the user manual, whose PDF you can find here:

And with that hopefully illuminating (see what I did there?) info out of the way, I’ll close for today, with an as-usual invitation for reader-shared thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post LED headlights: Thank goodness for the bright(nes)s appeared first on EDN.

How shielding protects electronic designs from EMI/RFI disruptions

Mon, 02/10/2025 - 11:09

Electromagnetic interference (EMI) and radiofrequency interference (RFI) refer to electromagnetically generated noise that can interfere with products’ performance and reliability. RFI is a subset of EMI that refers to radiated emissions such as those from power or communication lines.

Design engineers must strategically reduce EMI and RFI at every opportunity, especially since some sources are naturally occurring and impossible to remove from the environment.

Engineering professionals should begin by using design choices that mitigate these unwanted effects. For example, trace placement can reduce undesirable interference since a PCB’s traces carry current from drivers and receivers.

One widely established tip is to keep the distance between traces at least several times the width of individual traces. Similarly, designers should separate signal-related traces from others, including those associated with audio or video transmission.

Design-centered tools can help all parties test different possibilities to find the ones most likely to work in the real world. One such tool allows designers to ease the transition from design to manufacturing by creating a digital twin of the production environment. This format-agnostic platform also enables real-time collaboration, shortening the time required for clients to approve designs.

Select appropriate internal filters and shields

Besides following design-related best practices, professionals building electronics while reducing EMI and RFI must identify opportunities to suppress and deflect them without adding too much weight to the devices. That is especially important in cases where people build electronics for aerospace and automotive applications.

The general process is identifying trouble spots after making all appropriate design-related improvements. Engineers should then proceed by applying filtering circuits on the inputs and outputs. Next, they can apply shields. These products surround at-risk components, creating a protective barrier.

The shields are typically metal or polyester, and engineers use industrial machines to form them into the desired shapes. While filters allow harmless frequencies to pass through them, shields block and redistribute EMI to mitigate their potentially dangerous effects.

A particular point is that filters only block EMI moving through physical connections such as cables; radiated EMI travels through the air and needs no entry point. Additionally, designers will get the best results by scrutinizing how the electronic device functions and acting accordingly. One possibility is to install filters at heat sinks to control the EMI that would otherwise come through the holes that promote thermal management.

Consider electrospray technologies

An emerging EMI protection technique is to deposit electrosprayed materials onto surfaces or components. In addition to its cost-effectiveness, this solution offers customizable results because engineers can add as much material as their applications require.

Although many of these efforts are in the early stages, design engineers should monitor their progress and consider how to incorporate them into their future products. One example comes from a mechanical engineering doctoral student exploring how to apply protective layers to electronics by dispensing aerosols or liquids onto them with electricity. This approach could be especially valuable to manufacturers that create increasingly small products for which traditional shielding techniques are less suitable.

The student argues that electrospray technologies for shielding can open opportunities for protecting miniaturized devices. Her technique deposits a silver layer onto the surface, minimizing the space and costs required to protect devices.

This strategy and similar efforts could also be ideal for engineers who want to safeguard delicate electronics without adding weight. Many consumers perceive lightweight, tiny devices as more innovative than heavier, larger ones. Electrospray caters to these devices while meeting modern manufacturing requirements.

Take project-specific approaches

In addition to following these tips, electronics designers must always engage with their clients throughout their work. Such engagements allow engineering professionals to understand specific needs and identify the most effective ways to achieve successful outcomes.

What worked well in one case may be less suitable for others that seem similar. However, client feedback ensures everyone is on the same page.

Ellie Gabel is a freelance writer as well as associate editor at Revolutionized.

 

 

Related Content


The post How shielding protects electronic designs from EMI/RFI disruptions appeared first on EDN.

Basic oscilloscope operation

Fri, 02/07/2025 - 21:19

Whether you just received a new oscilloscope or just got access to a revered lab instrument that you are unfamiliar with, there is a learning curve associated with using the instrument. Having run a technical support operation for a major oscilloscope supplier, I know that most technical people don’t read manuals. This article (shorter than the typical user manual) is intended to help those who need to use the instrument right away get the instrument up and running.

The front panel

Oscilloscopes from different manufacturers look different, but they all have many common elements. If the oscilloscope has a front panel, it will have basic controls for vertical, horizontal, and trigger settings like the instrument shown in Figure 1.

Figure 1 A typical oscilloscope front panel with controls for vertical, horizontal, and trigger settings. Source: Teledyne LeCroy

Many controls have alternate actions evoked by pushing or, in some cases, pulling the knob. These are generally marked on the panel.

Many oscilloscopes, like this one, use the Windows operating system and can be controlled from the display using a pointing device or a touch screen. Feel free to use any interface that works for you.

Getting a waveform on the screen

It’s crucial to note that digital oscilloscopes retain their last settings. If you’re using the oscilloscope for the first time, it’s smart practice to recall its default setup. This step ensures you’re starting from a known state. Some oscilloscopes, like the one used here, have a dedicated button on the front panel; recalling the default setup can also be done using a pulldown menu (Figure 2).

Figure 2 Recalling the default setup of an oscilloscope places the instrument in a known operational state. Source: Arthur Pini

In the example shown, the default setting is recalled from the “Recall Setup” dialog box using the Recall Default button, highlighted in orange.

Auto Setup

Using the oscilloscope’s “Auto Setup” feature to obtain a waveform on the screen from the default state is simple.

As a basic experiment, connect channel 1 of the oscilloscope to the calibration signal on the oscilloscope’s front panel using one of the high-impedance probes included with the oscilloscope. This calibration signal is a low-frequency square wave used to adjust the low-frequency compensation of the probe’s attenuator.

Press the oscilloscope’s Auto Setup button on the front panel or use the Vertical pulldown menu to select Auto Setup (Figure 3).

Figure 3 The “Auto Setup” is either a front-panel push button or a selection on a pulldown menu, as shown here. Source: Arthur Pini

“Auto Setup” in this instrument scans all the input channels in order and configures the instrument based on the first signal it detects. Based on the detected signal(s), the vertical scale (volts/div) and vertical offset are adjusted. The trigger is set to an edge trigger with a trigger level of fifty percent of the amplitude of the first signal found. The horizontal timebase (time/div) is set so that at least ten signal cycles are displayed on the display screen.

Different oscilloscopes handle this function differently. In some, the signal must be connected to channel 1. Other models, like the one shown, will search through all the channels and set up the first signal found. “Auto Setup” in all oscilloscopes should get you to a point where you have a waveform on the screen.

The basic controls—vertical settings

The basic oscilloscope controls include vertical, horizontal, or timebase and trigger. In Figure 3, these appear, in that order from left to right, as pull-down menus on the menu bar. These controls are duplicated on the front panel and grouped under the same headings. Either of the control types can be used.

Vertical controls, either on the front panel or on the screen, are used to set up the individual input channels. Selecting a channel creates a dialog box for controlling the corresponding channel. The vertical channel controls include vertical sensitivity (volts/div) and offset. The channel setup controls include coupling, bandwidth, rescaling, and processing (Figure 4).

Figure 4 The vertical channel setup includes the principal controls, including vertical scaling, offset, and coupling. Source: Arthur Pini

The vertical scaling should be set so that the waveform is as close to full scale as possible to maximize the oscilloscope’s dynamic range. This oscilloscope has a “Find Scale” function icon in the channel setup, which will scale the vertical gain and offset to get the waveform centered on the screen with a reasonable amplitude. It is good practice not to overdrive the input amplifier by having the waveform exceed the selected full-scale voltage limits. Use the zoom display to expand the trace for a closer look at tiny features. The offset control centers the waveform on the display. Coupling offers a choice of 50 Ω DC coupling, or a 1 MΩ input termination with AC or DC coupling.

The other controls include a selection of input bandwidth limiting filters, the ability to rescale the voltage reading based on the probe attenuation factor, and the ability to rescale the amplitude reading in a sensor or a probe’s units of measure (e.g., amperes for a current probe). Signal processing in the form of averaging or digital (noise) filtering can be applied to improve the signal-to-noise ratio of the acquired signals.

Channel annotation boxes, like the one labeled C1 in Figure 4, show the vertical scale setting, offset, and coupling for channel 1. When the cursors are turned on, cursor amplitude readouts can also appear in this box.

Timebase settings

Selecting “Horizontal Settings” from the “Timebase” pull-down menu or using the front panel horizontal controls adjusts the horizontal scaling and delay of the horizontal axis, the acquisition sampling modes, the acquisition memory length, and the sampling rate (Figure 5).

Figure 5 The timebase setup controls the sampling mode, horizontal scale, time delay, and acquisition setup. Source: Arthur Pini

The “Horizontal” controls simultaneously affect all the input channels. Generally, the three standard sampling modes are real-time, sequence, and roll. Real-time is the normal default mode, sampling the input signal at the sampling rate for the entire duration set by the horizontal scale. Sequence mode breaks the acquisition memory into a user-set number of segments, then triggers and acquires a signal in each segment before displaying them. Sequence-mode acquisitions provide minimal dead time between acquisitions. Roll mode is for long acquisition times with low sampling rates; data is written to the display as it is acquired, producing a display that looks like a strip-chart recorder.

The time per division (T/div) setting sets the horizontal time scale. The acquisition duration will be ten times the T/div setting. The acquisition delay shifts the trigger point on the display. The default delay is zero. Negative delays shift the trace to the left, and positive delays shift it to the right.

The “Maximum Sample Points” field sets the maximum length of the acquisition memory. By selecting “Set Maximum Memory”, the memory length varies as the T/div setting is changed until the maximum memory is allocated. Beyond that point, increasing the T/div will cause the sampling rate to drop. Basically, the time duration of the acquisition is equal to the number of samples in the memory divided by the sampling rate. If the fixed sampling rate mode is selected, the oscilloscope sampling rate will remain at the user-entered sampling rate as the T/div setting changes. The T/div setting will be restricted to settings compatible with the selected sampling rate.

The sample rate also affects the span of the fast Fourier transform (FFT) math operation, while the time duration of the acquisition affects its frequency resolution.
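As a rough illustration of those relationships, here is a sketch that assumes 10 horizontal divisions, an FFT span of half the sample rate, and a frequency resolution equal to the reciprocal of the acquisition duration; specific instruments may differ:

```python
def acquisition_params(t_per_div: float, num_samples: int, divisions: int = 10):
    """Derive duration, sample rate, FFT span, and FFT resolution."""
    duration = t_per_div * divisions            # total acquired time window, s
    sample_rate = num_samples / duration        # samples per second
    fft_span = sample_rate / 2.0                # Nyquist limit, Hz
    fft_resolution = 1.0 / duration             # frequency-bin spacing, Hz
    return duration, sample_rate, fft_span, fft_resolution

d, fs, span, res = acquisition_params(t_per_div=1e-3, num_samples=1_000_000)
print(f"duration={d*1e3:.0f} ms  rate={fs/1e6:.0f} MS/s  "
      f"FFT span={span/1e6:.0f} MHz  resolution={res:.0f} Hz")
```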

This oscilloscope allows the user to select the number of active channels. Note that the memory is shared among the active channels.   

The “Navigation Reference” setting controls how the oscilloscope behaves when you adjust T/div. The centered (50%) selection keeps the current center time point fixed, and other events move about the center as T/div changes. With this setting, the trigger point could move off the grid as the scale changes. The “Lock to Trigger” setting holds the trigger point location fixed. The trigger event remains in place as T/div changes, while other events move about the trigger location.

Basic trigger settings

Oscilloscopes require a trigger, usually derived from or synchronous with the acquired waveform. The function of the trigger is to allow the acquired waveform to be displayed stably. The trigger setup, either on the front panel or via the “Trigger” pulldown, provides access to the trigger setup dialog box (Figure 6).

Figure 6 The basic setup for an edge trigger will allow the acquired waveform to be displayed stably. Source: Arthur Pini

The edge trigger is the traditional default trigger type. In edge trigger, the scope is triggered when the source trace crosses the trigger threshold voltage level with the user-specified positive or negative slope. Trigger sources can be any input channel, or an external trigger applied to the EXT. input. Edge trigger is the most commonly used trigger method and is selected in the figure. The current scope settings shown use channel 1 as the trigger source. The trigger is DC coupled with a trigger threshold level of nominally 500 millivolts (mV) and a positive slope. Note the “Find Level” button in the “Level” field will automatically find the trigger level of the source signal. The trigger annotation box on the right side of the screen summarizes selected trigger settings.

The trigger mode, which can be stop, automatic (auto), normal, or single, is selected from the trigger pulldown menu. The trigger mode determines how often the instrument acquires a signal. The default trigger mode is auto; in this mode, if a trigger event does not occur within a preset time period, one will be forced. This guarantees that something will be displayed. Normal trigger mode arms the oscilloscope for a trigger. When the trigger event occurs, it acquires a trace which is then displayed. After the acquisition is complete, the trigger automatically re-arms the instrument for the next trigger. Traces are displayed continuously as the trigger events occur. If there are no trigger events, acquisitions stop until one occurs.

In single mode, the user arms the trigger manually. The oscilloscope waits until the trigger event occurs and makes one acquisition, which is displayed. It then stops until it is re-armed. If a valid trigger does not occur, invoking Single a second time will force a trigger and display the acquisition. Stop mode ceases acquisitions until one of the other three modes is invoked. Other triggers are available for more complex triggering requirements; however, they are beyond the scope of this article.

Display

The oscilloscope display is controlled from the display pull-down menu. The type of display can be selected from the pull-down, or the “Display Setup” can be opened (Figure 7).

Figure 7 “Display Setup” allows for the selection of the number of grids and other display-related settings. This example shows the selection of a quad grid with four traces. Source: Arthur Pini

This oscilloscope allows the user to select the number of displayed grids. There is also an “Auto Grid” selection, which turns on a new grid when each trace is activated. Multiple traces can be located in each grid, allowing comparison of the waveforms. Having a single trace in each grid provides an unimpeded view while maintaining the full dynamic range of the acquisition. In addition to normal amplitude versus time displays, the “Display Setup” includes cross plots of two traces producing an X-Y plot.

Display expansion-zoom

Zoom magnifies the view of a trace horizontally and vertically. The traditional method to leverage the zoom functions uses the pull-down “Math” menu to open “Zoom Setup” as shown in Figure 8.

Figure 8 Zoom traces can be turned on using the Zoom Setup under the Math pull-down menu. Source: Arthur Pini

Many oscilloscopes have a “Zoom” button on the front panel to open a zoom trace for each displayed waveform. Oscilloscopes with touch screens support drop and drag zoom. Touch the trace near the area to be expanded and then drag the finger diagonally. A box will be displayed; continue dragging your finger until the box encloses the area to be expanded. Remove the finger, and the zoom trace can be selected to show the expanded waveform.

A quick start guide

This should get you started. Most Windows-based oscilloscopes have built-in help screens that may be context-sensitive and provide helpful information about settings. If you get stuck, contact the manufacturer’s customer service line; they will get you going quickly. If all else fails, consider reading the manual.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.

 Related Content


The post Basic oscilloscope operation appeared first on EDN.

Rad-tolerant RF switch works up to 50 GHz

Thu, 02/06/2025 - 21:22

Teledyne’s TDSW050A2T wideband RF switch operates from DC to 50 GHz with low insertion loss and high isolation. The radiation-tolerant device, fabricated with 150-nm pHEMT InGaAs technology, is well-suited for complex aerospace and defense applications.

Based on a MMIC design process, the reflective SPDT switch maintains high performance across frequencies, including millimeter-wave bands. It has a typical input P1dB of 23 dBm and port isolation of 23 dB at 50 GHz. The TDSW050A2T operates from ±5-V power supplies with minimal DC power consumption and is controlled with TTL-compatible voltage levels.

The switch withstands 100 krads (Si) TID, making it useful for satellite systems exposed to radiation. It meets MIL-PRF-38534 Class K equivalency for space applications and operates over an extended temperature range of -40°C to +85°C. The TDSW050A2T is supplied as a 1.15×1.47×0.1-mm die for hybrid assembly integration.

The TDSW050A2T RF switch is available now for immediate shipment from Teledyne HiRel and authorized distributors.

TDSW050A2T product page

Teledyne HiRel Semiconductors

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Rad-tolerant RF switch works up to 50 GHz appeared first on EDN.
