EDN Network

Voice of the Engineer

Neon lamp blunder

Wed, 02/07/2024 - 16:26

There was this test system that comprised a huge row of equipment racks into which various items of test equipment would be mounted. Those items were a digital multimeter, an oscilloscope, several signal generators and so forth. Each section of the rack assembly had a neon lamp mounted at its base which was supposed to indicate that 400 Hz AC line voltage was turned on or turned off for the equipment mounted in that rack section.

Planned essentially as shown in Figure 1, the idea did not work.

Figure 1 The neon lamp indicator plan: line voltage was always present and was applied to the equipment installed within each section via a power relay, a single SPST contact set of which operated that section’s neon lamp.

Line voltage was always present but would be applied to the installed equipment within each section via a power relay, one SPST contact set of which was to operate that section’s neon lamp. The problem was that each section’s neon lamp would always stay lit, no matter the state of the relay or of the equipment power.

No neon lamp would ever go dark.

There was much ado about this, with all kinds of accusations and posturing, finger pointing, scoldings, and searching for a fall guy, but the problem itself was never solved. What had been overlooked is shown in Figure 2.

Figure 2 The culprit was the stray capacitance of the wiring harness through which each SPST contact was wired, which kept each neon lamp visibly lit.

Each SPST contact was wired through a harness, which imposed a stray capacitance across the contacts of the intended switch. When the SPST contact was open, that stray capacitance still presented a low enough impedance for AC current to flow anyway, and that current level was sufficient to keep the neon lamp visibly lit.
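
A rough impedance calculation shows how little capacitance it takes at 400 Hz. The line voltage, harness capacitance, and lamp sustaining current below are assumed values for illustration; the article doesn’t give them:

    import math

    # Assumed values for illustration only; not from the article.
    f = 400.0        # Hz, the 400 Hz AC line frequency
    v_line = 115.0   # V RMS, assumed 400 Hz line voltage
    c_stray = 1e-9   # F, assumed ~1 nF of wiring-harness stray capacitance

    # Impedance of the stray capacitance bridging the open SPST contact
    z_c = 1.0 / (2.0 * math.pi * f * c_stray)   # ~400 kohm
    i_leak = v_line / z_c                       # upper-bound leakage current

    print(f"Z of stray C at {f:.0f} Hz: {z_c / 1e3:.0f} kohm")
    print(f"Leakage current: {i_leak * 1e6:.0f} uA")
    # A neon lamp can glow visibly on the order of 100 uA of current,
    # so ~290 uA of capacitive leakage is plenty to keep it lit.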

Brilliant, huh?

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


∆Vbe differential thermometer needs no calibration

Tue, 02/06/2024 - 17:17

Differential temperature measurement is a handy way to quantify the performance of heatsinks, thermoelectric coolers (TECs), and thermal control in electronic assemblies. Figure 1 illustrates an inexpensive design for a high-resolution differential thermometer utilizing the ∆Vbe effect to make accurate measurements with ordinary uncalibrated transistors as precision temperature sensors. 

Here’s how it works.

Figure 1 Transistors Q1 and Q2 perform self-calibrated high resolution differential temperature measurements.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Diode-connected transistors Q1 and Q2 do duty as precision temperature sensors, driven by switches U1a and U1c and respective resistors R2, R3, R13, and R14. The excitation employed comprises alternating-amplitude current-mode signals in the ratio of (almost exactly):

10:1 = (100 µA via R3 and R13):(10 µA via R2 and R14).

With this specific 10:1 excitation, most every friendly small-signal transistor will produce an AC voltage signal accurately proportional to absolute temperature, with a peak-to-peak amplitude given by:

∆Vbe = Absolute Temperature / 5050 = 198.02 µV/°C.

The temperature-difference-proportional signals from Q1 and Q2 are boosted by the ~100:1-gain differential amplifier A1a and A1d, synchronously demodulated by U1b, then filtered by R11, C2, and C3 to produce a DC signal of 20 mV/°C. This is then scaled by a factor of 2.5 by A1c to produce the final Q1–Q2 differential temperature signal output of 50 mV/°C: positive for Q1 warmer than Q2, negative for Q2 warmer than Q1.
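
These numbers fall straight out of the diode equation. Here’s a quick sanity check of the ~198 µV/°C constant (the article’s 1/5050) and the downstream gain scaling, a sketch from physical constants rather than anything in the design files:

    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    q = 1.602177e-19   # electron charge, C

    # Delta-Vbe for a 10:1 current ratio: dVbe = (kT/q) * ln(10),
    # which is linear in absolute temperature.
    dvbe_slope = (k / q) * math.log(10)               # V per kelvin
    print(f"dVbe slope: {dvbe_slope * 1e6:.1f} uV/K")  # ~198 uV/K, i.e. ~1/5050

    # Signal chain: ~101x differential gain (100k/10k + 1), then 2.5x scaling
    demod_out = dvbe_slope * 101    # ~20 mV per degree C of Q1-Q2 difference
    final_out = demod_out * 2.5     # ~50 mV per degree C at the output
    print(f"Demodulator: {demod_out * 1e3:.1f} mV/C, output: {final_out * 1e3:.1f} mV/C")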

Some gritty design minutiae are:

  1. Although the modulation-current setting resistors are in an exact 10:1 ratio, the resulting modulation-current ratio isn’t quite: the ∆Vbe signal itself subtracts slightly from the 100 µA half-cycle, which reduces the actual current ratio from exactly 10:1 to about 9.9:1. This cuts the ∆Vbe temperature signal by approximately 1%.
  2. Luckily, the gain of the A1a/d amplifier isn’t exactly the advertised 100 either, but is actually (100k/10k + 1) = 101. This +1% “error” neatly cancels the ∆Vbe signal’s -1% “error”, resulting in a final, acceptably accurate 20 mV/°C demodulator output.
  3. The modulating/demodulating frequency Fc generated by the A1b oscillator is deliberately set by the R4C1 time constant to be half the power mains frequency (30 Hz for 60 Hz power and 25 Hz for 50 Hz) via the choice of R4 (160 kΩ for 60 Hz and 200 kΩ for 50 Hz). This averages a couple of mains-frequency cycles into each temperature measurement and thus improves immunity to stray pickup of power-line coupled noise. It’s a useful trick because some differential-thermometry applications may involve noise-radiating, mains-frequency-powered heaters. For convenience, the R5/R6 ratio was chosen so that Fc = 1/(2R4C1).
  4. Resistor values adorned with an asterisk in the schematic denote precision metal-film types. Current-ratio-setting R2, R3, R13, and R14 are particularly critical to minimizing zero error and would benefit from being 0.1% types. The others are less so and 1% tolerance is adequate. No asterisk means 5% is good enough.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Faraday to manufacture 64-bit Arm processor on Intel 1.8-nm node

Tue, 02/06/2024 - 16:16

The paths of RISC processor powerhouse Arm and x86 giant Intel finally converged when the two signed a pact in April 2023 to collaborate on manufacturing chips on Intel’s 1.8-nm process node. Now, Hsinchu, Taiwan-based contract chip designer Faraday Technology will build Arm Neoverse core-based server processors at Intel Foundry Services (IFS) using the Intel 18A process technology.

Chip design service provider Faraday is designing a 64-core processor using Arm’s Neoverse Compute Subsystems (CSS) for a wide range of applications. These include high-performance computing (HPC)-related ASICs and custom system-on-chips (SoCs) for scalable hyperscale data centers, infrastructure edge, and 5G networks. The ASIC designer won’t sell these processors itself, and it hasn’t named its end customers either.

Figure 1 Faraday’s chip manufactured on the 18A process node will be ready in the first half of 2025. Source: Intel

It’s a breakthrough for Arm to have its foot in the door for large data center chips. It’s also a design win for Arm’s Neoverse technology, which provides chip designers with whole processor subsystems rather than individual CPU or GPU cores. Faraday will use interface IPs from the Arm Total Design ecosystem as well, though no details have been provided.

Intel, though not so keen to see Arm chips in the server realm, where x86 chips dominate, still welcomes them to its brand-new IFS business. This will likely be one of the first Arm server processors manufactured in an Intel fab, and it provides Intel with an important IFS customer for its advanced fabrication node.

Intel’s 18A fabrication technology for the 1.8-nm process node—boasting gate-all-around (GAA) RibbonFET transistors and PowerVia backside power delivery—offers a 10% performance-per-watt improvement over its 20A technology for the 2-nm process node. It’s expected to be particularly suitable for data center applications.

Figure 2 The 18A fabrication technology is particularly considered suitable for data center chips. Source: Intel

Intel has already landed orders to manufacture data center chips, including one for 1.8-nm chips from the U.S. Department of Defense. Now, a notable chip designer from Taiwan brings Arm-based chips to Intel, boosting IFS’s fabrication orders as well as its credentials for data center chips.

The production of this Faraday chip is expected to be complete in the first half of 2025.


Walmart’s Mobile Scan & Go: Who it’s For, I really don’t know

Mon, 02/05/2024 - 17:47

During Amazon’s annual Prime Day (which is two days, to be precise, but I’m being pedantic) sale in mid-July last year, Walmart coincidentally (right) ran a half-off promotion for its normally $98/year competing Walmart+ membership service in conjunction with its own Walmart+ Week (four days—I know, pedantic again) sale. Copy-and-pasted from the help page:

Walmart+ is a membership that helps save you time and money. You’ll need a Walmart.com account and the Walmart app to access the money and time-saving features of membership.

 Benefits include:

  • Early access to promotions and events
  • Video Streaming with Paramount+
  • Free delivery from your store
  • Free shipping, no order minimum
  • Savings on fuel
  • Walmart Rewards
  • Mobile Scan & Go

Free shipping absent the normal $35 order minimum is nice, as is free delivery from my local store. Unfortunately, for unknown reasons, only three Walmarts in all of Colorado, none of them close to me, offer fuel service. Truth be told, though, my primary signup motivation was that my existing Paramount+ streaming service was nearing its one-year subscription renewal date, at which time the $24.99/year (plus a free Amazon Fire Stick Lite!) promotional discount would end and I’d be back to the normal $49.99/year price. Walmart+ bundles Paramount+ as one of its service offerings, and since the Walmart+ one-year promo price was the same (minus $0.99, to be pedantic) as I’d normally pay for Paramount+ standalone, the decision was easy.

But none of these was the primary motivation for this writeup. Instead, I’ll direct your attention to the last entry in the bullet list, Walmart’s Mobile Scan & Go:

Here’s the summary from Walmart’s website:

Shop & check out fast with your phone in-store. Just scan, pay, & be on your way!

  • Get Walmart Cash by easily claiming manufacturer offers as you scan
  • Check out fast at self-checkout without having to rescan each item
  • See the price of items as you go

 It’s easy in 3 simple steps!

  • Open your Walmart app: Select Scan & go from the Store Mode landing page. Make sure your location access is enabled.
  • Scan your items as you shop: Once your items are scanned, click “View cart” to verify that everything is correct.
  • Tap “Check out”: Tap the blue “Check out” button in the app & head over to self-checkout. Confirm your payment method. Scan QR code at register.

Sounds good, right? I’d agree with you, at least at first glance. And even now, after using the service with some degree of regularity over the past few months, I remain “gee-whiz” impressed with many aspects of the underlying technology. Take this excerpt, for example:

Open your Walmart app: Select Scan & go from the Store Mode landing page. Make sure your location access is enabled.

To elaborate: if you’ve enabled location services for the Walmart app on your Android or iOS device, it’ll know when you’re at a store, automatically switching the user interface to one more amenable to helping you find the aisle (and the region of that aisle) where a product you’re looking for can be found (to wit, “Store Mode”), versus the more traditional online-inventory search. And if you’re also logged into the app, it knows who you are and will, among other things, auto-bill your in-store purchases to the credit card associated with your account.

Keep in mind, however, that (IMHO) the fundamental point of the app (as well as the broader self-checkout service option) is to reduce the per-store employee headcount by shifting the bulk of the checkout labor burden to you. Which would be at least somewhat OK, putting aside the obvious unemployment rate impact, if it also translated into lower consumer prices versus just higher shareholder profits. Truly enabling you to just “Scan & Go” would also be nice. Reality unfortunately undershoots the hype, at least in the current service implementation form.

Note, for example, the “scan your items” phrase. For one thing, scanning while you’re shopping is only relevant for items with associated UPC or other barcodes. The app won’t auto-identify SpaghettiOs if you just point the smartphone camera at the pasta can, for example:

not that I’m sure you’d even want it to be able to do that, considering the potential privacy concerns in comparison to a conceptually similar but fixed-orientation camera setup at the self-checkout counter. Consider, for example, the confidentiality quagmire of a small child in the background of the image captured by your smartphone and uploaded to Walmart’s servers…

The app also, perhaps obviously, can’t by itself handle variable-priced items such as per-pound produce that must be weighed to determine the total charge; these must instead be set aside and segregated in your shopping cart for further processing at checkout. And about that self-checkout counter…it unfortunately remains an essential step in the purchase process, pragmatically ensuring that you’re not “gaming the system”. After you first scan a QR code that’s displayed on your smartphone, you then deal with any remaining items (such as the aforementioned produce) and pay. And then, as you exit the self-checkout area, there’s a Walmart employee parked there who may (or may not) double-check your receipt against the contents of your cart, including in your bags, to ensure you haven’t “forgotten” to scan anything or “accidentally” scanned a barcode for a less expensive alternative item instead.

Still, doesn’t sound too bad, does it? Well, now consider these next-level nuances, which I’m aware of from comparing the service against the Meijer Shop & Scan alternative offered back in Indiana, the state of my birth.

In upfront fairness, at least some of what follows may specifically reflect my relatively tiny local Walmart versus the larger stores “down the hill” in Denver and elsewhere (against which I haven’t yet compared), rather than being a more general critique of the service:

  • There’s no way to get a printed receipt at self-checkout; you can only view it online post-transaction completion. This one’s utterly baffling to me, given that conventional self-checkouts offer it. And speaking of which…
  • At my store, at least, you’re forced to route through the same self-checkout lines as folks who are tediously doing full self-checkouts (thereby neutering the “Go” promise); there are no dedicated, faster “Mobile Scan & Go” lines like Meijer offers with Shop & Scan.
  • Meijer also offers self-weighing stations right at the produce department, linked to the store’s app and broader service, further speeding up the final checkout step. There aren’t any at Walmart, at least at my local store, where I instead need to weigh and accept the total per-item prices at checkout.
  • Not to mention the fact that “Mobile Scan & Go” is only available to subscribers of the paid-for-by-consumer Walmart+ service! You’d think that if the company was mostly motivated to reduce headcount costs, it’d at least offer “Mobile Scan & Go” for free, as it does with conventional self-checkout. You’d think…but nope. Pay up, suckers.

First-world “problems”? Sure. Rest assured that I haven’t lost sight of my longstanding big-picture perspective. But nonetheless irritating? Absolutely.

Service “upgrades” that seemingly benefit only the provider, not also the user, are destined for rapid backlash and a speedy demise. Consumers won’t use them and may even take their entire business elsewhere. While this case study is specific to grocery store shopping, I suspect the big-picture issues it raises may also resonate with related situations in your company’s existing and/or under-consideration business plans. Don’t listen solely to the accountants, who focus predominantly-to-completely on short-term cost, revenue and profit targets, folks!

Reader thoughts are as-always welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


The profile of a power simulation tool for SiC devices

Mon, 02/05/2024 - 14:06

Power electronics design is a critical aspect of modern engineering, influencing the efficiency, reliability, and performance of numerous applications. Developing circuits that meet stringent requirements while considering manufacturing variations and worst-case scenarios demands precision and sophisticated tools.

At the same time, the landscape of power electronics design is rapidly evolving, ushering in an era of high-speed, high-efficiency components. Amidst this evolution, simulation tools need to redefine the way engineers conceptualize, design, and validate power systems. Take the Elite Power Simulator and Self-Service PLECS Model Generator (SSPMG), which allow power electronics engineers to reduce time-to-market. Collectively, these tools offer a precise depiction of a circuit’s operational behavior when using the EliteSiC line of silicon carbide (SiC) products.

Figure 1 Elite Power Simulator and Self-Service PLECS Model Generator provide a precise depiction of the operational behavior of power circuits. Source: onsemi

This simulation platform aims to empower engineers to visualize, simulate, and refine complex power electronic topologies with unparalleled ease. It does that by offering engineers a unique digital environment to test and refine their designs. Here, the underlying PLECS models and their accuracy are a critical component to the effectiveness of the Elite Power Simulator. The simulator allows engineers to upload custom PLECS models that are generated with the SSPMG.

The heart of this simulation tool is its ability to accurately simulate a wide array of power electronic topologies, including AC-DC, DC-DC, and DC-AC converters, among others. With over 40 topologies available, it provides engineers with an extensive library to explore and fine-tune their designs. For instance, in industrial applications, it supports critical systems such as fast DC charging, uninterruptible power supplies (UPS), energy storage systems (ESS), and solar inverters. Similarly, the tool is suited for onboard chargers (OBC) and traction inverter systems serving the automotive industry.

Figure 2 Engineers can select an application and topology in the Elite Power Simulator. Source: onsemi

Challenges in creating PLECS models

The traditional method of creating PLECS models in the industry relies on measurement-based loss tables aligned with manufacturer datasheets. However, this approach faces several key challenges:

  • Dependency on measurement setups: The switching energy loss data is influenced by the specific parasitics of the application layouts and circuits used, leading to variations and inaccuracies.
  • Limited data density: Conduction and switching energy loss data are often insufficiently dense, hindering accurate interpolation within PLECS and often necessitating extrapolation, which can compromise accuracy.
  • Nominal semiconductor conditions: Loss data typically represents nominal semiconductor process conditions, potentially overlooking variations and real-world scenarios.
  • Validity for hard switching only: Models derived from a datasheet’s double-pulse-generated loss data are applicable only to hard-switching topologies. They become highly inaccurate when applied to soft-switching topologies or to synchronous rectification simulations.

These challenges associated with the conventional approach of depending on measurement-based loss tables for PLECS model generation are addressed by introducing the SSPMG. It optimizes models by considering specific passive elements’ impact on energy losses, providing denser and more detailed data for accurate simulations.

Figure 3 Dense loss table is one of the key SSPMG features. Source: onsemi

SSPMG includes semiconductor process variations for realistic models and creates adaptable models suited for soft switching topologies, ensuring reliability beyond hard switching scenarios. PLECS models designed with SSPMG can be seamlessly uploaded to the Elite Power Simulator or downloaded for use in stand-alone PLECS.

Figure 4 Soft switching simulation is another key SSPMG feature. Source: onsemi

Simulator capabilities

Central to the tool’s prowess is PLECS operating in the background. PLECS is a system-level simulator that makes it easier to model and simulate whole systems by using device models designed for speed and accuracy. It is wrapped in an easy-to-use web-based environment that simplifies the design process for engineers.

The significance of this tool extends beyond its simulation capabilities. It’s not merely a tool for simulating; it can also aid engineers in selecting suitable components for their applications. Engineers can seamlessly navigate through various product generations to understand performance-cost trade-offs and make informed decisions.

Moreover, PLECS is not a SPICE-based circuit simulator, where the focus is on the low-level behavior of circuit components. The PLECS models, referred to as “thermal models”, are composed of lookup tables for conduction and switching losses, along with a thermal chain in the form of a Cauer or Foster equivalent network.
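
To make the thermal-model idea concrete, here is a minimal sketch of how such a model is typically evaluated: interpolate a switching-energy lookup table, then drive a Foster thermal network with the total loss to estimate the junction temperature rise. All table values, RC pairs, and operating-point numbers below are invented for illustration; they are not onsemi or PLECS data:

    import numpy as np

    # Hypothetical switching-energy lookup table E_sw(Tj, Id), invented numbers.
    tj_axis = np.array([25.0, 75.0, 125.0, 175.0])   # junction temperature, C
    id_axis = np.array([10.0, 20.0, 40.0, 80.0])     # switched current, A
    e_sw = np.array([[100, 210, 450,  980],
                     [110, 230, 495, 1075],
                     [120, 250, 540, 1170],
                     [130, 270, 585, 1265]]) * 1e-6  # uJ -> J

    def esw_lookup(tj, i_d):
        """Bilinear interpolation into the loss table."""
        e_vs_id = [np.interp(i_d, id_axis, row) for row in e_sw]
        return np.interp(tj, tj_axis, e_vs_id)

    # Foster thermal chain: dTj(t) = P * sum(R_i * (1 - exp(-t / tau_i)))
    foster = [(0.15, 1e-3), (0.35, 20e-3), (0.25, 0.5)]  # (K/W, s) pairs, invented

    def tj_rise(p_loss, t):
        return p_loss * sum(r * (1 - np.exp(-t / tau)) for r, tau in foster)

    # Assumed operating point: 100 kHz, 40 A, Tj ~75 C, 12 W of conduction loss
    f_sw, i_d, tj0, p_cond = 100e3, 40.0, 75.0, 12.0
    p_total = p_cond + f_sw * esw_lookup(tj0, i_d)   # conduction + switching loss
    print(f"P_total = {p_total:.1f} W, dTj(10 s) = {tj_rise(p_total, 10.0):.1f} K")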

The simulator has an intuitive loss and thermal data plotting utility that enables engineers to visualize the loss behavior of their chosen switch. This multifunctional 3D visualization tool works with device conduction loss, switching energy loss, and thermal impedance.

Next, the simulator has a utility to design custom heat sink models, enabling users to accurately predict junction temperatures and optimize cooling solutions tailored to their specific needs.

The simulation stage within this tool is highly detailed, offering insights into various parameters such as losses, efficiency, and junction temperature in transient and steady state conditions. Furthermore, the tool has an easy mechanism to compare runs with different devices, circuit parameters, cooling designs, and loss models.

Figure 5 Loss plotting is another important feature offered by Elite Power Simulator. Source: onsemi

The simulator and SSPMG are adaptable to diverse semiconductor technologies. While initially focusing on SiC products, both tools will be expanding to other power devices. This versatility ensures that engineers can leverage the tools across various devices, tailoring simulations to their specific requirements.

Simulating beyond datasheet conditions

The utilization of simulation tools in virtual prototyping has substantially transformed design flows. In their quest for first-time-right performance, engineers and designers can now understand how their circuits will behave before mass production. Accuracy is critical when simulating intricate electronic circuits.

Simulating beyond datasheet conditions is key because it provides access to data that would otherwise be difficult to acquire via physical testing, facilitating virtual optimization and analysis of circuit performance.

By employing precise simulations, one can avoid underestimating or overestimating system performance.

James Victory is a fellow for TD modeling and simulation solutions at onsemi’s Power Solutions Group.


Creating a very fast edge rate generator for testing (or taking the pulse of your scope)

Fri, 02/02/2024 - 19:33

I recently purchased a new oscilloscope for home use. It’s a 250 MHz scope, but I was curious what the actual -3 dB frequency was, as most scopes have a bit more upper-end margin than their published rating. The signal generators I have either don’t go up to those frequencies or their amplitudes at those frequencies are questionable. That meant I didn’t have a way to actually input a sine wave and sweep it up in frequency until the amplitude dropped 3 dB to find the true bandwidth. So, I needed another way to find the bandwidth.

Wow the engineering world with your unique design: Design Ideas Submission Guide

You may have seen the technique of using a fast rise time pulse to measure a scope’s bandwidth (you can read how this relation works here). The essence is that you send a pulse with a fast rising and/or falling edge to the scope and measure the rise or fall time at the fastest sweep rate available. You can then calculate the scope’s bandwidth with Equation (1):

Bandwidth = 0.35 / Rise Time    (1)

(Note: there is much discussion about the use of 0.35 in the formula. Some claim it should be 0.45, or even 0.40. It really comes down to the implementation of the anti-aliasing filter before the ADC in the scope. If it is a simple single pole filter the number should be 0.35. Newer, higher priced scopes may use a sharper filter and claim the number is 0.45. As my new scope is not one of the expensive laboratory level scopes, I am assuming a single pole filter implying 0.35 as the correct number to use.)

OK, now I needed to find a fast-edged square-wave pulse generator. If we assume my scope has a bandwidth of 300 MHz, then it’s capable of showing a rise time of around:

Rise Time = 0.35 / 300 MHz ≈ 1.17 ns    (2)

The rise time actually seen on the scope will be slower than this because the viewed rise time is a combination of the scope’s own minimum rise time and the pulse generator’s rise time. In fact, the relationship is the “root sum squared” formula shown in Equation (3):

Rv = √(Rp² + Rm²)    (3)

Where:

  • Rv is the rise time as viewed on the scope
  • Rp is the rise time of the pulse generator
  • Rm is the scope minimum, or shortest, rise time as limited by its bandwidth

If Rp is much less than Rm, then we may be able to ignore it, as it would add very little to Rv. For example, the gold standard for this type of test is the Leo Bodnar Electronics 40 ps pulse generator. If we used this, the formula would show the expected rise time on the scope to be:

Rv = √(0.04² + 1.17²) ns ≈ 1.17 ns    (4)

As you can see, in this case the pulse generator rise time contributes a negligible amount to the rise time viewed on the scope.

As nice as the Bodnar generator is, I didn’t want to spend that much on a device I would only use a few times. What I needed was a simple pulse generator with a reasonably fast edge—something in the 500-ps-or-better range.

I checked the frequency generators available to me, but the fastest rise time was around 3 ns, which would be much too large, so I decided to build a pulse generator. There are a few fast pulse generator designs floating around, some using discrete components and some using Schmitt trigger ICs, but these didn’t quite fit what I wanted. What I ended up designing is based on an Analog Devices LTC6905-80 IC. The spec sheet states it can output pulses with a rise time of 500 ps—more on that later. But is 500 ps fast enough? Let’s explore this. What happens if we use a pulse with a rise time in the 500 ps range? Then:

Rv = √(0.5² + 1.17²) ns ≈ 1.27 ns    (5)

Even if the final design could attain a 500 ps rise time, this would be too large to ignore as it could give an error in the 10% range. But if we assumed a value for Rp (or better yet pre-measured it) we could remove it after the fact.

As discussed earlier, the rise time that will be seen on the scope is given by Equation (3). Manipulating this, we can solve for the scope’s own rise time:

Rm = √(Rv² - Rp²)    (6)

So, if we can establish the generator’s rise time, we can subtract it out. In this case, “establishing” could be a close-enough educated guess, an LTspice simulation, or a measurement on some other equipment. The educated guess: based on the LTC6905 data sheet, I should be able to get a ~500 ps rise time in a design. The LTspice path didn’t work out, as I couldn’t get a reasonable number out of the simulation—probably operator error. I got lucky, though, and got some short access to a very high-end scope; I’ll share the results later in the article. But first, let’s look at the design, starting with the schematic shown in Figure 1.

Figure 1 The schematic: an LTC6905 IC to generate a square wave, plus a capacitor, a resistor, and a BNC connector.

The first thing you may notice is that it is very simple: an IC, capacitor, resistor, and a BNC connector. The LTC6905 generates square waves of a fixed frequency at a fixed 50% duty cycle. The version of the IC that I used produces an 80, 40, or 20 MHz output depending on the state of pin 4 (DIV). In this design, this pin is grounded, which selects a 20 MHz output. The 33 Ω resistor is in series with the 17 Ω internal impedance, thereby producing 50 Ω to match the BNC connector impedance. Matching the impedance reduces any overshoot or ringing in the output. (Using the Tiny Pulser on a 50 Ω scope setting will result in an output of 50 mA peak, or ~25 mA average output current. That seemed like it might be high for the IC, but the spec for the LTC6905 states that the output can be shorted indefinitely. I also checked the temperature of the IC with a thermal camera, and the rise was minimal.)
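
For reference, the 50 mA figure is just Ohm’s law across the matched source and load; a quick check (assuming the IC output swings the full 5 V USB rail):

    # Output-current numbers from the text, by Ohm's law. Assumes the
    # LTC6905 output swings the full 5 V USB rail.
    v_out = 5.0              # V
    r_source = 33.0 + 17.0   # series resistor + IC internal impedance, ohms
    r_load = 50.0            # scope's 50-ohm input

    i_peak = v_out / (r_source + r_load)   # 0.050 A = 50 mA peak
    i_avg = i_peak * 0.5                   # ~25 mA average at 50% duty cycle
    print(f"I_peak = {i_peak * 1e3:.0f} mA, I_avg = {i_avg * 1e3:.0f} mA")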

I also tried some designs using various resistor values and some with a combination of resistors and capacitors, in series, between pin 5 and the BNC. The idea here was to reduce the capacitance as seen by the IC output. The oscilloscope has an input capacitance of around 15 pF (in parallel with 1 MΩ), and adding a capacitor in series could reduce this, as seen by the IC. These were indeed faster but with significant overshoot.

So, Figure 1 is the design I followed through on. The only things to add are a BNC connector, an enclosure (with 4 screws), and a USB cable to power the unit. This simple design, and the fact that the IC is a tiny SOT-23 package, allows for a very small device, as seen in Figure 2.

Figure 2 The Tiny Pulser prototype with a 3D printable enclosure based on the schematic in Figure 1 that is roughly the size of a sugar cube.

The 3D printable enclosure is roughly the size of a sugar cube, so I named the device the “Tiny Pulser”. Figure 3 shows the PCB in the enclosure while Figure 4 displays the PCB assembly.

Figure 3 The PCB enclosure of the Tiny Pulser showing the BNC, IC, and passives used in Figure 1.

Figure 4 Tiny Pulser 6-pin SOT-23 PCB assembly with only a few components and jumper wires to solder to the PCB itself.

The PCB is a 6 pin SOT-23 adapter available from various sources (a full BOM is included in the download link provided at the end of the article). As you can see in Figure 4, there are only a few things to solder to the PCB including a jumper. Three wires are attached including the +5 V and ground from the USB cable. The other ground wire needs to be soldered to the BNC body. To do this, I had to break out the old Radio Shack 100 W soldering gun to get enough heat on the BNC base by the solder cup. Scratching up the surface also helped. The PCB is then attached to the BNC by soldering the output pad of the PCB (backside) to the BNC solder cup. (More pictures of this are included in the download.)

So how does it perform? The best performance is obtained when using a 50 Ω scope input and measuring the fall time, which is a bit faster than the rise time. Figure 5 shows the generated 20 MHz pulse train, while Figure 6 is a screenshot showing a fall time of 1.34 ns.

Figure 5 The generated pulse train of 20 MHz of the Tiny Pulser using a 50 Ω scope input.

Figure 6 Fall time measurement (1.34 ns) of the Tiny Pulser circuit made on a 50 Ω scope input.

You can see the pulse train is pretty clean, with a bit of overshoot. Note that the 1.34 ns fall time is a combination of the scope’s fall time and the Tiny Pulser’s fall time. Now we need to figure out the actual fall time of the Tiny Pulser.

As I said, I got a chance to use a high-powered scope (2.5 GHz, 20 GS/s) to measure the rise and fall times. Figure 7 shows the results (pardon the poor picture):

Figure 7 Picture of the high-end oscilloscope (2.5 GHz, 20 GS/s) display measuring the rise and fall times of the Tiny Pulser.

You can see that the Tiny Pulser delivers a very clean pulse with a rise time of 510 ps and a fall time of 395 ps. We now have all the information we need to make our bandwidth calculations. (The formulas we have developed are as applicable to fall time as they are to rise time, so we will not change the variable names.) Using the scope’s measured fall time and the 395 ps Tiny Pulser fall time, we calculate the bandwidth of the scope, first by calculating the scope’s own fall time with Equation (6):

Rm = √(1.34² - 0.395²) ns ≈ 1.28 ns

And now use this to calculate the bandwidth with Equation (1):

Bandwidth = 0.35 / 1.28 ns ≈ 273 MHz

A gut check tells me this is a reasonable number for an oscilloscope sold as a 250 MHz model.

I tested another scope I have that is rated at 200 MHz. It displayed a fall time of 1.51 ns, which works out to 240 MHz. This number agrees to within a few percent with other figures I have found on the internet. It seems the Tiny Pulser works well for measuring scope bandwidth!
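
The whole measurement boils down to a few lines of arithmetic. Here’s a minimal Python helper that reproduces both results above, using the 0.35 factor per the single-pole assumption discussed earlier:

    import math

    def scope_bandwidth_mhz(t_meas_ns, t_gen_ns, k=0.35):
        """Bandwidth in MHz from a measured edge, removing the generator's edge."""
        t_scope_ns = math.sqrt(t_meas_ns**2 - t_gen_ns**2)  # root-sum-square removal
        return k / (t_scope_ns * 1e-9) / 1e6

    # 250 MHz-rated scope: 1.34 ns measured fall time, 395 ps generator fall time
    print(f"{scope_bandwidth_mhz(1.34, 0.395):.0f} MHz")   # ~273 MHz
    # 200 MHz-rated scope: 1.51 ns measured fall time
    print(f"{scope_bandwidth_mhz(1.51, 0.395):.0f} MHz")   # ~240 MHz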

Another use for a fast pulse

A better-known use for a fast rise time is probably in a time-domain reflectometer (TDR). A TDR is used to measure the length, distance to faults, or distance to an impedance change in a cable. To do this with the Tiny Pulser, add a BNC tee adapter to your scope and connect the cable (coax, twisted pair, zip cord, etc.), to be tested, to one side of the tee adapter (use a BNC to banana jack adapter if needed). Do not short the end of the wire. Next, connect the Tiny Pulser to the other side of the tee adapter as seen in the setup in Figure 8.

Figure 8 A TDR set up using the Tiny Pulser with a BNC tee adapter to connect the circuit as required (e.g., via coax, twisted pair, etc.).

Now power up the Tiny Pulser and adjust the sweep rate to around 10 ns/div so you see something like the upper part of the screen in Figure 9. I find that the high-impedance setting on the scope works better than the 50 Ω setting for the wire I was testing; this may vary with the wire you are testing. You can see that the square wave is distorted, which is due to the signal reflecting from the end of the wire. If your scope has a math function to display the derivative (or differential) of the trace, you will be able to see what’s happening more clearly. This can be seen in the lower trace in Figure 9, where I connected a 53 inch piece of 24 AWG solid twisted pair.

Figure 9 Using the high impedance setting on the scope to perform a TDR test on a 53” piece of 24 AWG wire. The math function displays the derivative of the trace to view results more clearly.

To find the timing of the reflection, measure from the start of the pulse rising (or falling) to the distorted part of the pulse where it is rising (or falling) again. Or, if using the math differential function, measure the time from the tall bump to the smaller bump—I find this much easier to see.

In Figure 9 the falling edge of the pulse is marked by cursor AX and the reflected pulse is marked with the cursor BX. On the right side we can see the time between these pulses is 13.2 ns.

The length of the cable or distance to an impedance change can now be calculated but we first need the speed of the wavefront in the wire. For that we need the velocity factor (VF) for the cable that is being tested. This is multiplied by the speed of light to obtain the speed of the wavefront. The velocity factor for some cables may be found here.

In the case of Figure 9, the velocity factor is 0.707. Multiplying this by the speed of light in inches per nanosecond gives us 8.34 inches/ns. So, multiplying 13.2 ns by 8.34 inches/ns yields 110 inches. But this is the time up and down the wire, so we divide by 2, giving us 55 inches. There are a few inches of connector as well, so the answer is very close to the 53 inches of wire.

Note that, because we have a pulse rate of 20 MHz, we are limited to identifying reflections up to about 22 ns, after which reflected pulses will blend into the next generated edge. This is about 90 inches of cable.
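
The same arithmetic in code form, including the range limit just mentioned (a sketch of the calculation only, using the values from the example above):

    LIGHT_IN_PER_NS = 11.8   # speed of light, inches per nanosecond

    def distance_to_reflection_in(t_round_trip_ns, velocity_factor):
        """One-way distance from the round-trip reflection time."""
        return t_round_trip_ns * velocity_factor * LIGHT_IN_PER_NS / 2

    # The Figure 9 example: 13.2 ns round trip, VF = 0.707 -> ~55 inches
    print(f"{distance_to_reflection_in(13.2, 0.707):.0f} in")

    # Range limit: reflections arriving later than ~22 ns blend into the
    # next 20 MHz edge, i.e., roughly 90 inches of this cable
    print(f"max ~{distance_to_reflection_in(22.0, 0.707):.0f} in")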

One last trick

An interesting use of the TDR setup is to discover a cable’s impedance. Do this by adding a potentiometer across the end of the cable and adjust the pot until the TDR reflections disappear and the square wave looks relatively restored. Then measure the pot’s resistance and this is the impedance of your cable.

More info

A link to the download for the 3D printable enclosure, BOM, and various notes and pictures to explain the assembly, can be found at: https://www.thingiverse.com/thing:6398615.

I hope you find this useful in your lab/shop and if you have other uses for the Tiny Pulser, please share them in a comment below.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.


Chiplets diary: Three anecdotes recount design progress

Fri, 02/02/2024 - 14:45

The chiplet design movement representing multi-billion-dollar market potential is marching ahead with key building blocks falling in place while being taped out at advanced process nodes like TSMC’s 3 nm. These multi-die packaging devices can now mix and match pre-built or customized compute, memory, and I/O ingredients in different process nodes, paving the way for system-in-packages (SiPs) to become the system motherboard of the future.

Chiplets also promise considerable cost reduction and improved yields compared to traditional system-on-chip (SoC) designs. Transparency Market Research forecasts the chiplet market to reach more than $47 billion by 2031, becoming one of the fastest-growing segments of the semiconductor industry at more than 40% CAGR from 2021 to 2031.

Below are a few anecdotes demonstrating how chiplet-enabled silicon platforms are making strides in areas such as packaging, memory bandwidth, and application-optimized IP subsystems.

  1. Chiplets in standard packaging

While chiplet designs are generally associated with advanced packaging technologies, a new PHY solution claims to have used standard packaging to create a multi-die platform. Eliyan’s NuLink PHY facilitates a bandwidth of 64 Gbps/bump on a 3-nm process while utilizing standard organic/laminate packaging with 8-2-8 stack-up.

An efficient combination of compute density and memory bandwidth in a practical package construction will substantially improve performance-per-dollar and performance-per-watt. Moreover, chiplet-based systems in standard organic packages enable the creation of larger SiP solutions, leading to higher performance per power at considerably lower cost and system-level power.

Figure 1 Chiplets in standard packages could encourage their use in inference and gaming applications. Source: Eliyan

Eliyan has announced the tape-out of this die-to-die connectivity PHY at a 3-nm node, and the first silicon is expected in the third quarter of 2024. The tape-out includes a die-to-die PHY coupled with an adaptor layer/link layer controller IP to facilitate a complete solution.

  2. Sub-$1 chiplets

Chiplets have mostly been synonymous with high-performance computing (HPC) applications, where these multi-die devices cost tens to hundreds of dollars. YorChip has joined hands with Siloxit to develop a data acquisition chiplet with a sub-$1 price target in volume.

The two companies will leverage low-cost die-to-die links, physically unclonable function (PUF) security technology, and delta-sigma analog-to-digital converter (ADC) IP to create a cost-optimized chiplet whose die-to-die footprint achieves 75% size savings over the competition.

  3. High bandwidth memory (HBM) chiplets

Memory bandwidth is a major consideration alongside compute density and high-speed I/Os in chiplet designs. That makes high bandwidth memory 3 (HBM3) PHY a key ingredient in chiplets for applications such as generative AI and cloud computing. This is especially the case in HPC systems where memory bandwidth per watt is a key performance indicator.

Figure 2 The HBM3 memory subsystem supports data rates up to 8.4 Gbps per data pin and features 16 independent channels, each containing 64 bits for a total data width of 1,024 bits. Source: Alphawave Semi

Alphawave Semi has made available an HBM3 PHY IP that targets high-performance memory interfaces up to 8.6 Gbps and 16 channels. This HBM subsystem integrates the HBM PHY with a JEDEC-compliant, highly configurable HBM controller. It has been taped out at TSMC’s 3-nm node and is tailored for hyperscaler and data infrastructure designs.


Scope probes reach bandwidths up to 52 GHz

Thu, 02/01/2024 - 19:46

InfiniiMax 4 oscilloscope probes from Keysight operate at bandwidths up to 52 GHz (brickwall response) and 40 GHz (Bessel-Thomson response). The company reports that the InfiniiMax 4 is the first high-impedance probe head operating at more than 50 GHz, making it well-suited for high-speed digital, semiconductor, and wafer applications.

InfiniiMax 4 offers DC input resistance of 100 kΩ differential and two input attenuation settings: high-precision 1-Vpp and high-voltage 2-Vpp maximum input range. The probes work with Infiniium UXR-B series oscilloscopes equipped with 1.85-mm and 1.0-mm input connectors. They are also compatible with the AutoProbe III interface.

The InfiniiMax 4 probes feature an RCRC architecture with a flexible PCA probe head that leverages the natural flexibility of the PCA to take the strain off the delicate tip wires. Their modular probe-head amplifier provides multiple access points, eliminating the need for custom evaluation boards or interposers.

Request a price quote for InfiniiMax 4 oscilloscope probes using the link to the product page below.

InfiniiMax 4 product page

Keysight Technologies 


Low-noise amplifier is radiation-tolerant

Thu, 02/01/2024 - 19:46

Teledyne’s TDLNA0430SEP low-noise amplifier targets space and military communication receivers and radar systems operating in the UHF to S-Band. The radiation-tolerant device offers a low noise figure, minimal power consumption, and small package footprint.

According to the manufacturer, the MMIC amplifier delivers a gain of 21.5 dB from 0.3 GHz to 3 GHz, while maintaining a noise figure of less than 0.35 dB and an output power (P1dB) of 18.5 dBm. The amplifier should be biased at +5.0 VDD and 60 mA IDDQ.

The TDLNA0430SEP low-noise amplifier is built on a 90-nm enhancement-mode pHEMT process. It comes in an 8-pin, 2×2×0.75-mm plastic DFN package and is qualified per Teledyne’s space enhanced plastic flow.

The amp is available now for immediate shipment from the company’s DoD Trusted Facility. An evaluation kit is also available.

TDLNA0430SEP product page

Teledyne e2v HiRel Electronics 


Flyback switcher ICs boost efficiency

Thu, 02/01/2024 - 19:46

InnoSwitch5-Pro programmable flyback switchers from Power Integrations employ zero-voltage switching and SR FET control to achieve >95% efficiency. A switching frequency of up to 140 kHz and a high level of integration combine to reduce the component volume and PCB board area required by a typical USB PD adapter implementation.

The single-chip switchers incorporate a 750-V or 900-V PowiGaN primary switch, primary-side controller, FluxLink isolated feedback, and secondary controller with an I2C interface. They can be used in single and multiport USB PD adapters, including designs that require the USB PD Extended Power Range (EPR) protocol.

Devices accommodate a wide output voltage range of 3 V to 30 V. To maximize efficiency, the switchers support lossless input line voltage sensing on the secondary side for adaptive continuous conduction mode (CCM), discontinuous conduction mode (DCM), and zero-voltage switching (ZVS) control. A post-production tolerance offset enables constant-current accuracy of <2% to support the Universal Fast Charging Specification (UFCS) protocol.

Prices for the InnoSwitch5-Pro flyback switcher ICs start at $2.40 each in lots of 10,000 units.

InnoSwitch5-Pro product page

Power Integrations 


Cortex-M85 MCUs improve motor control

Thu, 02/01/2024 - 19:46

The third group in the Renesas RA8 series of MCUs, RA8T1 devices offer real-time control of motors and inverters used in industrial, building, and home automation. Like other RA8 microcontrollers, the RA8T1 group is based on a 480-MHz Arm Cortex-M85 processor that delivers a performance rating of 6.39 CoreMark/MHz. Arm Helium technology boosts performance by as much as 4X for DSP and ML implementations. This enhanced performance can be used to execute AI functions, such as predictive maintenance.

Optimized for motor control, the RA8T1 32-bit MCUs provide advanced PWM timing features, such as three-phase complementary output, 0% and 100% duty output capability, and five-phase counting modes. On-chip analog peripherals include 12-bit ADCs, 12-bit DACs, high-speed comparators, and a temperature sensor.

The RA8T1 devices integrate up to 2 Mbytes of flash memory and 1 Mbyte of SRAM. Multiple connectivity interfaces are available, such as SCI, SPI, I2C, I3C, CAN/CAN-FD, Ethernet, and USB-FS. In addition, Arm TrustZone technology and Renesas Security IP provide advanced security and encryption.

Offered in 224-pin BGA packages, as well as LQFPs with 100, 144, and 176 pins, the RA8T1 MCUs are available now. Samples and development kits can be ordered on the Renesas website or through the company’s network of distributors.

RA8T1 product page 

Renesas Electronics 


Cadence debuts AI thermal design platform

Thu, 02/01/2024 - 19:45

Cadence Celsius Studio, an AI-enabled thermal design and analysis platform for electronic systems, aims to unify electrical and mechanical CAD. The system addresses thermal analysis and thermal stress for 2.5D and 3D ICs and IC packaging, as well as electronics cooling for PCBs and complete electronic assemblies.

With Celsius Studio, electrical and mechanical/thermal engineers can concurrently design, analyze, and optimize product performance within a single unified platform. This eliminates the need for geometry simplification, manipulation, and/or translation. Built-in AI technology enables fast and efficient exploration of the full design space to converge on the optimal design.

The multiphysics thermal platform can simulate large systems with detailed granularity for any object of interest, including chip, package, PCB, fan, or enclosure. It combines finite element method (FEM) and computational fluid dynamics (CFD) engines to achieve complete system-level thermal analysis. Celsius Studio supports all ECAD and MCAD file formats and seamlessly integrates with Cadence IC, packaging, PCB, and microwave design platforms.

Customers seeking to gain early access to Celsius Studio can contact Cadence using the product page link below.

Celsius Studio product page

Cadence Design Systems 


“Sub-zero” op-amp regulates charge pump inverter

Thu, 02/01/2024 - 16:55

Avoiding op-amp output saturation error by keeping op-amp outputs “live” and below zero volts is a job where a few milliamps and volts (or even fractions of one volt) of regulated negative rail can be key to achieving accurate analog performance. The need for voltage regulation arises because the sum of positive and negative rail voltages mustn’t exceed the recommended limits of circuit components (e.g., only 5.5 V for the TLV9064 op-amp shown in Figure 1). Unregulated inverters may have the potential (no pun!) to overvoltage sensitive parts and therefore may not be suitable.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the circuit: a simple regulated charge pump inverter based on the venerable and versatile HC4053 triple SPDT CMOS switch and most any low-power RRIO op-amp. It efficiently and accurately inverts a positive voltage rail, generating a programmable negative output that’s regulated to a constant fraction of the positive rail. With Vin = 5 V, its output is good for currents from zero up to nearly 20 mA, the upper limit depending on the Vee level chosen by the R1:R2 ratio. It’s also cheap, with a cost that’s competitive with less versatile devices like the LM7705. And it’s almost unique in being programmable for outputs as near zero as you like, set simply by the choice of R2.

But enough sales pitch.  Here’s how it works.

Figure 1 U1 makes an efficient charge pump voltage inverter with comparator op-amp A1 providing programmable regulation.

U1a and U1b act in combination with C2 to form an inverting flying-capacitor pump that transfers negative charge to filter capacitor C3 to maintain a constant Vee output controlled by A1. Charge pumping occurs in a cycle that begins with C2 being charged to V+ via U1a, then completes by partially discharging C2 into C3 via U1b. Pump frequency is roughly 100 kHz under control of the U1c Schmitt-trigger-style oscillator, so a transfer occurs every 10 µs. Note the positive feedback around U1c via R3 and the inverse feedback via R4, R5, and C1.

Figure 2 shows performance under load with the R2:R1 ratio shown.

Figure 2 Output voltage and current conversion efficiency vs output current for +Vin = 5 V.

No-load current draw is less than 1 mA, divided between U1 and A1, with A1 taking the lion’s share. If Vee is lightly loaded, it can approach -V+ until A1’s regulation setpoint (Vee = -R2/R1 * V+) kicks in. Under load, the maximum attainable Vee declines at ~160 mV/mA, but Vee remains rock solid so long as the Vee setpoint is at least slightly less negative than that maximum.
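
Those two numbers, the R2:R1 setpoint and the ~160 mV/mA droop, determine whether regulation holds for a given load. A minimal sketch of the check (the R1 and R2 values below are example numbers for a -500 mV setpoint, not values taken from the schematic):

    def vee_setpoint(v_plus, r1, r2):
        """Regulated output: Vee = -(R2/R1) * V+."""
        return -(r2 / r1) * v_plus

    def vee_max(v_plus, i_load_ma, droop_mv_per_ma=160.0):
        """Most negative attainable Vee under load (approaches -V+ unloaded)."""
        return -v_plus + droop_mv_per_ma * 1e-3 * i_load_ma

    # Example: 5 V rail, 10 mA load, R2:R1 chosen for a -0.5 V setpoint
    v_plus, i_load = 5.0, 10.0
    setpoint = vee_setpoint(v_plus, r1=10e3, r2=1e3)   # -0.5 V
    attainable = vee_max(v_plus, i_load)               # ~-3.4 V at 10 mA
    print(f"setpoint {setpoint:.2f} V, max Vee {attainable:.2f} V, "
          f"regulation holds: {setpoint > attainable}")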

A word about “bootstrapping”: Switch U1b needs to handle negative voltages but the HC4053 datasheet tells us this can’t work unless the chip is supplied with a negative input at pin 7. So U1’s first task is to supply (bootstrap) a negative supply for itself by the connection of pin 7 to Vee.

“Sub-zero” comparator op-amp A1 maintains Vee = – R2/R1 * V+ via negative feedback through R6 to U1 pin 6 Enable. When Vee is more positive than the setpoint, A1 pulls pin 6 low, enabling the charge pump U1c oscillator and the charging of C3. Contrariwise, Vee at setpoint causes A1 to drive pin 6 high, disabling the pump. When pin 6 is high, all U1’s switches open, isolating C2 and conserving residual charge for subsequent pump cycles. R6 limits pin 6 current when Vee < -0.5 V.

Figure 3 shows how a -500-mV sub-zero negative rail can enable typical low-voltage op-amps (e.g., TLV900x) to avoid saturation at zero over the full span of rated operating temperature for output currents up to 10 mA and beyond. Less voltage or less current capability might compromise accurate analog performance.

Figure 3 Vee = -500 mV is ideal for avoiding amplifier saturation without overvoltaging LV op-amps.

U1’s switches are break-before-make, which helps both with pump efficiency and with minimizing Vee noise, but C3 should be a low ESR type to keep the 100 kHz ripple low (about 1 mVpp @ Iee = 10 mA). You can also add a low inductance ceramic in parallel with C3 if high frequency spikes are a concern.

Footnote: I’ve relied on the 4053 in scores of designs over more than a score of years, but this circuit is the very first time I ever found a practical use for pin 6 (-ENABLE). Learn something new every day!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Prosumer and professional cameras: High video quality, but a connectivity vulnerability

Wed, 01/31/2024 - 18:14

As I’ve recently mentioned a few times, I’m ramping up my understanding and skill set on a couple of Blackmagic Design Pocket Cinema Cameras (BMPCCs), both 6K in maximum captured resolution: a first-generation model based on the BMPCC 4K and using Canon LP-E6 batteries:

and the second-generation successor with a redesigned body derived from the original BMPCC 6K Pro. It uses higher-capacity Sony NP-F570 batteries, has an integrated touchscreen LCD that’s position-adjustable, and is compatible with an optional electronic viewfinder (which I also own):

I’m really enjoying playing with them both so far, steep learning curve aside, but as I use them, I can’t shake the feeling that I’ve got ticking time bombs in my hands. As I’ve also mentioned recently, cameras like these are commonly used in conjunction with external “field” monitors, whether wirelessly- or (for purposes of this writeup’s topic) wired-connected to the camera:

And as I’ve also recently mentioned, it’s common to power cameras like these from a beefy external battery pack such as this 155 Wh one from SmallRig:

or a smaller-capacity sibling that’s airplane-travel amenable:

Such supplemental power sources commonly offer multiple outputs, directly and/or via a battery plate intermediary:

enabling you to fuel not only the camera but also the field monitor, a nearby illumination source, a standalone microphone preamp, an external high-performance SSD or hard drive, and the like. Therein lies the crux of the issue I’m alluding to. Check out, to start, this Reddit thread.

The gist of the issue, I’ve gathered (reader insights are also welcomed), is that if you “hot-socket” either the camera or the display (meaning either that device’s power connection or the common HDMI connection) while the other device is already powered up, there’s a finite chance that the power supply current loop (specifically the startup spike) will route through the HDMI connection instead, frying the HDMI transceiver inside the camera and/or display (and maybe other circuitry as well). The issue seems to be most common, but not exclusively so, when both the camera and display are fed by the same power source without leveraging a common ground, and when they’re running on different supply voltages.

I ran the situation by my technical contact at Blackmagic after stumbling across it online, and here’s what he had to say:

Our general recommendation is to…

  • Power down all the devices used if they have internal or built-in batteries
  • Connect the external power sources to all devices
  • Connect the HDMI/SDI cable between the devices
  • Power on the devices

Sounds reasonable at first glance, doesn’t it? But what if you’re a professional with clients that pay by the hour and want to keep their costs at a minimum, and you want to keep them happy, or you’re juggling multiple clients in a day? Or if you’re just an imperfectly multitasking prosumer (aka power user) like me? In the rush of the moment, you might forget to power the camera off before plugging in a field monitor, for example. And then…zap.

My initial brainstorm on a solution was to switch from conventional copper-based HDMI cables to optical ones. There are two problems with this idea, though. The first is that optical cables tend to be bulkier than their conventional counterparts, which is particularly problematic given the short cable runs used with cameras and a general desire for svelteness, both again exemplified by SmallRig products:

The other issue, of course, is that optical HDMI cables aren’t completely optical. Quoting from a CableMatters blog post on the topic:

A standard HDMI cable is made up of several twisted pairs of copper wiring, insulated and protected with shielding and silicon wraps. A fiber optic HDMI cable, on the other hand, does away with the central twisted copper pair, but still retains some [copper strands]. At its core are four glass filaments which are encased in a protective coating. Those glass strands transmit the data as pulses of light, instead of electricity. Surrounding those glass fibers are seven to nine twisted copper pairs that handle the power supply for the cable, one for Consumer Electronics Control (CEC), two for sound return (ARC and eARC), and one set for a Display Data Channel (DDC) signal.

My Blackmagic contact also wisely made the following observations, by the way:

It may not be fair to say that Blackmagic Pocket Cinema Cameras are especially susceptible to issues that could affect any camera. Any camera used in the same situation would be affected equally. (Hence the references to Arri camera white papers in the sources you quoted)

He’s spot-on. This isn’t a Blackmagic-specific issue. Nor is it an HDMI-specific issue, hence my earlier allusion to SDI (the Serial Digital Interface), which also comes in copper and fiber variants. Here’s a Wikipedia excerpt, for those not already familiar with the term (and the technology).

Serial digital interface (SDI) is a family of digital video interfaces first standardized by SMPTE (The Society of Motion Picture and Television Engineers) in 1989…These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio and time code) within television facilities; they can also be used for packetized data. SDI is used to connect together different pieces of equipment such as recorders, monitors, PCs and vision mixers.

In fact, a thorough and otherwise excellent white paper on the big-picture topic, which I commend to your attention, showcases SDI (vs HDMI) and Arri cameras (vs Blackmagic ones).

To wit, and exemplifying my longstanding theory that it’s possible to find and buy pretty much anything (legal, at least) on eBay, I recently stumbled across (and of course acted on and purchased, for less than $40 total including tax and shipping) a posting for the battery-acid-damaged motherboard of a Blackmagic Production Camera 4K, which dates from 2014. Here are some stock images of the camera standalone:

Rigged out:

And in action:

Now for our mini-teardown patient. I’ll start out with a side view, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Compare this to the earlier stock shot of the camera and you’ll quickly realize that the penny’s location corresponds to the top edge of the camera in its operating orientation. Right-to-left (or, if you prefer, top-to-bottom), the connections are (copy-and-pasting from the user manual, with additional editorializing by yours truly in brackets):

  • LANC [the Sony-championed Logic Application Control Bus System or Local Application Control Bus System] REMOTE: The 2.5mm stereo jack for LANC remote control supports record start and stop, and iris and focus control on [Canon] EF [lens] mount models.
  • HEADPHONES: 3.5 mm [1/8”] stereo headphone jack connection.
  • AUDIO IN: 2 x 1/4 inch [6.35 mm] balanced TRS phono jacks for mic or line level audio.
  • SDI OUT: SDI output for connecting to a switcher [field monitor] or to DaVinci Resolve via capture device for live grading.
  • THUNDERBOLT CONNECTION: Blackmagic Cinema Camera outputs 10-bit uncompressed 1080p HD. Production Camera 4K also outputs compressed Ultra HD 4K. Use the Thunderbolt connection for HD UltraScope waveform monitoring and streaming video to a Thunderbolt compatible computer.
  • POWER: 12 – 30V power input for power supply and battery charging.

Now for an overview shot of the front of the main system PCB I bought:

After taking this first set of photos, I realized that I’d oriented the PCB 180° from how it would be when installed in the camera in its operating orientation (remember, the power input is at the bottom). This explains why the U.S. penny is upside-down in the pictures; I re-rotated the images in more intuitive-to-you orientations before saving them!

Speaking of which, above and to the right of the U.S. penny is the battery acid damage I mentioned earlier; it would make sense to have the battery near the power input, after all. One thing unique to this camera versus all the ones I own is that the battery is embedded, not user-removable (I wonder how much Blackmagic charged as a service fee to replace it after heavy use had led to the demise of the original?).

Another thing to keep in mind is that the not-shown image sensor is in front of this side of the PCB. Here’s another stock image which shows (among other things) the Super 35-sized image sensor peeking through the lens mount hole:

My guess would be that the long vertical connector on the left side of the PCB, to the right of the grey square thing I’ll get to shortly, mates to a daughter card containing the image sensor.

I bet that many of you had the same thought I did when I first reviewed this side of the PCB…holy cow, look at all those chips! Right? Let’s zoom in a bit for a closer inspection:

This is the left half. Again, note the vertical connector and the mysterious grey square to the left of it (keep holding that thought; I promise I’ll do a reveal shortly!). Both above and below it are Samsung K4B4G1646B-HCK0 4-Gbit (256 Mbit x16) DDR3 SDRAMs, four total, for 2 GBytes of total system RAM. I’m betting that, among other things, the RAM array temporarily holds each video frame’s data streamed off the global shutter image sensor (FYI I plan to publish an in-depth tutorial on global vs rolling shutter sensors, along with other differentiators, in EDN soon!) for in-camera processing purposes prior to SSD storage.

And here’s the right half:

Wow, look at all that acid damage! I’m guessing the battery either leaked due to old age or exploded due to excessive applied charging voltage. Other theories, readers?

I realize I’ve so far skipped over a bunch of potentially interesting ICs. And have I mentioned that mysterious grey square yet? Let’s return to the left side, this time zoomed in even more (and ditching the penny) and dividing the full sequence into thirds. That grey patch is thermal tape, and it peeled right off the IC below it (here’s its adhesive underside):

Exposing to view…an FPGA!

Specifically, it’s a Xilinx (now AMD) Kintex 7 XC7K160T. I’d long suspected Blackmagic based its cameras on programmable logic vs an ASIC-based SoC, considering their:

  • Modest production volumes versus consumer camcorders
  • High-performance requirements
  • High functionality, therefore elaborate connectivity requirements, and
  • Fairly short operating time between battery charges, implying high power consumption.

The only thing that surprised me was that Blackmagic had gone with a classic FPGA versus one with an embedded “hard” CPU core, such as Xilinx-now-AMD’s Arm-based Zynq-7000 family. That said, I’d be willing to bet that there’s still a MicroBlaze “soft” CPU core implemented inside.

Other ICs of note in this view include, at the bottom left corner, a Cypress Semiconductor (now Infineon) CY7C68013A USB 2.0 controller, to the right of and below a mini-USB connector which is exposed to the outside world via the SSD compartment and finds use for firmware updates:

In the lower right corner is the firmware chip, a Spansion (also now Infineon) S25FL256S 256 Mbit flash memory with a SPI interface. And along the right side, to the right of that long tall connector I’ve already mentioned, is another Cypress (now Infineon) chip, the CY24293 dual-output PCI Express clock generator. I’m guessing that’s a PCIe 1.0 connector, then?

Now for the middle segment:

Interesting (at least to me) components here that I haven’t already mentioned include the diminutive coin cell battery in the upper left, surrounded on three sides by LM3100 voltage regulators (I “think” originally from National Semiconductor, now owned by Texas Instruments…there are at least four more LM3100s, along with two LM3102s, that I can count in various locales on the board). Power generation and regulation is obviously a key focus of this segment of the circuitry. That all said, toward the center is another Xilinx-now-AMD programmable logic chip, this one a XC9572XL CPLD. Also note the four conductor strips at top, jointly labeled JT3 (and I’m guessing used for testing).

Finally, the right side:

Connectivity dominates the landscape here, along with acid damage (it gets uglier the closer you get to it, doesn’t it?). Note the speaker and microphone connectors at top. And toward the middle, alongside the dual balanced audio input plugs, are two Texas Instruments TLV320AIC3101 low-power stereo audio codecs; in-between them is a National Semiconductor-now-Texas Instruments L49743 audio op amp.

Last, but not least, let’s look at the other side of the PCB:

It’s comparatively unremarkable from an IC standpoint, aside from the oddly unpopulated J14 and U19 sites at the top. What it lacks in chip excitement (unless you’re into surface-mount passives, I guess), it compensates for with connector curiosity.

On the left side (I’d oriented the PCB correctly straightaway this time, hence the non-upside-down Abraham Lincoln on the penny):

there’s first a flex PCB connector up top (J12) which, perhaps obviously given its labeling, is intended for the LCD on the camera’s backside (but not its integrated touch interface…keep reading). In the middle is, I believe, the 2.5” SATA connector for the SSD. And on the bottom edge are, left to right, the connectors for the battery, the cable that runs to the electrical connectors on the lens mount (I’m guessing here based on the “EF POGO” phrase) and a Peltier cooler. Here’s a Wikipedia excerpt on the latter, for those not already familiar with the concept:

Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two different types of materials. A Peltier cooler, heater, or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other, with consumption of electrical energy, depending on the direction of the current. Such an instrument is also called a Peltier device, Peltier heat pump, solid state refrigerator, or thermoelectric cooler (TEC) and occasionally a thermoelectric battery.

Also note the two additional four-pad conductor clusters, one of them at the top; this time (versus the earlier-mentioned JT3) they’re unlabeled and appear on only one side of the board. And what’s under that tape? Glad you asked:

And now for the other (right) side:

Oodles o’passives under the FPGA, as previously noted, plus a few more connectors that I haven’t already mentioned. On the top edge are ones for the back panel touchscreen and the up-front record button, while along the bottom edge (again, left to right) are ones for the additional (back panel, this time) interface buttons and a fan. Yes, this camera contains both a Peltier cooler and a fan!

That’s “all” I’ve got for you today. I welcome any reader thoughts on the upfront HDMI/SDI connectivity issue, along with anything from the subsequent mini-teardown, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Prosumer and professional cameras: High video quality, but a connectivity vulnerability appeared first on EDN.

HBM memory chips: The unsung hero of the AI revolution

Wed, 01/31/2024 - 12:28

Memory chips like DRAMs, long subject to cyclical trends, are now eyeing a more stable and steady market: artificial intelligence (AI). Take the case of SK hynix, the world’s second-largest supplier of memory chips. According to its chief financial officer, Kim Woo-hyun, the company is ready to grow into a total AI memory provider by leading changes and presenting customized solutions.

The South Korean chipmaker has been successfully pairing its high-bandwidth memory (HBM) devices with Nvidia’s H100 graphics processing units (GPUs) and others for processing vast amounts of data in generative AI. Large language models (LLMs) like ChatGPT increasingly demand high-performance memory chips to enable generative AI models to store details from past conversations and user preferences to generate human-like responses.

Figure 1 SK hynix is consolidating its HBM capabilities to stay ahead of the curve in AI memory space.

In fact, AI companies are complaining that they can’t get enough memory chips. OpenAI CEO Sam Altman recently visited South Korea, where he met senior executives from SK hynix and Samsung, the world’s two largest memory chip suppliers, followed by Micron of the United States. OpenAI’s ChatGPT technology has been vital in spurring demand for processors and memory chips running AI applications.

SK hynix’s HBM edge

SK hynix’s lucky break in the AI realm came when it surpassed Samsung by launching the first HBM device in 2015 and gained a massive head start in serving GPUs for high-speed computing applications like gaming cards. HBM vertically interconnects multiple DRAM chips to dramatically increase data processing speed compared with earlier DRAM products.

Not surprisingly, therefore, these memory devices have been widely used to power generative AI devices operating on high-performance computing systems. Case in point: SK hynix’s sales of HBM3 chips increased more than fivefold in 2023 compared to a year earlier. A Digitimes report claims that Nvidia has paid SK hynix and Micron advance sums ranging from $540 million to $770 million to secure the supply of HBM memory chips for its GPU offerings.

SK hynix plans to proceed with the mass production of the next version of these memory devices—HBM3E—while also carrying out the development of next-generation memory chips called HBM4. According to reports published in the Korean media, Nvidia plans to pair its H200 and B100 GPUs with six and eight HBM3E modules, respectively. HBM3E, which significantly improves speed compared to HBM3, can process data up to 1.15 terabytes per second.
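That 1.15 terabytes-per-second figure is consistent with HBM’s standard 1,024-bit stack interface running at roughly 9 Gbps per pin. A quick Python sanity check, where the per-pin rate is an assumed round number rather than a published SK hynix specification:

bus_width_bits = 1024   # standard HBM stack interface width
pin_rate_gbps = 9.0     # assumed HBM3E per-pin data rate

bandwidth_tb_s = bus_width_bits * pin_rate_gbps / 8 / 1000  # bits -> bytes, GB -> TB
print(f"≈ {bandwidth_tb_s:.2f} TB/s per stack")  # ≈ 1.15 TB/s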

Figure 2 SK hynix is expected to begin mass production of HBM3E in the first half of 2024.

The Korean memory supplier calls HBM3E an AI memory product while claiming technological leadership in this space. While both Samsung and Micron are known to have their HBM3E devices ready and in the certification process at AI powerhouses like Nvidia, SK hynix seems to be a step ahead of its memory rivals. Take, for example, HBM4, currently under development at SK hynix; it’s expected to be ready for commercial launch in 2025.

What’s especially notable about HBM4 is its ability to stack memory directly on processors, eliminating interposers altogether. Currently, HBM stacks integrate 8, 12, or 16 memory devices next to CPUs or GPUs, connected to those processors through an interposer. Integrating memory directly onto processors will change how chips are designed and fabricated.

An AI memory company

Industry analysts also see SK hynix as the key beneficiary of the AI-centric memory upcycle because it’s a pure-play memory company, unlike its archrival Samsung. It’s worth noting that Samsung is also heavily investing in AI research and development to bolster its memory offerings.

AI does require a lot of memory, and it’s no surprise that South Korea, home to the top two memory suppliers, aspires to become an AI powerhouse. SK hynix, for its part, has already demonstrated its relevance in designs for AI servers and on-device AI adoption.

While talking about memory’s crucial role in generative AI at CES 2024 in Las Vegas, SK hynix CEO Kwak Noh-Jung vowed to double the company’s market cap in three years. That’s why the company is now striving to become a total AI memory provider while seeking a fast turnaround with high-value HBM products.

Related Content


The post HBM memory chips: The unsung hero of the AI revolution appeared first on EDN.

Power Tips #125: How an opto-emulator improves reliability and transient response for isolated DC/DC converters

Tue, 01/30/2024 - 17:40

In high-voltage power-supply designs, safety concerns require isolating the high-voltage input from the low-voltage output. Designers typically use magnetic isolation in a transformer for power transfer, while an optocoupler provides optical isolation for signal feedback.

One of the main drawbacks of optocouplers in isolated power supplies is their reliability. The use of an LED in traditional optocouplers to transmit signals across the isolation barrier leads to wide part-to-part variation in the current transfer ratio over temperature, forward current, and operating time. Optocouplers are also lacking in terms of isolation performance, since they often use weak insulation materials such as epoxies or sometimes just an air gap.

A purely silicon-based device that emulates the behavior of an optocoupler such as the Texas Instruments (TI) ISOM8110 remedies these issues since it removes the LED component, uses a resilient isolation material such as silicon dioxide, and is certified and tested under a much more stringent standard [International Electrotechnical Commission (IEC) 60747-17] compared to the IEC 60747-5-5 optocoupler standard (see this application note for more details).

An optocoupler’s lack of reliability over time and temperature has meant that many sectors, such as automotive and space, have had to rely on primary-side regulation or other means to regulate the output. An opto-emulator contributes to improved reliability and also provides substantial improvements in transient and loop response without increasing the output filter.

Typically, the limiting factor in the bandwidth of an isolated power supply is the bandwidth of the optocoupler. This bandwidth is limited by the optocoupler pole, formed from its intrinsic parasitic capacitance and the output bias resistor. Using an opto-emulator eliminates this pole, which leads to higher bandwidth for the entire system without any changes to the output filter. Figure 1 and Figure 2 show the frequency response of an isolated flyback design tested with an optocoupler and opto-emulator, respectively.
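To put that pole in perspective, its corner frequency is f_p = 1/(2πRC). Here’s a minimal Python sketch with illustrative values; the bias resistor and effective capacitance below are assumptions chosen for the arithmetic, not values taken from the TI designs:

import math

r_bias = 5e3         # assumed output bias resistor, ohms
c_parasitic = 10e-9  # assumed effective optocoupler capacitance, farads

f_pole = 1 / (2 * math.pi * r_bias * c_parasitic)
print(f"optocoupler pole ≈ {f_pole / 1e3:.1f} kHz")  # ≈ 3.2 kHz

With the pole sitting in the low kilohertz, it is easy to see how it, rather than the power stage, ends up setting the loop’s achievable crossover frequency.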

Figure 1 Total bandwidth of an isolated power supply using the TCMT1107 optocoupler. Source: Texas Instruments

Figure 2 Total bandwidth of an isolated power supply using the ISOM8110 opto-emulator. Source: Texas Instruments

The target for both designs was to increase the overall bandwidth while still maintaining 60 degrees of phase margin and 10 dB of gain margin. Table 1 lists the side-by-side results.

                         Optocoupler   Opto-emulator
Bandwidth (kHz)          8.6           38.2
Phase margin (degrees)   60.2          67.4
Gain margin (dB)         18.7          11.62

Table 1 Optocoupler versus opto-emulator frequency response results.

The increased bandwidth of the opto-emulator helps achieve nearly a fourfold increase in the overall bandwidth of the design while maintaining phase and gain margins. Figure 3 highlights the changes made to the compensation network of the opto-emulator board versus the optocoupler board. As you can see, these changes are minimal, requiring a total of only three passive-component changes. Another benefit of the opto-emulator is that it is pin-for-pin compatible with most optocouplers, so it doesn’t require a new layout for existing designs.

Figure 3 Schematic changes made to the compensation network of the opto-emulator board versus the optocoupler board. Source: Texas Instruments

Only the compensation components around the TL431 shunt voltage regulator were modified from one design to the other. Other than C19, C22 and R20, the rest of the design was identical, including the power-stage components, which include the output capacitance.

Because of the fourfold increase in bandwidth, we were able to improve the transient response significantly as well, without adding any more capacitance to the output. Figure 4 and Figure 5 show the transient response of the optocoupler and opto-emulator designs, respectively.

Figure 4 The transient response for the optocoupler design. Source: Texas Instruments

Figure 5 The transient response for the opto-emulator design showing a greater than 50% reduction in overall transient response. Source: Texas Instruments

The load step and the slew rate were the same in both tests. The load-step response went from –1.04 V in the optocoupler to –360 mV in the opto-emulator, and the load-dump response decreased from 840 mV to 260 mV. This is a > 50% reduction in the overall transient response, without adding more output capacitors.

Opto-emulator benefits

Because of the significant bandwidth improvement that an opto-emulator provides over an optocoupler, designers can reduce the size of their output capacitor without sacrificing transient performance in isolated designs that are cost- and size-sensitive.

An opto-emulator also provides more reliability than an optocoupler by enabling secondary-side regulation in applications that could not use optocouplers before, such as automotive and space. With the increase in bandwidth, an opto-emulator can provide higher bandwidth for the overall loop of the power supply, leading to significantly better transient response without increasing the output capacitance. For existing designs, an opto-emulator’s pin-for-pin compatibility with most optocouplers allows for drop-in replacements, with only minor tweaks to the compensation network.

Sarmad Abedin has been a systems engineer at Texas Instruments since 2011. He works for the Power Design Services team in Dallas, TX. He has been designing custom power supplies for over 10 years, specializing in low-power AC/DC applications. He graduated from RIT in 2010 with a BS in Electrical Engineering.

Related Content


The post Power Tips #125: How an opto-emulator improves reliability and transient response for isolated DC/DC converters appeared first on EDN.

Conquer design challenges: Skills for power supplies

Mon, 01/29/2024 - 19:10

In the dynamic world of engineering design, the escalating requirements placed on power systems frequently give rise to intricate design challenges. The evolving landscape of DC power systems introduces complexities that can manifest as stumbling blocks in the design process. Fundamental skills related to power supplies play a crucial role in mitigating these challenges.

Today’s advanced DC power systems are not immune to design problems, and a solid foundation in power supply knowledge can prove instrumental in navigating and overcoming hurdles. Whether it’s discerning the intricacies of device under test (DUT) voltage or current, addressing unforeseen temperature fluctuations, or managing noise sensitivity, a fundamental understanding of power supplies empowers designers to identify and tackle the nuanced issues embedded within a power system.

Understanding constant voltage and constant current

One of the most important concepts for anyone using power supplies is understanding constant voltage (CV) and constant current (CC) operation. Engineers getting started with power supplies must know some basic rules about CV and CC. The output of a power supply can operate in either CV or CC mode depending on the voltage setting, the current limit setting, and the load resistance.

In scenarios where the load current remains low and the drawn current falls below the preset current limit, the power supply seamlessly transitions into CV mode. This mode is characterized by the power supply regulating the output voltage to maintain a constant value. In essence, the voltage becomes the focal point of control, ensuring stability, while the current dynamically adjusts based on the load requirements. This operational behavior is particularly advantageous when dealing with varying loads, as it allows the power supply to cater to diverse current demands while steadfastly maintaining a consistent voltage output.

In instances where the load current surges to higher levels, surpassing the predefined current setting, the power supply shifts into CC mode. This response involves the power supply imposing a cap on the current, restricting it to the pre-set value. Consequently, the power supply functions as a guardian, preventing the load from drawing excessive current.

In CC mode, the primary focus of regulation shifts to the current, ensuring it remains consistent and in line with the predetermined setting. Meanwhile, the voltage dynamically adjusts in response to the load’s requirements. This operational behavior is particularly crucial in scenarios where the load’s demands fluctuate, as it ensures a stable and controlled current output, preventing potential damage to both the power supply and the connected components. Understanding this interplay between voltage and current dynamics is essential for engineers and users to harness the full potential of power supplies, especially in applications with varying load conditions.

Most power supplies are designed such that they are optimized for CV operation. This means that the power supply will look at the voltage setting first and adjust all other secondary variables to achieve the programmed voltage. See Figure 1 for the operating locus of a CC/CV power supply; a short sketch of the mode arithmetic follows the figure.

Figure 1 The operating locus of a CC/CV power supply. Source: Keysight
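The mode selection boils down to comparing the current the load would draw at the programmed voltage against the programmed current limit. Here’s a minimal Python sketch of that logic for an ideal supply driving a resistive load (the function and variable names are mine, for illustration only):

def supply_state(v_set, i_limit, r_load):
    """Return (mode, v_out, i_out) for an ideal CV/CC supply and a resistive load."""
    i_cv = v_set / r_load                    # current the load would draw in CV mode
    if i_cv <= i_limit:
        return "CV", v_set, i_cv             # voltage regulated; current follows the load
    return "CC", i_limit * r_load, i_limit   # current clamped; voltage follows the load

print(supply_state(5.0, 1.0, 10.0))  # light load: ('CV', 5.0, 0.5)
print(supply_state(5.0, 1.0, 2.0))   # heavy load: ('CC', 2.0, 1.0)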

Boosting voltage or current

In instances where the demands of an application exceed the capabilities of a single power supply, a practical solution is to combine two or more power supplies strategically. This can be particularly useful when users need more voltage or current than a single power supply unit can deliver.

For scenarios demanding higher voltage, the method involves connecting the outputs of the power supplies in series. This arrangement effectively adds the individual voltage outputs, resulting in an aggregate voltage that meets the specified requirements. On the other hand, when higher current is required, connecting the power supply outputs in parallel proves advantageous. This configuration combines the current outputs, providing a cumulative current output that satisfies the application’s demands.

To achieve optimal results, it is crucial to set each power supply output independently. This ensures that the voltages or currents align harmoniously, summing up to the total desired value. By following these simple yet effective steps, users can harness the collective power of multiple power supplies, tailoring their outputs to meet the specific voltage and current requirements of the application.

For higher voltage, first set each output to the maximum desired current limit the load can safely handle. Then, equally distribute the total desired voltage to each power supply output. For example, if engineers are using three outputs, set each to one third the total desired voltage:

  1. Never exceed the floating voltage rating (output terminal isolation) of any of the outputs.
  2. Never subject any of the power supply outputs to a reverse voltage.
  3. Only connect outputs that have identical voltage and current ratings in series.

For higher current, equally distribute the total desired current limit to each power supply (a short sketch of both setpoint distributions follows this list):

  1. One output must operate in constant voltage (CV) mode and the other(s) in constant current (CC) mode.
  2. The output load must draw enough current to keep the CC output(s) in CC mode.
  3. Only connect outputs that have identical voltage and current ratings in parallel.
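Here’s a minimal Python sketch of the setpoint arithmetic just described. The 2% voltage margin used to push the non-master parallel outputs into CC mode is an illustrative assumption, not a vendor rule:

def series_setpoints(v_total, i_max, n):
    # Each series output gets 1/n of the total voltage and the full current limit.
    return [(v_total / n, i_max)] * n

def parallel_setpoints(v_load, i_total, n):
    # One CV master holds the load voltage; the others are set slightly
    # higher so the load pulls them into CC mode.
    i_each = i_total / n
    return [("CV", v_load, i_each)] + [("CC", v_load * 1.02, i_each)] * (n - 1)

print(series_setpoints(48.0, 2.0, 3))     # three 16 V outputs in series
print(parallel_setpoints(12.0, 30.0, 3))  # three 10 A outputs in parallel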

See Figure 2 for a visual representation of a series connection with remote sense to a load.

Figure 2 Series connection to a load with remote sense. Source: Keysight

In the parallel setup, the CV output determines the voltage at the load and across the CC outputs (Figure 3). The CV unit will only supply enough current to fulfill the total load demand.

Figure 3 Parallel connection to the load with remote sense; the CV output determines the voltage at the load and across the CC outputs. Source: Keysight

Dealing with unexpected temperature effects

Temperature fluctuations not only impact the behavior of DUTs but also exert a significant influence on the precision of measurement instruments. For example, on a chilly winter day, an examination of lithium-ion batteries at room temperature yielded unexpected results. Contrary to the user’s anticipation of a decrease, the voltage of the cells drifted upward over time.

This phenomenon was attributed to the nighttime drop in room temperature, which paradoxically led to an increase in cell voltage. This effect proved more pronounced than the anticipated decrease resulting from cell self-discharge during the day. It’s worth noting that the power supplies responsible for delivering power to these cells are also susceptible to temperature variations.

To accurately characterize the output voltage down to microvolts, it becomes imperative to account for temperature coefficients in the application of power. This adjustment ensures a more precise understanding of the voltage dynamics, accounting for the impact of temperature on both the DUTs and the measurement instruments.

The following is an example using a power supply precision module that features a low-voltage range. The test instrumentation specification table documents the valid temperature range at 23°C ±5°C after a 30-minute warm-up.

  1. To apply a temperature coefficient, engineers must treat it like an error term. Let’s assume that the operating temperature is 33°C, or 10°C above the calibration temperature of 23°C, and a voltage output of 5.0000 V.

Voltage programming temperature coefficient = ± (40 ppm + 70 μV) per °C

  2. To correct for the 10°C difference from the calibration temperature, engineers need to account for the difference between the operating temperature and the voltage range specification. The low-voltage range spec is valid at 23°C ±5°C, or up to 28°C, so the temperature coefficient applies over the 5°C difference between the operating temperature (33°C) and the top of the low-voltage range spec (28°C):

± (40 ppm * 5 V + 70 μV) * 5°C = 40 ppm * 5 V * 5°C + 70 μV * 5°C = 1.35 mV

  3. The temperature-induced error must be added to the programming error for the low-voltage range provided in the N6761A specification table:

± (0.016 % * 5 V + 1.5 mV) = 2.3 mV

  4. Therefore, the total error, programming plus temperature, will be:

± (1.35 mV + 2.3 mV) = ±3.65 mV

  5. That means the output voltage will be somewhere between 4.99635 V and 5.00365 V when attempting to set the voltage to 5.0000 V at an ambient temperature of 33°C. Since the 1.35-mV portion of this error is temperature-induced, this component of the error will change as the temperature changes, and the output of the power supply will drift. The measurement drift with temperature can be calculated using the supply’s temperature coefficient, as the short script after this list reproduces.
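The arithmetic above is easy to fold into a short Python script; the coefficient values are the ones quoted from the specification table, applied to the assumed 5.0000-V setting at 33°C:

v_set = 5.0                                   # programmed output, V
delta_t = 33 - 28                             # °C beyond the 23°C ±5°C spec band

temp_err = (40e-6 * v_set + 70e-6) * delta_t  # temperature coefficient term: 1.35 mV
prog_err = 0.016e-2 * v_set + 1.5e-3          # low-range programming error: 2.3 mV
total_err = temp_err + prog_err               # 3.65 mV

print(f"total error: ±{total_err * 1e3:.2f} mV")
print(f"output window: {v_set - total_err:.5f} V to {v_set + total_err:.5f} V")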

Dealing with noise-sensitive DUTs

If the DUT is sensitive to noise, engineers will want to do everything they can to minimize noise on the DC power input. The easiest thing users can do is use a low-noise power supply. But if one is not available, there are a couple of other things engineers can do.

The links between the power supply and the DUT are vulnerable to interference, particularly noise stemming from inductive or capacitive coupling. Numerous methods exist to mitigate this interference, but employing shielded two-wire cables for both load and sense connections stands out as the most effective solution. It is essential, however, to pay careful attention to the connection details.

For optimal noise reduction, connect the shield of these cables to earth ground at only one end, as illustrated in Figure 4.

Figure 4 To reduce noise connect the shield to earth ground only on one end of the cable. Source: Keysight

Avoid the temptation to ground the shield at both ends, as this practice can lead to the formation of ground loops, as depicted in Figure 5. These loops result from the disparity in potential between the supply ground and DUT ground.

Figure 5 Diagram where the shield is connected to ground at both ends resulting in a ground loop. Source: Keysight

The presence of a ground loop current can induce voltage on the cabling, manifesting as unwanted noise for your DUT. By adhering to the recommended practice of grounding the shield at a single end, you effectively minimize the risk of ground loops and ensure a cleaner, more interference-resistant connection between your power supply and the DUT.

Also, common-mode noise is generated when common-mode current flows from inside a power supply to earth ground and produces voltage on impedances to ground, including cable impedance. To minimize the effect of common-mode current, equalize the impedance to ground from the plus and minus output terminals on the power supply. Engineers should also equalize the impedance from the DUT plus and minus input terminals to ground. Use a common-mode choke in series with the output leads and a shunt capacitor from each lead to ground to accomplish this task.

Choosing the right power supply

Navigating the selection process for a power supply demands careful consideration of the specific requirements. Whether the need is for a basic power supply or one with more advanced features tailored to specific applications, choosing a power supply with excessive power capacity can create numerous challenges.

Common issues associated with opting for a power supply with too much power include increased output noise, difficulties in setting accurate current limits, and a compromise in meter accuracy. These challenges can be particularly daunting, but developing basic skills related to power supplies can significantly aid in overcoming these design obstacles.

By cultivating a foundational understanding of power supply principles, such as the nuances of CV and CC modes, engineers can effectively address issues related to noise, current limits, and meter accuracy. This underscores the importance of not only selecting an appropriate power supply but also ensuring that users possess the essential skills to troubleshoot and optimize the performance of the chosen power supply in their specific applications. Striking a balance between power capacity and application needs, while honing basic skills, is key to achieving a harmonious and effective power supply setup.

Andrew Herrera is an experienced product marketer in radio frequency and Internet of Things solutions. Andrew is the product marketing manager for RF test software at Keysight Technologies, leading Keysight’s PathWave 89600 vector signal analyzer, signal generation, and X-Series signal analyzer measurement applications. Andrew also leads the automation test solutions such as Keysight PathWave Measurements and PathWave Instrument Robotic Process Automation (RPA) software.

Related Content


The post Conquer design challenges: Skills for power supplies appeared first on EDN.

The Intel-UMC fab partnership: What you need to know

Mon, 01/29/2024 - 13:35

Intel joins hands with Taiwanese fab United Microelectronics Corp. (UMC) in a new twist in the continuously evolving and realigning semiconductor foundry business. What does Intel’s strategic collaboration with UMC mean, and what will these two companies gain from this tie-up? Before we delve into the motives of this semiconductor manufacturing partnership, below are the basic details of the foundry deal.

Intel and UMC will jointly develop a 12-nm process for high-growth markets such as mobile, communication infrastructure, and networking; production of chips at this 12-nm node will begin at Intel’s Fabs 12, 22, and 32 in Arizona in 2027. The 12-nm process node will be built on Intel’s FinFET transistor design, and the two companies will jointly share the investment.

Figure 1 UMC’s co-president Jason Wang calls this tie-up with Intel a step toward adding a Western footprint. Source: Intel

Besides fabrication technology, the two companies will jointly offer EDA tools, IP offerings, and process design kits (PDKs) to simplify the 12-nm deployment for chip vendors. It’s worth mentioning here that Intel’s three fabs in Arizona—already producing chips on 10-nm and 14-nm nodes—aim to leverage many of the tools for the planned 12-nm fabrication.

Intel claims that 90% of the tools are transferable between the 10-nm and 14-nm nodes; if Intel is able to employ these same tools in 12-nm chip fabrication, it will help reduce additional CapEx and maximize profits.

While the mutual gains these two companies will accomplish from this collaboration are somewhat apparent, it’s still important to understand how this partnership will benefit Intel and UMC, respectively. Let’s begin with Intel, which has been plotting to establish Intel Foundry Services (IFS) as a major manufacturing operation for fabless semiconductor companies.

What Intel wants

It’s important to note that in Intel’s chip manufacturing labyrinth, IFS has access to three process nodes. First, the Intel-16 node facilitates 16-nm chip manufacturing for cost-conscious chip vendors designing inexpensive low-power products. Second is Intel 3, which can produce cutting-edge nodes using extreme ultraviolet (EUV) lithography but sticks to tried-and-tested FinFET transistors.

Third, Intel 18A is the cutting-edge process node that focuses on performance and transistor density while employing gate-all-around (GAA) RibbonFET transistors and PowerVia backside power delivery technology. Beyond these three fab offerings, IFS needs to expand its portfolio to serve a variety of applications. On the other hand, its parent company, Intel, will have a lot of free capacity while it moves its in-house CPUs to advanced process nodes.

So, while Intel moves the production of its cutting-edge process nodes like 20A and 18A to other fabs, Fabs 12, 22, and 32 in Arizona will be free to produce chips on a variety of legacy and low-cost nodes. Fabs 12, 22, and 32 are currently producing chips on Intel’s 7-nm, 10-nm, 14-nm, and 22-nm nodes.

Figure 2 IFS chief Stuart Pann calls strategic collaboration with UMC an important step toward its goal of becoming the world’s second-largest foundry by 2030. Source: Intel

More importantly, IFS will get access to UMC’s large customer base and can utilize its manufacturing expertise in areas like RF and wireless at Intel’s depreciated and underutilized fabs. Here, it’s worth mentioning Intel’s similar arrangement with Tower Semiconductor; IFS will gain from Tower’s fab relationships and generate revenues while fabricating 65-nm chips for Tower at its fully depreciated Fab 11X.

IFS is competing head-to-head with established players such as TSMC and Samsung for cutting-edge smaller nodes. Now, such tie-ups with entrenched fab players like UMC and Tower enable IFS to cater to mature fabrication nodes, something Intel hasn’t done while building CPUs on the latest manufacturing processes. Moreover, fabricating chips at mature nodes will allow IFS to open a new front against GlobalFoundries.

UMC’s takeaway

UMC, which formed the pure-play fab duo to spearhead the fabless movement in the 1990s, steadily passed the cutting-edge process node baton to TSMC, eventually resorting to mature fabrication processes. It now boasts more than 400 semiconductor firms as its customers.

Figure 3 The partnership allows UMC to expand capacity and market reach without making heavy capital investments. Source: Intel

The strategic collaboration with IFS will allow the Hsinchu, Taiwan-based fab to enhance its existing relationships with fabless clients in the United States and better compete with TSMC in mature nodes. Beyond its Hsinchu neighbor TSMC, this hook-up with Intel will also enable UMC to better compete amid China’s rapid fab capacity buildup.

It’ll also give UMC access to 12-nm process technology without building a new manufacturing site or procuring advanced tools. However, UMC has vowed to install some of its specialized tools at Fabs 12, 22, and 32 in Arizona. The most advanced node that UMC currently has in its arsenal is 14 nm; by jointly developing a 12-nm node with Intel, UMC will expand its know-how on smaller chip fabrication processes. It’ll also open the door for the Taiwanese fab to nodes below 12 nm in the future.

The new fab order

The semiconductor manufacturing business has continuously evolved since 1987, when a former TI executive, Morris Chang, founded the first pure-play foundry with major funding from Philips Electronics. UMC soon joined the fray, and TSMC and UMC became synonymous with the fabless semiconductor model.

Then, Intel, producing its CPUs at the latest process nodes and quickly moving to new chip manufacturing technologies, decided to claim its share in the fab business when it launched IFS in 2021. The technology and business merits of IFS aside, one thing is clear. The fab business has been constantly realigning since then.

That’s partly because Intel is the largest IDM in the semiconductor world. However, strategic deals with Tower and UMC also turn it into an astute fab player. The arrangement with UMC is a case in point. It will allow Intel to better utilize its large chip fabrication capacity in the United States while creating a regionally diversified and resilient supply chain.

More importantly, Intel will be doing it without making heavy capital investments. The same is true for UMC, which will gain much-needed expertise in FinFET manufacturing technologies as well as strategic access to semiconductor clients in North America.

Related Content


The post The Intel-UMC fab partnership: What you need to know appeared first on EDN.

RFICs improve in-vehicle communications

Thu, 01/25/2024 - 21:50

Two linear power amplifiers (PAs) and two low-noise amplifiers (LNAs) from Guerrilla RF serve as signal boosters to enhance in-cabin cellular signals. Qualified to AEC-Q100 Grade 2 standards, the GRF5507W and GRF5517W PAs and the GRF2106W and GRF2133W LNAs operate over a temperature range of -40°C to +105°C.

The GRF5507W power amp has a tuning range of 0.7 GHz to 0.8 GHz, while the GRF5517W spans 1.7 GHz to 1.8 GHz. Each device delivers up to 23 dBm of output power with adjacent channel leakage ratio (ACLR) performance of better than -45 dBc. Further, this ACLR figure is achieved without the aid of supplemental linearization schemes, like digital pre-distortion (DPD). According to the manufacturer, the ability to beat the -45-dBc ACLR metric without DPD helps meet the stringent size, cost, and power dissipation requirements of cellular compensator applications.

The GRF2106W low-noise amplifier covers a tuning range of 0.1 GHz to 4.2 GHz, and the GRF2133W spans 0.1 GHz to 2.7 GHz. At 2.45 GHz (3.3 V, 15 mA), the GRF2106W provides a nominal gain of 21.5 dB and a noise figure of 0.8 dB. A higher gain level of 28 dB is available with the GRF2133W, along with an even lower noise figure of 0.6 dB at 1.95 GHz (5 V, 60 mA).

Prices for the GRF5507W and GRF5517W PAs in 16-pin QFN packages start at $1.54 in lots of 10,000 units. Prices for the GRF2106W and GRF2133W LNAs in 6-pin DFN packages start at $0.62 and $0.83, respectively, in lots of 10,000 units. Samples and evaluation boards are available for all four components.

GRF5507W product page

GRF5517W product page

GRF2106W product page

GRF2133W product page

Guerrilla RF 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post RFICs improve in-vehicle communications appeared first on EDN.

Bluetooth LE SoC slashes power consumption

Thu, 01/25/2024 - 21:49

Renesas offers the DA14592 Bluetooth LE SoC, which the company says is its lowest power and smallest multicore Bluetooth LE device in its class. The device balances tradeoffs between on-chip memory and SoC die size to accommodate a broad range of applications, including crowd-sourced locationing, connected medical devices, metering systems, and human interface devices.

Along with an Arm Cortex-M33 CPU, the DA14592 features a software-configurable Bluetooth LE MAC engine based on an Arm Cortex-M0+. A new low-power mode enables a radio transmit current of 2.3 mA at 0 dBm and a radio receive current of 1.2 mA.

The DA14592 also supports a hibernation current of just 90 nA, which helps to extend the shelf-life of end products shipped with the battery connected. For products requiring extensive processing, the device provides an ultra-low active current of 34 µA/MHz.

Requiring only six external components, the DA14592 lowers the engineering BOM. Packaging options for the device include WLCSP (3.32×2.48 mm) and FCQFN (5.1×4.3 mm). The SoC’s reduced BOM, coupled with its small package size, helps designers minimize product footprint. Other SoC features include a sigma-delta ADC, up to 32 GPIOs, and a QSPI supporting external flash or RAM.

The DA14592 Bluetooth LE SoC is currently in mass production. Renesas also offers the DA14592MOD, which integrates all of the external components required to implement a Bluetooth LE radio into a compact module. The DA14592MOD module is targeted for worldwide regulatory certification in 2Q 2024.

DA14592 product page 

Renesas Electronics  

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Bluetooth LE SoC slashes power consumption appeared first on EDN.
