EDN Network

Voice of the Engineer

Cadence debuts AI thermal design platform

Thu, 02/01/2024 - 19:45

Cadence Celsius Studio, an AI-enabled thermal design and analysis platform for electronic systems, aims to unify electrical and mechanical CAD. The system addresses thermal analysis and thermal stress for 2.5D and 3D ICs and IC packaging, as well as electronics cooling for PCBs and complete electronic assemblies.

With Celsius Studio, electrical and mechanical/thermal engineers can concurrently design, analyze, and optimize product performance within a single unified platform. This eliminates the need for geometry simplification, manipulation, and/or translation. Built-in AI technology enables fast and efficient exploration of the full design space to converge on the optimal design.

The multiphysics thermal platform can simulate large systems with detailed granularity for any object of interest, including chip, package, PCB, fan, or enclosure. It combines finite element method (FEM) and computational fluid dynamics (CFD) engines to achieve complete system-level thermal analysis. Celsius Studio supports all ECAD and MCAD file formats and seamlessly integrates with Cadence IC, packaging, PCB, and microwave design platforms.

Customers seeking to gain early access to Celsius Studio can contact Cadence using the product page link below.

Celsius Studio product page

Cadence Design Systems 


“Sub-zero” op-amp regulates charge pump inverter

Thu, 02/01/2024 - 16:55

Avoiding op-amp output saturation error by keeping op-amp outputs “live” and below zero volts is a job where a few milliamps of regulated negative rail at a volt, or even a fraction of one volt, can be key to achieving accurate analog performance. The need for voltage regulation arises because the sum of the positive and negative rail voltages mustn’t exceed the recommended limits of circuit components (e.g., only 5.5 V for the TLV9064 op-amp shown in Figure 1). Unregulated inverters may have the potential (no pun!) to overvoltage sensitive parts and therefore may not be suitable.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the circuit: a simple regulated charge pump inverter based on the venerable and versatile HC4053 triple SPDT CMOS switch and most any low-power RRIO op-amp. It efficiently and accurately inverts a positive voltage rail, generating a programmable negative output that’s regulated to a constant fraction of the positive rail. With Vin = 5 V, its output is good for currents from zero up to nearly 20 mA, the upper limit depending on the Vee level chosen by the R1:R2 ratio. It’s also cheap, with a cost that’s competitive with less versatile devices like the LM7705. And it’s almost unique in being programmable for outputs as near zero as you like, simply set by the choice of R2.

But enough sales pitch.  Here’s how it works.

Figure 1 U1 makes an efficient charge pump voltage inverter with comparator op-amp A1 providing programmable regulation.

U1a and U1b act in combination with C2 to form an inverting flying-capacitor pump that transfers negative charge to filter capacitor C3, maintaining a constant Vee output controlled by A1. Charge pumping occurs in a cycle that begins with C2 being charged to V+ via U1a, then completes by partially discharging C2 into C3 via U1b. Pump frequency is roughly 100 kHz under control of the U1c Schmitt-trigger-style oscillator, so that a transfer occurs every 10 µs. Note the positive feedback around U1c via R3 and the negative feedback via R4, R5, and C1.

Figure 2 shows performance under load with the R2:R1 ratio shown.

Figure 2 Output voltage and current conversion efficiency vs output current for +Vin = 5 V.

No-load current draw is less than 1 mA, divided between U1 and A1, with A1 taking the lion’s share. If Vee is lightly loaded, it can approach -V+ until A1’s regulation setpoint (Vee = – R2/R1 * V+) kicks in. Under load, Max Vee will decline at ~160 mV/mA but Vee remains rock solid so long as the Vee setpoint is at least slightly less negative than Max Vee.

A word about “bootstrapping”: Switch U1b needs to handle negative voltages but the HC4053 datasheet tells us this can’t work unless the chip is supplied with a negative input at pin 7. So U1’s first task is to supply (bootstrap) a negative supply for itself by the connection of pin 7 to Vee.

“Sub-zero” comparator op-amp A1 maintains Vee = – R2/R1 * V+ via negative feedback through R6 to U1 pin 6 Enable. When Vee is more positive than the setpoint, A1 pulls pin 6 low, enabling the charge pump U1c oscillator and the charging of C3. Contrariwise, Vee at setpoint causes A1 to drive pin 6 high, disabling the pump. When pin 6 is high, all U1’s switches open, isolating C2 and conserving residual charge for subsequent pump cycles. R6 limits pin 6 current when Vee < -0.5 V.
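
To make the setpoint arithmetic concrete, here’s a minimal worked example of the programming equation. The R1 and R2 values below are hypothetical picks for illustration, not values taken from the schematic:

```python
def vee_setpoint(v_plus, r1, r2):
    """Regulation setpoint of the Figure 1 charge pump: Vee = -(R2/R1) * V+."""
    return -(r2 / r1) * v_plus

# Illustrative values only: a 10:1 ratio yields the -500 mV rail of Figure 3.
V_PLUS = 5.0          # positive rail, volts
R1, R2 = 100e3, 10e3  # hypothetical resistor values, ohms

print(vee_setpoint(V_PLUS, R1, R2))  # -> -0.5 (volts)
```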

Figure 3 shows how a -500-mV sub-zero negative rail can enable typical low-voltage op-amps (e.g., TLV900x) to avoid saturation at zero over the full span of rated operating temperature for output currents up to 10 mA and beyond. Less voltage or less current capability might compromise accurate analog performance.

Figure 3 Vee = -500 mV is ideal for avoiding amplifier saturation without overvoltaging LV op-amps.

U1’s switches are break-before-make, which helps both with pump efficiency and with minimizing Vee noise, but C3 should be a low ESR type to keep the 100 kHz ripple low (about 1 mVpp @ Iee = 10 mA). You can also add a low inductance ceramic in parallel with C3 if high frequency spikes are a concern.
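
As a rough cross-check on that ripple figure: between pump transfers, C3 alone supplies the load, so the droop per 10-µs cycle is approximately Iee × Δt / C3. A back-of-envelope sketch, assuming a C3 on the order of 100 µF (the article doesn’t state its value):

```python
# Worst-case droop between pump transfers: dV = Iee * dt / C3.
I_EE = 10e-3   # load current, amps
DT = 10e-6     # pump period at ~100 kHz, seconds
C3 = 100e-6    # assumed filter capacitance, farads (not given in the article)

ripple = I_EE * DT / C3
print(f"{ripple * 1e3:.2f} mVpp")  # -> 1.00 mVpp, consistent with the ~1 mVpp quoted
```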

Footnote: I’ve relied on the 4053 in scores of designs over more than a score of years, but this circuit is the very first time I ever found a practical use for pin 6 (-ENABLE). Learn something new every day!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Prosumer and professional cameras: High video quality, but a connectivity vulnerability

Wed, 01/31/2024 - 18:14

As I’ve recently mentioned a few times, I’m ramping up my understanding and skill set on a couple of Blackmagic Design Pocket Cinema Cameras (BMPCCs), both 6K in maximum captured resolution: a first-generation model based on the BMPCC 4K and using Canon LP-E6 batteries:

and the second-generation successor with a redesigned body derived from the original BMPCC 6K Pro. It uses higher-capacity Sony NP-F570 batteries, has an integrated touchscreen LCD that’s position-adjustable, and is compatible with an optional electronic viewfinder (which I also own):

I’m really enjoying playing with them both so far, steep learning curve aside, but as I use them, I can’t shake the feeling that I’ve got ticking time bombs in my hands. As I’ve also mentioned recently, cameras like these are commonly used in conjunction with external “field” monitors, whether wirelessly- or (for purposes of this writeup’s topic) wired-connected to the camera:

And as I’ve also recently mentioned, it’s common to power cameras like these from a beefy external battery pack such as this 155 Wh one from SmallRig:

or a smaller-capacity sibling that’s airplane-travel amenable:

Such supplemental power sources commonly offer multiple outputs, directly and/or via a battery plate intermediary:

enabling you to fuel not only the camera but also the field monitor, a nearby illumination source, a standalone microphone preamp, an external high-performance SSD or hard drive, and the like. Therein lies the crux of the issue I’m alluding to. Check out, to start, this Reddit thread.

The gist of the issue, I’ve gathered (reader insights are also welcomed), is that if you “hot-socket” either the camera or the display (connecting either device’s power or the common HDMI cable) while the other device is already powered up, there’s a finite chance that the power supply circuit loop (specifically the startup spike) will route through the HDMI connection instead, frying the HDMI transceiver inside the camera and/or display (and maybe other circuitry as well). The issue seems most common when the camera and display are fed by the same power source without sharing a common ground, and when they’re running on different supply voltages, though it isn’t exclusive to that scenario.

I ran the situation by my technical contact at Blackmagic after stumbling across it online, and here’s what he had to say:

Our general recommendation is to…

  • Power down all the devices used if they have internal or built-in batteries
  • Connect the external power sources to all devices
  • Connect the HDMI/SDI cable between the devices
  • Power on the devices

Sounds reasonable at first glance, doesn’t it? But what if you’re a professional with clients that pay by the hour and want to keep their costs at a minimum, and you want to keep them happy, or you’re juggling multiple clients in a day? Or if you’re just an imperfectly multitasking prosumer (aka power user) like me? In the rush of the moment, you might forget to power the camera off before plugging in a field monitor, for example. And then…zap.

My initial brainstorm on a solution was to switch from conventional copper-based HDMI cables to optical ones. There are two problems with this idea, though. The first: optical cables tend to be bulkier than their conventional counterparts, which is particularly problematic given the short cable runs used with cameras and the general desire for svelteness, both again exemplified by SmallRig products:

The other issue, of course, is that optical HDMI cables aren’t completely optical. Quoting from a CableMatters blog post on the topic:

A standard HDMI cable is made up of several twisted pairs of copper wiring, insulated and protected with shielding and silicon wraps. A fiber optic HDMI cable, on the other hand, does away with the central twisted copper pair, but still retains some [copper strands]. At its core are four glass filaments which are encased in a protective coating. Those glass strands transmit the data as pulses of light, instead of electricity. Surrounding those glass fibers are seven to nine twisted copper pairs that handle the power supply for the cable, one for Consumer Electronics Control (CEC), two for sound return (ARC and eARC), and one set for a Display Data Channel (DDC) signal.

My Blackmagic contact also wisely made the following observations, by the way:

It may not be fair to say that Blackmagic Pocket Cinema Cameras are especially susceptible to issues that could affect any camera. Any camera used in the same situation would be affected equally. (Hence the references to Arri camera white papers in the sources you quoted)

He’s spot-on. This isn’t a Blackmagic-specific issue. Nor is it an HDMI-specific issue, hence my earlier allusion to SDI (Serial Digital Interface), which also comes in copper and fiber variants. Here’s a Wikipedia excerpt, for those not already familiar with the term (and the technology).

Serial digital interface (SDI) is a family of digital video interfaces first standardized by SMPTE (The Society of Motion Picture and Television Engineers) in 1989…These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio and time code) within television facilities; they can also be used for packetized data. SDI is used to connect together different pieces of equipment such as recorders, monitors, PCs and vision mixers.

In fact, a thorough and otherwise excellent white paper on the big-picture topic, which I commend to your attention, showcases SDI (vs HDMI) and Arri cameras (vs Blackmagic ones).

To wit, and exemplifying my longstanding theory that it’s possible to find and buy pretty much anything (legal, at least) on eBay, I recently stumbled across (and of course acted on and purchased, for less than $40 total including tax and shipping) a posting for the battery-acid-damaged motherboard of a Blackmagic Production Camera 4K, which dates from 2014. Here are some stock images of the camera standalone:

Rigged out:

And in action:

Now for our mini-teardown patient. I’ll start out with a side view, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Compare this to the earlier stock shot of the camera and you’ll quickly realize that the penny’s location corresponds to the top edge of the camera in its operating orientation. Right-to-left (or, if you prefer, top-to-bottom), the connections are (copy-and-pasting from the user manual, with additional editorializing by yours truly in brackets):

  • LANC [the Sony-championed Logic Application Control Bus System or Local Application Control Bus System] REMOTE: The 2.5mm stereo jack for LANC remote control supports record start and stop, and iris and focus control on [Canon] EF [lens] mount models.
  • HEADPHONES: 3.5 mm [1/8”] stereo headphone jack connection.
  • AUDIO IN: 2 x 1/4 inch [6.35 mm] balanced TRS phono jacks for mic or line level audio.
  • SDI OUT: SDI output for connecting to a switcher [field monitor] or to DaVinci Resolve via capture device for live grading.
  • THUNDERBOLT CONNECTION: Blackmagic Cinema Camera outputs 10-bit uncompressed 1080p HD. Production Camera 4K also outputs compressed Ultra HD 4K. Use the Thunderbolt connection for HD UltraScope waveform monitoring and streaming video to a Thunderbolt compatible computer.
  • POWER: 12 – 30V power input for power supply and battery charging.

Now for an overview shot of the front of the main system PCB I bought:

After taking this first set of photos, I realized that I’d oriented the PCB 180° from how it would be when installed in the camera in its operating orientation (remember, the power input is at the bottom). This explains why the U.S. penny is upside-down in the pictures; I re-rotated the images in more intuitive-to-you orientations before saving them!

Speaking of which, above and to the right of the U.S. penny is the battery acid damage I mentioned earlier; it would make sense to have the battery near the power input, after all. One unique thing about this camera versus all the ones I own is that the battery is embedded, not user-removable (I wonder how much Blackmagic charged as a service fee to replace it after heavy use had led to the demise of the original?).

Another thing to keep in mind is that the not-shown image sensor is in front of this side of the PCB. Here’s another stock image which shows (among other things) the Super 35-sized image sensor peeking through the lens mount hole:

My guess would be that the long vertical connector on the left side of the PCB, to the right of the grey square thing I’ll get to shortly, mates to a daughter card containing the image sensor.

I bet that many of you had the same thought I did when I first reviewed this side of the PCB…holy cow, look at all those chips! Right? Let’s zoom in a bit for a closer inspection:

This is the left half. Again, note the vertical connector and the mysterious grey square to the left of it (keep holding that thought; I promise I’ll do a reveal shortly!). Both above and below it are Samsung K4B4G1646B-HCK0 4-Gbit (256 Mbit × 16) DDR3 SDRAMs, four total, for 2 GBytes of total system RAM. I’m betting that, among other things, the RAM array temporarily holds each video frame’s data streamed off the global shutter image sensor (FYI, I plan to publish an in-depth tutorial on global vs rolling shutter sensors, along with other differentiators, in EDN soon!) for in-camera processing purposes prior to SSD storage.

And here’s the right half:

Wow, look at all that acid damage! I’m guessing the battery either leaked due to old age or exploded due to excessive applied charging voltage. Other theories, readers?

I realize I’ve so far skipped over a bunch of potentially interesting ICs. And have I mentioned that mysterious grey square yet? Let’s return to the left side, this time zoomed in even more (and ditching the penny) and dividing the full sequence into thirds. That grey patch is thermal tape, and it peeled right off the IC below it (here’s its adhesive underside):

Exposing to view…an FPGA!

Specifically, it’s a Xilinx (now AMD) Kintex 7 XC7K160T. I’d long suspected Blackmagic based its cameras on programmable logic vs an ASIC-based SoC, considering their:

  • Modest production volumes versus consumer camcorders
  • High-performance requirements
  • High functionality, therefore elaborate connectivity requirements, and
  • Fairly short operating time between battery charges, implying high power consumption.

The only thing that surprised me was that Blackmagic had gone with a classic FPGA versus one with an embedded “hard” CPU core, such as Xilinx-now-AMD’s Arm-based Zynq-7000 family. That said, I’d be willing to bet that there’s still a MicroBlaze “soft” CPU core implemented inside.

Other ICs of note in this view include, at the bottom left corner, a Cypress Semiconductor (now Infineon) CY7C68013A USB 2.0 controller, to the right of and below a mini-USB connector which is exposed to the outside world via the SSD compartment and finds use for firmware updates:

In the lower right corner is the firmware chip, a Spansion (also now Infineon) S25FL256S 256 Mbit flash memory with a SPI interface. And along the right side, to the right of that long tall connector I’ve already mentioned, is another Cypress (now Infineon) chip, the CY24293 dual-output PCI Express clock generator. I’m guessing that’s a PCIe 1.0 connector, then?

Now for the middle segment:

Interesting (at least to me) components here that I haven’t already mentioned include the diminutive coin cell battery in the upper left, surrounded on three sides by LM3100 voltage regulators (I “think” originally from National Semiconductor, now owned by Texas Instruments…there are at least four more LM3100s, along with two LM3102s, that I can count in various locales on the board). Power generation and regulation are obviously a key focus of this segment of the circuitry. That all said, toward the center is another Xilinx-now-AMD programmable logic chip, this one an XC9572XL CPLD. Also note the four conductor strips at top, jointly labeled JT3 (and I’m guessing used for testing).

Finally, the right side:

Connectivity dominates the landscape here, along with acid damage (it gets uglier the closer you get to it, doesn’t it?). Note the speaker and microphone connectors at top. And toward the middle, alongside the dual balanced audio input plugs, are two Texas Instruments TLV320AIC3101 low-power stereo audio codecs; in-between them is a National Semiconductor-now-Texas Instruments L49743 audio op amp.

Last, but not least, let’s look at the other side of the PCB:

It’s comparatively unremarkable from an IC standpoint, aside from the oddly unpopulated J14 and U19 sites at the top. What it lacks in chip excitement (unless you’re into surface-mount passives, I guess), it makes up for with connector curiosity.

On the left side (I’d oriented the PCB correctly straightaway this time, therefore the non-upside-down Abraham Lincoln on the penny):

there’s first a flex PCB connector up top (J12) which, perhaps obviously given its labeling, is intended for the LCD on the camera’s backside (but not its integrated touch interface…keep reading). In the middle is, I believe, the 2.5” SATA connector for the SSD. And on the bottom edge are, left to right, the connectors for the battery, the cable that runs to the electrical connectors on the lens mount (I’m guessing here based on the “EF POGO” phrase) and a Peltier cooler. Here’s a Wikipedia excerpt on the latter, for those not already familiar with the concept:

Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two different types of materials. A Peltier cooler, heater, or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other, with consumption of electrical energy, depending on the direction of the current. Such an instrument is also called a Peltier device, Peltier heat pump, solid state refrigerator, or thermoelectric cooler (TEC) and occasionally a thermoelectric battery.

Also note the two four-pad conductor clusters, one up top and the other lower down, this time (versus the earlier-mentioned JT3) unlabeled and on only one side of the board. And what’s under that tape? Glad you asked:

And now for the other (right) side:

Oodles o’passives under the FPGA, as previously noted, plus a few more connectors that I haven’t already mentioned. On the top edge are ones for the back panel touchscreen and the up-front record button, while along the bottom edge (again, left to right) are ones for the additional (back panel, this time) interface buttons and a fan. Yes, this camera contains both a Peltier cooler and a fan!

That’s “all” I’ve got for you today. I welcome any reader thoughts on the upfront HDMI/SDI connectivity issue, along with anything from the subsequent mini-teardown, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


HBM memory chips: The unsung hero of the AI revolution

Wed, 01/31/2024 - 12:28

Memory chips like DRAMs, long subject to cyclical trends, are now eyeing a more stable and steady market: artificial intelligence (AI). Take the case of SK hynix, the world’s second-largest supplier of memory chips. According to its chief financial officer, Kim Woo-hyun, the company is ready to grow into a total AI memory provider by leading changes and presenting customized solutions.

The South Korean chipmaker has been successfully pairing its high-bandwidth memory (HBM) devices with Nvidia’s H100 graphics processing units (GPUs) and others for processing vast amounts of data in generative AI. Large language models (LLMs) like ChatGPT increasingly demand high-performance memory chips to enable generative AI models to store details from past conversations and user preferences to generate human-like responses.

Figure 1 SK hynix is consolidating its HBM capabilities to stay ahead of the curve in AI memory space.

In fact, AI companies are complaining that they can’t get enough memory chips. OpenAI CEO Sam Altman recently visited South Korea, where he met senior executives from SK hynix and Samsung, the world’s two largest memory chip suppliers, followed by Micron of the United States. OpenAI’s ChatGPT technology has been vital in spurring demand for processors and memory chips running AI applications.

SK hynix’s HBM edge

SK hynix’s lucky break in the AI realm came when it surpassed Samsung by launching the first HBM device in 2015 and gained a massive head start in serving GPUs for high-speed computing applications like gaming cards. HBM vertically interconnects multiple DRAM chips to dramatically increase data processing speed compared with earlier DRAM products.

Not surprisingly, therefore, these memory devices have been widely used to power generative AI systems running on high-performance computing hardware. Case in point: SK hynix’s sales of HBM3 chips increased more than fivefold in 2023 compared to a year earlier. A Digitimes report claims that Nvidia has paid SK hynix and Micron advance sums ranging from $540 million to $770 million to secure the supply of HBM memory chips for its GPU offerings.

SK hynix plans to proceed with the mass production of the next version of these memory devices—HBM3E—while also carrying out the development of next-generation memory chips called HBM4. According to reports published in the Korean media, Nvidia plans to pair its H200 and B100 GPUs with six and eight HBM3E modules, respectively. HBM3E, which significantly improves speed compared to HBM3, can process data up to 1.15 terabytes per second.

Figure 2 SK hynix is expected to begin mass production of HBM3E in the first half of 2024.

The Korean memory supplier calls HBM3E an AI memory product while claiming technological leadership in this space. While both Samsung and Micron are known to have their HBM3E devices ready and in the certification process at AI powerhouses like Nvidia, SK hynix seems to be a step ahead of its memory rivals. Take, for example, HBM4, currently under development at SK hynix; it’s expected to be ready for commercial launch in 2025.

What’s especially notable about HBM4 is its ability to stack memory directly on processors, eliminating interposers altogether. Currently, HBM stacks integrate 8, 12, or 16 memory devices next to CPUs or GPUs, connected to those processors through an interposer. Integrating memory directly onto processors will change how chips are designed and fabricated.

An AI memory company

Industry analysts also see SK hynix as the key beneficiary of the AI-centric memory upcycle because it’s a pure-play memory company, unlike its archrival Samsung. It’s worth noting that Samsung is also heavily investing in AI research and development to bolster its memory offerings.

AI does require a lot of memory, and it’s no surprise that South Korea, home to the top two memory suppliers, aspires to become an AI powerhouse. SK hynix, for its part, has already demonstrated its relevance in designs for AI servers and on-device AI adoption.

While talking about memory’s crucial role in generative AI at CES 2024 in Las Vegas, SK hynix CEO Kwak Noh-Jung vowed to double the company’s market cap in three years. To that end, the company is seeking to become a total AI memory provider while pursuing a fast turnaround with high-value HBM products.


Power Tips #125: How an opto-emulator improves reliability and transient response for isolated DC/DC converters

Tue, 01/30/2024 - 17:40

In high-voltage power-supply designs, safety concerns require isolating the high-voltage input from the low-voltage output. Designers typically use magnetic isolation in a transformer for power transfer, while an optocoupler provides optical isolation for signal feedback.

One of the main drawbacks of optocouplers in isolated power supplies is their reliability. The use of an LED in traditional optocouplers to transmit signals across the isolation barrier leads to wide part-to-part variation in the current transfer ratio over temperature, forward current, and operating time. Optocouplers are also lacking in terms of isolation performance, since they often use weak insulation materials such as epoxies or sometimes just an air gap.

A purely silicon-based device that emulates the behavior of an optocoupler such as the Texas Instruments (TI) ISOM8110 remedies these issues since it removes the LED component, uses a resilient isolation material such as silicon dioxide, and is certified and tested under a much more stringent standard [International Electrotechnical Commission (IEC) 60747-17] compared to the IEC 60747-5-5 optocoupler standard (see this application note for more details).

An optocoupler’s lack of reliability over time and temperature has meant that many sectors, such as automotive and space, have had to rely on primary-side regulation or other means to regulate the output. An opto-emulator contributes to improved reliability and also provides substantial improvements in transient and loop response without increasing the output filter.

Typically, the limiting factor in the bandwidth of an isolated power supply is the bandwidth of the optocoupler. This bandwidth is limited by the optocoupler pole, formed from its intrinsic parasitic capacitance and the output bias resistor. Using an opto-emulator eliminates this pole, which leads to higher bandwidth for the entire system without any changes to the output filter. Figure 1 and Figure 2 show the frequency response of an isolated flyback design tested with an optocoupler and opto-emulator, respectively.

Figure 1 Total bandwidth of an isolated power supply using the TCMT1107 optocoupler. Source: Texas Instruments

Figure 2 Total bandwidth of an isolated power supply using the ISOM8110 opto-emulator. Source: Texas Instruments
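
To see numerically why this pole limits bandwidth, recall that a bias resistor R across a parasitic capacitance C forms a single pole at f = 1/(2πRC). Below is a minimal sketch; the R and C values are assumptions for illustration, not figures from any datasheet:

```python
import math

def pole_frequency(r_bias, c_parasitic):
    """Corner frequency of the single pole formed by the output bias
    resistor and the optocoupler's parasitic capacitance."""
    return 1.0 / (2.0 * math.pi * r_bias * c_parasitic)

# Representative values only; actual optocoupler parasitics vary widely.
R_BIAS = 10e3  # output bias resistor, ohms (assumed)
C_PAR = 2e-9   # effective parasitic capacitance, farads (assumed)

print(f"{pole_frequency(R_BIAS, C_PAR) / 1e3:.1f} kHz")  # -> 8.0 kHz
```

Values in that neighborhood put the pole in the single-digit-kHz range, in the same ballpark as the 8.6-kHz crossover measured for the optocoupler design.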

The target for both designs was to increase the overall bandwidth while still maintaining 60 degrees of phase margin and 10 dB of gain margin. Table 1 lists the side-by-side results.

 

                        Optocoupler    Opto-emulator
Bandwidth (kHz)         8.6            38.2
Phase margin (degrees)  60.2           67.4
Gain margin (dB)        18.7           11.62

Table 1 Optocoupler versus opto-emulator frequency response results.

The opto-emulator’s increased bandwidth yields a more than fourfold improvement in the overall bandwidth of the design (8.6 kHz to 38.2 kHz) while maintaining phase and gain margins. Figure 3 highlights the changes made to the compensation network of the opto-emulator board versus the optocoupler board. As you can see, these changes are minimal and only require changing a total of three passive components. Another benefit of the opto-emulator is that it is pin-for-pin compatible with most optocouplers, so it doesn’t require a new layout for existing designs.

Figure 3 Schematic changes made to the compensation network of the opto-emulator board versus the optocoupler board. Source: Texas Instruments

Only the compensation components around the TL431 shunt voltage regulator were modified from one design to the other. Other than C19, C22 and R20, the rest of the design was identical, including the power-stage components, which include the output capacitance.

Because of the quadruple increase in the bandwidth, we were able to improve transient response significantly as well, without adding any more capacitance to the output. Figure 4 and Figure 5 show the transient response of the optocoupler and opto-emulator designs, respectively.

Figure 4 The transient response for the optocoupler design. Source: Texas Instruments

Figure 5 The transient response for the opto-emulator design showing a greater than 50% reduction in overall transient response. Source: Texas Instruments

The load step and the slew rate were the same in both tests. The load-step response went from –1.04 V in the optocoupler to –360 mV in the opto-emulator, and the load-dump response decreased from 840 mV to 260 mV. This is a > 50% reduction in the overall transient response, without adding more output capacitors.
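
Running the arithmetic on the numbers reported above confirms both claims:

```python
# Ratios computed directly from the reported measurements.
bw_opto, bw_emu = 8.6, 38.2         # crossover frequency, kHz
step_opto, step_emu = 1.04, 0.360   # load-step deviation, volts
dump_opto, dump_emu = 0.840, 0.260  # load-dump deviation, volts

print(f"bandwidth gain:      {bw_emu / bw_opto:.1f}x")         # ~4.4x
print(f"load-step reduction: {1 - step_emu / step_opto:.0%}")  # ~65%
print(f"load-dump reduction: {1 - dump_emu / dump_opto:.0%}")  # ~69%
```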

Opto-emulator benefits

Because of the significant bandwidth improvement that an opto-emulator provides over an optocoupler, designers can reduce the size of their output capacitor without sacrificing transient performance in isolated designs that are cost- and size-sensitive.

An opto-emulator also provides more reliability than an optocoupler by enabling secondary-side regulation in applications that could not use optocouplers before, such as automotive and space. With the increase in bandwidth, an opto-emulator can provide higher bandwidth for the overall loop of the power supply, leading to significantly better transient response without increasing the output capacitance. For existing designs, an opto-emulator’s pin-for-pin compatibility with most optocouplers allows for drop-in replacements, with only minor tweaks to the compensation network.

Sarmad Abedin has been a systems engineer at Texas Instruments since 2011. He works for the Power Design Services team in Dallas, TX. He has been designing custom power supplies for over 10 years specializing in low power AC/DC applications. He graduated from RIT in 2010 with a BS in Electrical Engineering.

 

 


Conquer design challenges: Skills for power supplies

Mon, 01/29/2024 - 19:10

In the dynamic world of engineering design, the escalating requirements placed on power systems frequently give rise to intricate design challenges. The evolving landscape of DC power systems introduces complexities that can manifest as stumbling blocks in the design process. Fundamental skills related to power supplies play a crucial role in mitigating these challenges.

Today’s advanced DC power systems are not immune to design problems, and a solid foundation in power supply knowledge can prove instrumental in navigating and overcoming hurdles. Whether it’s discerning the intricacies of device under test (DUT) voltage or current, addressing unforeseen temperature fluctuations, or managing noise sensitivity, a fundamental understanding of power supplies empowers designers to identify and tackle the nuanced issues embedded within a power system.

Understanding constant voltage and constant current

One of the most important concepts for anyone using power supplies is understanding constant voltage (CV) and constant current (CC) operation. Engineers getting started with power supplies must know some basic rules about CV and CC. The output of a power supply operates in either CV or CC mode depending on the voltage setting, the current limit setting, and the load resistance.

In scenarios where the load current remains low and the drawn current falls below the preset current limit, the power supply seamlessly transitions into CV mode. This mode is characterized by the power supply regulating the output voltage to maintain a constant value. In essence, the voltage becomes the focal point of control, ensuring stability, while the current dynamically adjusts based on the load requirements. This operational behavior is particularly advantageous when dealing with varying loads, as it allows the power supply to cater to diverse current demands while steadfastly maintaining a consistent voltage output.

In instances where the load current surges to higher levels, surpassing the predefined current setting, the power supply shifts into CC mode. This response involves the power supply imposing a cap on the current, restricting it to the pre-set value. Consequently, the power supply functions as a guardian, preventing the load from drawing excessive current.

In CC mode, the primary focus of regulation shifts to the current, ensuring it remains consistent and in line with the predetermined setting. Meanwhile, the voltage dynamically adjusts in response to the load’s requirements. This operational behavior is particularly crucial in scenarios where the load’s demands fluctuate, as it ensures a stable and controlled current output, preventing potential damage to both the power supply and the connected components. Understanding this interplay between voltage and current dynamics is essential for engineers and users to harness the full potential of power supplies, especially in applications with varying load conditions.

Most power supplies are designed so that they are optimized for CV operation. This means that the power supply will look at the voltage setting first and adjust all other secondary variables to achieve the programmed voltage. For a visual representation, see Figure 1 on the operating locus of a CC/CV power supply.

Figure 1 The operating locus of a CC/CV power supply. Source: Keysight
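
These rules reduce to a simple comparison: at the programmed voltage, does the load draw less than the current limit? Here’s a minimal sketch modeling an ideal CC/CV supply driving a resistive load (the setpoints and loads are illustrative):

```python
def operating_point(v_set, i_limit, r_load):
    """Return the mode, output voltage, and output current of an ideal
    CC/CV supply driving a resistive load."""
    if r_load * i_limit >= v_set:  # load draws <= i_limit at v_set
        return "CV", v_set, v_set / r_load
    return "CC", r_load * i_limit, i_limit  # voltage sags to cap the current

print(operating_point(v_set=5.0, i_limit=1.0, r_load=10.0))  # ('CV', 5.0, 0.5)
print(operating_point(v_set=5.0, i_limit=1.0, r_load=2.0))   # ('CC', 2.0, 1.0)
```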

Boosting voltage or current

In instances where the demands of an application exceed the capabilities of a single power supply, a practical solution is to combine two or more power supplies strategically. This can be particularly useful when users need more voltage or current than a single power supply unit can deliver.

For scenarios demanding higher voltage, the method involves connecting the outputs of the power supplies in series. This arrangement effectively adds the individual voltage outputs, resulting in an aggregate voltage that meets the specified requirements. When higher current is required, on the other hand, connecting the power supply outputs in parallel proves advantageous. This configuration combines the current outputs, providing a cumulative output that satisfies the application’s demands.

To achieve optimal results, it is crucial to set each power supply output independently. This ensures that the voltages or currents align harmoniously, summing up to the total desired value. By following these simple yet effective steps, users can harness the collective power of multiple power supplies, tailoring their outputs to meet the specific voltage and current requirements of the application.

For higher voltage, first set each output to the maximum desired current limit the load can safely handle. Then, equally distribute the total desired voltage to each power supply output. For example, if engineers are using three outputs, set each to one third the total desired voltage:

  1. Never exceed the floating voltage rating (output terminal isolation) of any of the outputs.
  2. Never subject any of the power supply outputs to a reverse voltage.
  3. Only connect outputs that have identical voltage and current ratings in series.

For higher current, equally distribute the total desired current limit to each power supply; a short sketch of both the series and parallel setpoint arithmetic follows these rules:

  1. One output must operate in constant voltage (CV) mode and the other(s) in constant current (CC) mode.
  2. The output load must draw enough current to keep the CC output(s) in CC mode.
  3. Only connect outputs that have identical voltage and current ratings in parallel.
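
Here is that sketch: a minimal helper that computes per-output settings for both configurations, assuming ideal, identical outputs as the rules above require:

```python
def series_settings(v_total, i_max_load, n_outputs):
    """Series stack: split the total voltage equally across outputs, with
    every current limit set to what the load can safely handle."""
    return {"v_per_output": v_total / n_outputs, "i_limit": i_max_load}

def parallel_settings(v_out, i_total, n_outputs):
    """Parallel combination: split the current limit equally; one output
    regulates in CV mode at v_out while the others run in CC mode."""
    return {"v_set": v_out, "i_per_output": i_total / n_outputs}

print(series_settings(v_total=60.0, i_max_load=2.0, n_outputs=3))
# {'v_per_output': 20.0, 'i_limit': 2.0}
print(parallel_settings(v_out=12.0, i_total=30.0, n_outputs=3))
# {'v_set': 12.0, 'i_per_output': 10.0}
```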

See Figure 2 for a visual representation of a series connection with remote sense to a load.

Figure 2 Series connection to a load with remote sense. Source: Keysight

In the parallel setup, the CV output determines the voltage at the load and across the CC outputs (Figure 3). The CV unit will only supply enough current to fulfill the total load demand.

Figure 3 Parallel connection to the load with remote sense; the CV output determines the voltage at the load and across the CC outputs. Source: Keysight

Dealing with unexpected temperature effects

Temperature fluctuations not only impact the behavior of DUTs but also exert a significant influence on the precision of measurement instruments. For example, during a chilly winter day, an examination of lithium-ion batteries at room temperature yielded unexpected results. Contrary to the user’s anticipation of a decrease, the voltage of the cells drifted upward over time.

This phenomenon was attributed to the nighttime drop in room temperature, which paradoxically led to an increase in cell voltage. This effect proved more pronounced than the anticipated decrease resulting from cell self-discharge during the day. It’s worth noting that the power supplies responsible for delivering power to these cells are also susceptible to temperature variations.

To accurately characterize the output voltage down to microvolts, it becomes imperative to account for temperature coefficients in the application of power. This adjustment ensures a more precise understanding of the voltage dynamics, accounting for the impact of temperature on both the DUTs and the measurement instruments.

The following is an example using a power supply precision module, the Keysight N6761A, which features a low-voltage range. Its specification table documents the valid temperature range as 23°C ±5°C after a 30-minute warm-up.

  1. To apply a temperature coefficient, engineers must treat it like an error term. Let’s assume that the operating temperature is 33°C, or 10°C above the calibration temperature of 23°C, with a voltage output of 5.0000 V.

Voltage programming temperature coefficient = ± (40 ppm + 70 μV) per °C

  2. To correct for the 10°C difference from the calibration temperature, engineers need to account for the gap between the operating temperature and the voltage range specification. The low-voltage range spec is valid at 23°C ±5°C, or up to 28°C, so the temperature coefficient must be applied over the 5°C difference between the operating temperature (33°C) and the upper limit of the low-voltage range spec (28°C):

± (40 ppm * 5 V + 70 μV) * 5°C = 40 ppm * 5 V * 5°C + 70 μV * 5°C = 1.35 mV

  3. The temperature-induced error must be added to the programming error for the low-voltage range provided in the N6761A specification table:

± (0.016 % * 5 V + 1.5 mV) = 2.3 mV

  4. Therefore, the total error, programming plus temperature, will be:

± (1.35 mV + 2.3 mV) = ±3.65 mV

  5. That means the output voltage will be somewhere between 4.99635 V and 5.00365 V when attempting to set it to 5.0000 V at an ambient temperature of 33°C. Since the 1.35-mV portion of this error is temperature-induced, this component of the error will change as the temperature changes, and the output of the power supply will drift. The measurement drift with temperature can be calculated using the supply’s temperature coefficient.
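
The same worked example in code form, using only the numbers from the steps above:

```python
V_SET = 5.0          # programmed output, volts
T_OPERATING = 33.0   # ambient temperature, deg C
T_SPEC_MAX = 28.0    # spec valid at 23 +/- 5 deg C, i.e., up to 28 deg C
delta_t = T_OPERATING - T_SPEC_MAX  # 5 deg C outside the spec band

# Temperature coefficient: +/-(40 ppm + 70 uV) per deg C
temp_error = (40e-6 * V_SET + 70e-6) * delta_t  # 1.35 mV

# Low-voltage-range programming accuracy: +/-(0.016% + 1.5 mV)
prog_error = 0.016e-2 * V_SET + 1.5e-3          # 2.3 mV

total = temp_error + prog_error                 # 3.65 mV
print(f"output window: {V_SET - total:.5f} V to {V_SET + total:.5f} V")
# -> output window: 4.99635 V to 5.00365 V
```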

Dealing with noise sensitive DUTs

If the DUT is sensitive to noise, engineers will want to do everything they can to minimize noise on the DC power input. The easiest thing users can do is use a low-noise power supply. But if one is not available, there are a couple of other things engineers can do.

The links between the power supply and the DUT are vulnerable to interference, particularly noise stemming from inductive or capacitive coupling. Numerous methods exist to mitigate this interference, but employing shielded two-wire cables for both load and sense connections stands out as the most effective solution. It is essential, however, to pay careful attention to the connection details.

For optimal noise reduction, connect the shield of these cables to earth ground at only one end, as illustrated in Figure 4.

Figure 4 To reduce noise, connect the shield to earth ground at only one end of the cable. Source: Keysight

Avoid the temptation to ground the shield at both ends, as this practice can lead to the formation of ground loops, as depicted in Figure 5. These loops result from the disparity in potential between the supply ground and DUT ground.

Figure 5 Diagram where the shield is connected to ground at both ends resulting in a ground loop. Source: Keysight

The presence of a ground loop current can induce voltage on the cabling, manifesting as unwanted noise for your DUT. By adhering to the recommended practice of grounding the shield at a single end, you effectively minimize the risk of ground loops and ensure a cleaner, more interference-resistant connection between your power supply and the DUT.

Also, common-mode noise is generated when common-mode current flows from inside a power supply to earth ground and produces voltage on impedances to ground, including cable impedance. To minimize the effect of common-mode current, equalize the impedance to ground from the plus and minus output terminals on the power supply. Engineers should also equalize the impedance from the DUT plus and minus input terminals to ground. Use a common-mode choke in series with the output leads and a shunt capacitor from each lead to ground to accomplish this task.

Choosing the right power supply

Navigating the selection process for a power supply demands careful consideration of the specific requirements. Whether you need a basic power supply or one with more advanced features tailored for specific applications, choosing a power supply with excessive power capacity can create numerous challenges.

Common issues associated with opting for a power supply with too much power include increased output noise, difficulties in setting accurate current limits, and a compromise in meter accuracy. These challenges can be particularly daunting, but developing basic skills related to power supplies can significantly aid in overcoming these design obstacles.

By cultivating a foundational understanding of power supply principles, such as the nuances of CV and CC modes, engineers can effectively address issues related to noise, current limits, and meter accuracy. This underscores the importance of not only selecting an appropriate power supply but also ensuring that users possess the essential skills to troubleshoot and optimize the performance of the chosen power supply in their specific applications. Striking a balance between power capacity and application needs, while honing basic skills, is key to achieving a harmonious and effective power supply setup.

Andrew Herrera is an experienced product marketer in radio frequency and Internet of Things solutions. Andrew is the product marketing manager for RF test software at Keysight Technologies, leading Keysight’s PathWave 89600 vector signal analyzer, signal generation, and X-Series signal analyzer measurement applications. Andrew also leads the automation test solutions such as Keysight PathWave Measurements and PathWave Instrument Robotic Process Automation (RPA) software.


The Intel-UMC fab partnership: What you need to know

Mon, 01/29/2024 - 13:35

Intel joins hands with Taiwanese fab United Microelectronics Corp. (UMC) in a new twist in the continuously evolving and realigning semiconductor foundry business. What does Intel’s strategic collaboration with UMC mean, and what will these two companies gain from this tie-up? Before we delve into the motives of this semiconductor manufacturing partnership, here are the basic details of the foundry deal.

Intel and UMC will jointly develop a 12-nm photolithography process for high-growth markets such as mobile, communication infrastructure and networking, and the production of chips at this 12-nm node will begin at Intel’s Fabs 12, 22, and 32 in Arizona in 2027. The 12-nm process node will be built on Intel’s FinFET transistor design, and the two companies will jointly share the investment.

Figure 1 UMC’s co-president Jason Wang calls this tie-up with Intel a step toward adding a Western footprint. Source: Intel

Besides fabrication technology, the two companies will jointly offer EDA tools, IP offerings, and process design kits (PDKs) to simplify the 12-nm deployment for chip vendors. It’s worth mentioning here that Intel’s three fabs in Arizona—already producing chips on 10-nm and 14-nm nodes—aim to leverage many of the tools for the planned 12-nm fabrication.

Intel claims that 90% of the tools have been transferable between 10-nm and 14-nm nodes; if Intel is able to employ these same tools in 12-nm chip fabrication, it will help reduce additional CapEx and maximize profits.

While the mutual gains these two companies will accomplish from this collaboration are somewhat apparent, it’s still important to understand how this partnership will benefit Intel and UMC, respectively. Let’s begin with Intel, which has been plotting to establish Intel Foundry Services (IFS) as a major manufacturing operation for fabless semiconductor companies.

What Intel wants

It’s important to note that in Intel’s chip manufacturing labyrinth, IFS has access to three process nodes. First, the Intel 16 node facilitates 16-nm chip manufacturing for cost-conscious chip vendors designing inexpensive low-power products. Second is Intel 3, which can produce cutting-edge nodes using extreme ultraviolet (EUV) lithography but sticks to tried-and-tested FinFET transistors.

Third, Intel 18A is the cutting-edge process node that focuses on performance and transistor density while employing gate-all-around (GAA) RibbonFET transistors and PowerVia backside power delivery technology. Beyond these three fab offerings, IFS needs to expand its portfolio to serve a variety of applications. On the other hand, its parent company, Intel, will have a lot of free capacity while it moves its in-house CPUs to advanced process nodes.

So, while Intel moves the production of its cutting-edge process nodes like 20A and 18A to other fabs, Fabs 12, 22, and 32 in Arizona will be free to produce chips on a variety of legacy and low-cost nodes. Fabs 12, 22, and 32 are currently producing chips on Intel’s 7-nm, 10-nm, 14-nm, and 22-nm nodes.

Figure 2 IFS chief Stuart Pann calls strategic collaboration with UMC an important step toward its goal of becoming the world’s second-largest foundry by 2030. Source: Intel

More importantly, IFS will get access to UMC’s large customer base and can utilize its manufacturing expertise in areas like RF and wireless at Intel’s depreciated and underutilized fabs. Here, it’s worth mentioning Intel’s similar arrangement with Tower Semiconductor; IFS will gain from Tower’s fab relationships and generate revenues while fabricating 65-nm chips for Tower at its fully depreciated Fab 11X.

IFS is competing head-to-head with established players such as TSMC and Samsung for cutting-edge smaller nodes. Now, such tie-ups with entrenched fab players like UMC and Tower enable IFS to cater to mature fabrication nodes, something Intel hasn’t done while building CPUs on the latest manufacturing processes. Moreover, fabricating chips at mature nodes will allow IFS to open a new front against GlobalFoundries.

UMC’s takeaway

UMC, which formed the pure-play fab duo to spearhead the fabless movement in the 1990s, steadily passed the cutting-edge process node baton to TSMC, eventually resorting to mature fabrication processes. It now boasts more than 400 semiconductor firms as its customers.

Figure 3 The partnership allows UMC to expand capacity and market reach without making heavy capital investments. Source: Intel

The strategic collaboration with IFS will allow the Hsinchu, Taiwan-based fab to enhance its existing relationships with fabless clients in the United States and better compete with TSMC in mature nodes. Beyond its Hsinchu neighbor TSMC, this hook-up with Intel will also enable UMC to better compete amid China’s rapid fab capacity buildup.

It’ll also give UMC access to 12-nm process technology without building a new manufacturing site and procuring advanced tools. However, UMC has vowed to install some of its specialized tools at Fabs 12, 22, and 32 in Arizona. The most advanced node that UMC currently has in its arsenal is 14 nm; by jointly developing a 12-nm node with Intel, UMC will expand its know-how on smaller chip fabrication processes. It’ll also open the door for the Taiwanese fab on smaller nodes below 12 nm in the future.

The new fab order

The semiconductor manufacturing business has continuously evolved since 1987, when former TI executive Morris Chang founded the first pure-play foundry with major funding from Philips Electronics. UMC soon joined the fray, and before long, TSMC and UMC became synonymous with the fabless semiconductor model.

Then, Intel, producing its CPUs at the latest process nodes and quickly moving to new chip manufacturing technologies, decided to claim its share in the fab business when it launched IFS in 2021. The technology and business merits of IFS aside, one thing is clear. The fab business has been constantly realigning since then.

That’s partly because Intel is the largest IDM in the semiconductor world. However, strategic deals with Tower and UMC also turn it into an astute fab player. The arrangement with UMC is a case in point. It will allow Intel to better utilize its large chip fabrication capacity in the United States while creating a regionally diversified and resilient supply chain.

More importantly, Intel will be doing it without making heavy capital investments. The same is true for UMC, which will gain much-needed expertise in FinFET manufacturing technologies as well as strategic access to semiconductor clients in North America.


RFICs improve in-vehicle communications

Thu, 01/25/2024 - 21:50

Two linear power amplifiers (PAs) and two low-noise amplifiers (LNAs) from Guerrilla RF serve as signal boosters to enhance in-cabin cellular signals. Qualified to AEC-Q100 Grade 2 standards, the GRF5507W and GRF5517W PAs and the GRF2106W and GRF2133W LNAs operate over a temperature range of -40°C to +105°C.

The GRF5507W power amp has a tuning range of 0.7 GHz to 0.8 GHz, while the GRF5517W spans 1.7 GHz to 1.8 GHz. Each device delivers up to 23 dBm of output power with adjacent channel leakage ratio (ACLR) performance of better than -45 dBc. Further, this ACLR figure is achieved without the aid of supplemental linearization schemes, like digital pre-distortion (DPD). According to the manufacturer, the ability to beat the -45-dBc ACLR metric without DPD helps meet the stringent size, cost, and power dissipation requirements of cellular compensator applications.
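
For readers who want those RF figures in linear units, the standard dB conversions are pure arithmetic:

```python
def dbm_to_mw(p_dbm):
    """Convert power in dBm to milliwatts."""
    return 10 ** (p_dbm / 10)

def dbc_to_ratio(level_dbc):
    """Convert a level relative to the carrier (dBc) to a linear power ratio."""
    return 10 ** (level_dbc / 10)

print(f"{dbm_to_mw(23):.0f} mW")   # 23 dBm -> ~200 mW of output power
print(f"{dbc_to_ratio(-45):.2e}")  # -45 dBc -> adjacent-channel power is
                                   # ~3.16e-05 of the carrier power
```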

The GRF2106W low-noise amplifier covers a tuning range of 0.1 GHz to 4.2 GHz, and the GRF2133W spans 0.1 GHz to 2.7 GHz. At 2.45 GHz (3.3 V, 15 mA), the GRF2106W provides a nominal gain of 21.5 dB and a noise figure of 0.8 dB. A higher gain level of 28 dB is available with the GRF2133W, along with an even lower noise figure of 0.6 dB at 1.95 GHz (5 V, 60 mA).

Prices for the GRF5507W and GRF5517W PAs in 16-pin QFN packages start at $1.54 in lots of 10,000 units. Prices for the GRF2106W and GRF2133W LNAs in 6-pin DFN packages start at $0.62 and $0.83, respectively, in lots of 10,000 units. Samples and evaluation boards are available for all four components.

GRF5507W product page

GRF5517W product page

GRF2106W product page

GRF2133W product page

Guerrilla RF 


Bluetooth LE SoC slashes power consumption

Thu, 01/25/2024 - 21:49

Renesas offers the DA14592 Bluetooth LE SoC, which the company says is its lowest power and smallest multicore Bluetooth LE device in its class. The device balances tradeoffs between on-chip memory and SoC die size to accommodate a broad range of applications, including crowd-sourced locationing, connected medical devices, metering systems, and human interface devices.

Along with an Arm Cortex-M33 CPU, the DA14592 features a software-configurable Bluetooth LE MAC engine based on an Arm Cortex-M0+. A new low-power mode enables a radio transmit current of 2.3 mA at 0 dBm and a radio receive current of 1.2 mA.

The DA14592 also supports a hibernation current of just 90 nA, which helps to extend the shelf-life of end products shipped with the battery connected. For products requiring extensive processing, the device provides an ultra-low active current of 34 µA/MHz.
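
To put those currents in perspective, here’s a back-of-envelope battery-life estimate for a duty-cycled application. The clock frequency, duty cycle, and coin-cell capacity below are hypothetical assumptions for illustration, not Renesas figures:

```python
# Hypothetical duty-cycled profile: mostly hibernating, brief active bursts.
I_HIBERNATE = 90e-9    # hibernation current, amps
I_ACTIVE = 34e-6 * 32  # 34 uA/MHz at an assumed 32 MHz clock -> ~1.09 mA
DUTY_ACTIVE = 0.001    # active 0.1% of the time (assumed)

i_avg = DUTY_ACTIVE * I_ACTIVE + (1 - DUTY_ACTIVE) * I_HIBERNATE
BATTERY_MAH = 220.0    # hypothetical coin-cell capacity, mAh

# Ignores battery self-discharge, which would dominate over this span.
hours = BATTERY_MAH / (i_avg * 1e3)  # convert i_avg from A to mA
print(f"average draw {i_avg * 1e6:.2f} uA, roughly {hours / 8760:.0f} years")
```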

Requiring only six external components, the DA14592 lowers the engineering BOM. Packaging options for the device include WLCSP (3.32×2.48 mm) and FCQFN (5.1×4.3 mm). The SoC’s reduced BOM, coupled with small package size, helps designers minimize product footprint. Other SoC features include a sigma-delta ADC, up to 32 GPIOs, and a QSPI interface supporting external flash or RAM.

The DA14592 Bluetooth LE SoC is currently in mass production. Renesas also offers the DA14592MOD, which integrates all of the external components required to implement a Bluetooth LE radio into a compact module. The DA14592MOD module is targeted for world-wide regulatory certification in 2Q 2024.

DA14592 product page 

Renesas Electronics  


Smart sensor monitors in-meter water pressure

Thu, 01/25/2024 - 21:49

The 129CP digital water pressure sensor from Sensata allows remote monitoring by utilities to identify distribution issues, leaks, and other non-revenue water events. Integrated into residential and commercial water meters, the 129CP improves metering efficiency and reliability.

Water utilities manage extensive infrastructure to pump and deliver water to residential houses and commercial buildings. According to the International Water Association, 30% of water produced worldwide is wasted due to leaks within the network, metering inaccuracies, unauthorized consumption, or other issues.

By combining precision pressure monitoring with digital I2C communication, the 129CP sensor delivers granular insights into water usage. It monitors pressure from 0 to 232 psi (sealed gauge), while consuming less than 2 µA at a 1-Hz measurement rate.

Rugged construction enables the 129CP to survive 10 to 15 years in challenging high-moisture, high-shock environments. The device operates from a supply voltage of 1.7 V to 3.6 V over a temperature range of +2°C to +85°C. 

129CP product page

Sensata Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Smart sensor monitors in-meter water pressure appeared first on EDN.

MCUs integrate configurable logic block

Thu, 01/25/2024 - 21:49

Microchip’s PIC16F13145 MCUs enable the creation of hardware-based, custom combinational logic functions directly within the MCU. The integration of a configurable logic block (CLB) module in the microcontroller allows designers to optimize the speed and response time of embedded control systems.  It also helps eliminate the need for external logic components.

CLB operation is not dependent on CPU clock speed. The CLB module can be used to make logical decisions while the CPU is in sleep mode, reducing both power consumption and software reliance. According to Microchip, the CLB is easily configured using the GUI-based tool offered as part of MPLAB Code Configurator.

The PIC16F13145 family of CLB-enabled MCUs is intended for applications that use custom protocols, task sequencing, or I/O control to manage real-time control systems in industrial and automotive sectors. Devices include a 10-bit ADC with built-in computation, an 8-bit DAC, and comparators with a 50-ns response time.

Prices for the PIC16F131xx microcontrollers start at $0.47 each in lots of 10,000 units.

PIC16F13145 product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs integrate configurable logic block appeared first on EDN.

100-V MOSFET employs double-sided cooling

Thu, 01/25/2024 - 21:48

Alpha & Omega’s AONA66916 100-V N-channel MOSFET comes in a DFN 5×6-mm package designed to afford top- and bottom-side cooling. In addition to improved thermal performance, the device offers a low on-resistance of 3.4 mΩ at 10 VGS and a wide safe operating area, making it well-suited for telecom, solar, and DC/DC applications.

When using a standard DFN 5×6-mm package, heat dissipation is primarily through the bottom contact, transferring most of the power MOSFET’s heat to the PCB. Alpha & Omega’s latest DFN package enhances heat transfer by maximizing the surface contact area between the exposed top contact and the heat sink.

The AONA66916 MOSFET provides low thermal resistance, with top-side RthJC of 0.5°C/W maximum and bottom-side RthJC of 0.55°C/W maximum. The top-exposed DFN 5×6-mm package of the AONA66916 shares the same footprint of the company’s standard DFN 5×6-mm package, eliminating the need to modify existing PCB layouts.

The AONA66916 MOSFET costs $1.85 each in lots of 1000 units. It is available now in production quantities, with a lead time of 14 to 16 weeks.

AONA66916 datasheet

Alpha & Omega Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 100-V MOSFET employs double-sided cooling appeared first on EDN.

“Thin is in” as RF-module shrinkage also enhances thermal performance

Thu, 01/25/2024 - 17:13

Packaging innovation has always been critical to the cooling of components, especially for power-switching devices such as MOSFETs and IGBTs. The non-stop demand to make these devices smaller and lighter also applies to RF PA modules, even though the inner workings of these analog modules are very different from the on/off switching of power devices. This need to “make it less, but also do more” is intensified by the multichannel requirements of massive MIMO 5G systems.

Nonetheless, when it comes to packaging details, the primary concern of designers is “What does it do for me?” more than “How did you do it?” Yet the “how” part is important, as it defines the capabilities of newer parts and lays the groundwork for future innovations which build on it.

A good example is the top-side cooling (TSC) for RF power amplifier (PA) modules introduced by NXP Semiconductors in 2023. This advance was not just “hey, we’ve got a new package in the works”; it was coupled with deliverable parts—always a big plus in a world where pre-release hype and promotion are considered normal (thankfully, not so much in the no-nonsense “analog” world from DC to RF).

NXP’s packaging results in an RF PA module which is thinner and lighter than existing designs, with a better thermal path as well. This top-side cooling contrasts with conventional bottom-side cooling (BSC), where the thermally conductive paths transfer heat from the package components—primarily the PA itself—to the PCB, which is thermally bonded to a cold plate or heat sink. While TSC is not unique to NXP (other vendors have somewhat different implementations), the NXP approach is illustrative, Figure 1.

Figure 1 Compared to the bottom-side cooling approach (left), NXP’s top-side cooling (right) flips the placement of the thermal coin as well as active and passive components, for a thinner and more thermally conductive package. Source: NXP

In a usual BSC approach, the dissipation of the PA is conducted through a metallic “coin” in the PCB and then to a heat sink on the underside of the board. The associated module components, including the PA, circulator, and filter, are mounted on the top side of the board, and all are covered by an RF electromagnetic (EM) shield. To complete the signal path, the antenna array is connected to the board.

In contrast, with TSC, the PA chip is connected to a direct-bonded copper-ceramic substrate on the top side of the package. The chip is mounted on the surface of the board, thus making direct contact with the external heat sink. The benefit of this arrangement is that it maximizes dissipation and thermal performance, while yielding a smaller package which increases functional density.

Specifically, in the TSC arrangement, the coin is connected instead to the other side of the board and directly to the heat sink, while the circulator and dielectric filter are also mounted there. As a result, all the RF components are on one side of the PCB. At the same time, the shield is integrated into the heat sink rather than on top side of the PCB, which puts the antenna closer to the board with a clean separation of thermal and RF paths. The overall design shortens the connectors, improving RF performance while reducing thickness and weight of the overall assembly.

In contrast, bottom-side cooling is a compromise between thermal performance and use of the board’s real estate since module components can be placed on one side only. The result is lower functional density of the board while it is being challenged to support multiple RF channels.

TSC is not just a preliminary investigation or available as sampling prototypes. Off-the-shelf RF power modules such as the A5M35TG140-TC are available for 32T32R-class, 200-W 5G radios covering 3.3 GHz to 3.8 GHz. The devices combine LDMOS and GaN semiconductor technologies to create 10.5-W (average) fully integrated Doherty PAs with ~30 dB gain and 46 percent efficiency along with 400 MHz of instantaneous bandwidth—all in a package measuring just 14 mm × 10 mm × 2 mm thick, Figure 2.

Figure 2 The A5M35TG140-TC is one of three similar multi-GHz PA modules, each with a simple schematic which does not begin to indicate their sophisticated underlying processes or advanced package implementation. Source: NXP

There are also evaluation boards which ease the task of assessing the PA module’s performance and characteristics without having to “reinvent the wheel” of a schematic and layout that looks relatively simple but inevitably has its RF subtleties, Figure 3.

Figure 3 Vendor-supplied evaluation boards are essential to speeding up the assessment and design-in process. Source: NXP

All these substantive improvements in packaging still leave one elusive cooling question: where is this mythical, wonderful place called “away” to which all the dissipated heat is being conveyed? By doing a better job of getting heat away from the package, in addition to shrinking the package itself, are you making your previous thermal problem into someone else’s headache, as they now must contend with the heat you toss off? Or would you have had that total amount of heat anyway, but with a different distribution across the PCB and within the chassis? Have you seen any other power-package developments for non-switching devices with which you were impressed?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

References (all from NXP)


The post “Thin is in” as RF-module shrinkage also enhances thermal performance appeared first on EDN.

Quelling a ground loop

Wed, 01/24/2024 - 17:00

As shown in Figure 1, this issue had been previously addressed during pre-internet times in EDN of January 18, 1990, but a further detailed look at the idea is warranted thirty-four years later.

Figure 1 Archival design idea “Resistor isolates grounds” from 1990.

Back then, the GPIB controller was the Bertan 200-C488. The analog product being controlled would usually be one of the Bertan 205A/210 high voltage power supplies. The 210 series high voltage supplies delivered in the neighborhood of 200 W, not the highest power level one might encounter, but still high enough to make some heat.

The controller and the controlled when standing alone and apart would look something like Figure 2.

Figure 2 Block diagram of the Bertan 200-C488 GPIB controller and the controlled Bertan 205A/210 high voltage power supply.

The controlled item had an analog ground against which a controlling analog voltage would be applied and against which voltage and current monitoring signals would be obtained. The controlled item also had some rudimentary on/off signals. The controller had inputs and outputs to match.

During product development, the first attempted interface between the two didn’t work too well (See Figure 3).

Figure 3 Controller and controlled with a ground loop current whose current magnitude interfered with both control and monitoring.

The cabling between the controller and the controlled was found to be carrying a current loop whose current magnitude really messed up both control and monitoring. Just a few millivolts arising from heaven only knows what can send a lot of current through a loop of just a few milliohms; 5 mV across a 5-mΩ loop, for instance, drives a full ampere.

This was a trap that had been inadvertently fallen into during design. (Mea culpa, there.) The prospect of a major redesign with elaborate isolation provisions was appalling so we stepped back a little to look things over and find a simpler remedy.

With the controller still in development, it was realized that its digital and analog grounds needed to be tied together, but that adding a 1 Ω resistance between the two as in Figure 4 would have no disruptive effect.

Figure 4 Modified controller and controlled with the digital and analog grounds of the controller tied together with a 1 Ω resistance between the two.

With that resistor in place, the ground loop situation was very much improved (Figure 5), meaning that the current flow in the ground loop was reduced below the level at which it induced any disruptive effects. The current itself didn’t go to zero, but it went down to a much lower and non-disruptive level.

Figure 5 Controller and controlled with a diminished ground loop current.

The hard-wired connection between the digital and analog grounds located at the controlled item was of no concern to the controller, but with the magnitude of the ground loop current flow lowered way, way down, that current no longer perceptibly affected anything.
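
A back-of-the-envelope check shows why one small resistor is so effective; this is a sketch with the illustrative millivolt/milliohm numbers used above, not values measured on the Bertan hardware:

#include <stdio.h>

/* Toy Ohm's-law check of the ground-loop fix. The 5 mV and 5 mohm
   figures are illustrative, not measurements from the actual equipment. */
int main(void)
{
    const double v_emf   = 0.005; /* stray EMF driving the loop, volts */
    const double r_loop  = 0.005; /* original loop resistance, ohms    */
    const double r_added = 1.0;   /* the added 1-ohm resistor          */

    printf("loop current before: %.3f A\n", v_emf / r_loop);             /* 1.000 A  */
    printf("loop current after:  %.4f A\n", v_emf / (r_loop + r_added)); /* ~0.005 A */
    return 0;
}

The stray EMF stays the same, but the loop resistance jumps by more than two orders of magnitude, so the loop current falls by the same factor.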

Additionally, if the controller were disconnected, it remained stable with its revised grounding arrangement. There was no loss of operating stability whether it was connected to the other equipment or not.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related Content


The post Quelling a ground loop appeared first on EDN.

SoC design: When a network-on-chip meets cache coherency

Wed, 01/24/2024 - 08:27

Many people have heard the term cache coherency without fully understanding the considerations in the context of system-on-chip (SoC) devices, especially those using a network-on-chip (NoC). To understand the issues at hand, it’s first necessary to understand the role of cache in the memory hierarchy.

Cache in the memory hierarchy

Inside a CPU are a relatively small number of registers with extremely high speed. These registers can be accessed by the CPU in a single clock cycle. However, their storage capacity is minimal. In contrast, accessing the main memory for reading or writing data takes up many clock cycles. This often results in the CPU being idle most of the time.

In 1965, a British computer scientist, Maurice Wilkes, introduced the concepts of cache memory and memory caching. This involved having a small amount of fast memory called a cache adjoining the CPU. The word “cache” itself comes from the French word “cacher,” meaning “to hide” or “to conceal,” the idea being that the cache hides the main memory from the CPU.

This process operates based on two key points. First, when a program running on the CPU does something involving one location in the main memory, it typically performs operations on several nearby locations. Consequently, when the CPU requests a single piece of data from the main memory, the system brings in data from nearby locations.

A high-level view of a memory hierarchy involving a simple cache is illustrated in Figure 1.

Figure 1 High-level view shows where cache stands in the memory hierarchy. Source: Arteris

This approach ensures that related data is readily available if needed. Second, programs usually conduct numerous operations on the same data sets. Therefore, storing the actively used data in the cache closest to the CPU is beneficial. This proximity allows quicker access and processing of the data.
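
To make the two principles concrete, here is a minimal C sketch (an illustration of the general idea, not code tied to any particular SoC). Scanning the matrix row by row touches consecutive addresses, so each cache-line fill serves many accesses (spatial locality), while the running sum stays hot across iterations (temporal locality):

#include <stddef.h>

#define N 1024

/* Row-major traversal matches the array's memory layout, so accesses are
   cache-friendly. Traversing the same data column by column typically runs
   much slower because nearly every access lands in a different cache line. */
long sum_row_major(const int a[N][N])
{
    long sum = 0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            sum += a[i][j]; /* consecutive addresses */
    return sum;
}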

Cache in the context of an SoC

In the case of an SoC, the cache is implemented on-chip in high-speed, high-power, and low-capacity SRAM. Meanwhile, the main memory is implemented off-chip on the PCB, typically in the form of low-speed, low-power, and high-capacity DRAM.

To minimize latency, designers have added multiple levels of cache in many of today’s SoCs. These typically include three levels: L1, L2 and L3. The L1 cache is closest to the CPU and has the smallest capacity but the fastest access times, usually within 1-2 clock cycles. The L2 cache is a bit further from the CPU and offers higher capacity but slower access times, generally between 4-10 clock cycles. The L3 cache is still further from the CPU and provides the largest capacity among the three, but has the slowest access times, ranging from 10-30 clock cycles.

Multiple cache levels maximize performance while minimizing off-chip accesses to the main memory. Accessing this main memory can consume hundreds of clock cycles. By using multiple cache levels, data can be retrieved more quickly from these caches rather than the slower main memory, enhancing overall system efficiency.

The complexity of all this increases when multiple CPU cores are involved. Consider a common scenario with a cluster of four CPU cores, labeled as cores 0 to 3, each with its own dedicated L1 cache. In some implementations, each core will also have its own dedicated L2 cache, while all four cores share a common L3 cache. In other designs, cores 0 and 1 share one L2 cache, cores 2 and 3 share another L2 cache, and all four collectively use the same L3 cache. These varying configurations impact how data is stored and accessed across different cache levels.

Typically, all processor cores within a single cluster are homogeneous, meaning they are the same type. However, having multiple clusters of processor cores is becoming more common. In many cases, the cores in different clusters are heterogeneous, or of different types. For example, with Arm’s big.LITTLE technology, the “big” cores are designed for maximum performance but are used less frequently.

The “LITTLE” cores are optimized for power efficiency with lower performance and are used most of the time. For instance, in an Arm-based smartphone, the “big” cores might be activated for tasks like Zoom calls, which are relatively infrequent. In contrast, the “LITTLE” cores could handle more common, less demanding tasks like playing music and sending text messages.

Maintaining cache coherency

In systems where multiple processing elements with individual caches share the same main memory, it’s possible to have multiple copies of the shared data. For example, one copy could be in the main memory and others in each processor’s local cache. Maintaining cache coherency requires that any changes to one copy of the data are reflected across all copies. This can be achieved by updating all copies with the new data or marking the other copies invalid.

Cache coherency can be maintained under software control. However, software-managed coherency is complex and challenging to debug. Still, it can be achieved using techniques such as cache cleaning, whereby modified (dirty) data held in a cache—data that must eventually be written back to the main memory—is explicitly written back. Cache cleaning can be performed on the whole cache or on specific addresses, but it is costly in CPU cycles and must be performed on all CPUs holding a copy of the data.
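
As a rough illustration of the software-managed approach, the fragment below sketches how a driver might clean a buffer before handing it to a non-coherent bus master. Note that clean_dcache_range() is a hypothetical stand-in for a platform’s real cache-maintenance primitive (vendor CMSIS calls, OS DMA-sync APIs, and the like fill this role on actual hardware):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical platform hook: on real silicon this would be a vendor- or
   OS-provided cache-maintenance call; here it is a no-op stub so the
   sketch compiles. */
static void clean_dcache_range(uintptr_t addr, size_t len)
{
    (void)addr;
    (void)len; /* platform-specific cache-clean operation goes here */
}

void publish_buffer(uint32_t *buf, size_t words)
{
    buf[0] = 0xA5A5A5A5u;                        /* CPU writes land in its cache */
    clean_dcache_range((uintptr_t)buf,
                       words * sizeof *buf);     /* write dirty lines to DRAM    */
    /* Only now can a non-coherent DMA master, or another CPU without
       hardware coherency, safely read the buffer from main memory. */
}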

The preferred way to maintain cache coherency is with special hardware built to manage the caches invisibly from software. For example, the caches associated with the cores in a processor cluster typically include hardware required to maintain cache coherence.

To use or not to use

SoCs are composed of large numbers of functional blocks called intellectual property (IP) blocks. A processor cluster would be one such IP block. A common way to connect the IP blocks is to use a NoC.

In many SoC designs, coherence isn’t needed outside of the processor cluster, allowing a non-coherent or IO-coherent AXI5 or AXI5-Lite NoC, such as NI from Arm or FlexNoC from Arteris. However, for SoC designs with multiple processor clusters lacking inherent cache coherence or when integrating third-party IPs or custom accelerator IPs that require cache coherence, a coherent NoC is needed. Examples include CMN from Arm using the AMBA CHI protocol or Ncore from Arteris using AMBA ACE and/or CHI.

Figure 2 In the above example, the main system employs a coherent NoC in conjunction with a safety island employing a non-coherent NoC. Source: Arteris

Applying cache coherency universally across the entire chip can be resource-intensive and unnecessary for specific components. Therefore, isolating cache coherency to a subset of the chip, such as CPU clusters and specific accelerator IPs, allows for more efficient use of resources and reduces complexity, as shown in Figure 2. Coherent NoCs like Ncore excel in scenarios where stringent synchronization is necessary. Meanwhile, non-coherent interconnects, such as FlexNoC, are ideal in scenarios where strict synchronization is unnecessary.

Designers can strategically balance the need for data consistency in specific areas while benefiting from more streamlined communication channels where strict coherence is unnecessary. In today’s sophisticated heterogeneous SoCs, the synergy between coherent and non-coherent interconnects becomes a strategic advantage, enhancing the overall efficiency and adaptability of the system.

Andy Nightingale, VP of product management and marketing at Arteris, has over 36 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.


Related Content


The post SoC design: When a network-on-chip meets cache coherency appeared first on EDN.

Blink Cameras and their batteries: Functional abnormalities and consumer liabilities

Tue, 01/23/2024 - 17:27

Quite possibly the single biggest thing that drives me nuts about consumer electronics products is when their implementation doesn’t grasp the technical comprehension limitations of consumers. I can’t count the number of products I’ve personally tried out over the years—never mind the far greater number of products that I’ve only heard about, which is probably still only a fraction of the total number of products to which what I’m about to say applies—that underperformed in the market, if not flat-out failed, primarily-to-completely because the target audience couldn’t figure out how to get them to work properly. Great ideas, many of them…just poorly implemented.

Today’s case study is, I think, a perfect example of this implementation-induced undershoot. Regular readers with long-term memories may recall a four-part series that EDN published on my behalf back in 2019 on my five-camera Blink outdoor security camera system acquired and set up in May of that year:

(I was apparently using U.S. quarters for size-comparison scale purposes back then, versus the pennies I now employ in my photos)

Here’s a reminder of how the system works, from the original writeup in the series:

A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.

The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).

I want to requote something I said in the previous paragraph for emphasis: “Each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells.” Now do the math; I’ve owned and operated the Blink camera system for more than 4.5 years as I write these words in early December 2023. And finally, here’s the kicker: until the other day, they still had the same original two-AA lithium cell sets in them. That’s downright impressive, qualified only by the clarification that except initially and briefly (when all the subsequent alerts drive me to rapidly reconsider my stance) they haven’t been “armed”, i.e., poised to respond to perceived motion in their fields of view. That particular configuration decision means several things: first off, the cameras aren’t perpetually in a partially powered up state, poised to react to the motion-triggered outputs of their PIR sensors. Plus, of course, they aren’t then fully powering up to capture, encode and transmit audio and video streams to the “cloud” via Wi-Fi.

So why the “downer” tone of this writeup’s title? The “downside” to long battery life, if I can mentally stretch to come up with one, is that after so long without cell swaps one forgets when one last did the battery-exchange procedure, or maybe even ceases to remember that the cameras run on batteries at all. I admittedly didn’t realize how long it’d been in my case until I went back and figured out when I’d bought them in the first place. As winter approached this year, the “neighbors” were as usual getting more active as they prepped for hibernation:

and so, for this and other security-reassurance reasons, my wife started manually checking all the cameras via the Blink app on her phone each night before retiring to bed. After a few days, she began complaining to me that they were acting erratically. Sometimes, for example, one or a few of them would respond slowly. Or it’d take a few tries before they’d respond. Or the audio and/or video streams would prematurely cease. Or they’d not respond at all.

The next day, when she’d relay these observations to me, I’d try to look at them myself via my various mobile devices (Android phones and iPads) and they’d all work fine each time. And the cameras were mounted in inconvenient-access locations that necessitated tall ladders and such:

Plus, replacement lithium AA batteries are pricey. And, anyway, all the cameras reported both to her (at the time) and me (the next day) that their existing batteries were still “OK”:

So, since she’d recently migrated to a new smartphone, my first suggestion was for her to delete and then reinstall the Blink app. Which seemingly worked at first, but then didn’t. Step two: unplug and plug back in the Sync Module. Same inconsistent and ultimately unsatisfying outcome.

Finally, I “swallowed the bitter pill”, climbed up on the tall wobbly ladder, and swapped out all the batteries in all the cameras. Voila, everything now works fine again. What’s now obvious, and what I admittedly strongly suspected at the time (ignorance and avoidance are bliss, don’cha know), is that:

  • It’s generally colder outside now than it was earlier this year when the cameras were working reliably.
  • It’s particularly cold at night, when my wife was checking them, versus the next day, when I was debugging them.
  • And colder temperatures, while they may be ideal (to a point) for long-term storage of batteries, are sub-optimal when trying to use them.

We’re techie folks. We already understand such things. But the average consumer doesn’t. All they know is that their cameras aren’t working any more, although they still report that their batteries are “OK”. So, what do these users conclude? Their expensive gear has broken and is destined only for the landfill. They’re never going to buy anything that says “Blink” on it ever again. And they’re going to tell all their friends, family members and colleagues about their negative experiences, too.

On the one hand, putting myself in Blink’s engineers’ shoes, I get it. Again, lithium cells are expensive. You don’t want to prematurely report battery failure, because excessively frequent replacements are a “negative” in their own right. But I’d suggest that the company has counterbalanced too far in the opposite direction here. Agree or disagree, readers? Let me know in the comments.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Blink Cameras and their batteries: Functional abnormalities and consumer liabilities appeared first on EDN.

Parsing PWM (DAC) performance: Part 1—Mitigating errors

Mon, 01/22/2024 - 20:27

Editor’s Note: This is one of a series of DIs proposing improvements in the performance of a “traditional” PWM—one whose output is a duty cycle-variable rectangular pulse which requires filtering by a low-pass analog filter to produce a DAC. This first part suggests mitigations and eliminations of common PWM error types.

Recently, there has been a spate of design ideas (DIs) published [1-8] which deals with microprocessor-generated pulse width modulators driving low-pass filters to produce DACs. Approaches have been introduced which address ripple attenuation, settling time minimization, and limitations in accuracy. This is the first in what is intended to be a series of DIs proposing improvements in overall PWM-based DAC performance. Each of the series’ recommendations will be implementable independently of the others. This DI addresses common types of PWM errors. Let’s review the kinds that a naked microprocessor (µP) PWM output saddles us with.

Errors

I was surprised to discover that when an output of a popular µP I’ve been using is configured to be a constant logic low or high and is loaded only by a 10 MΩ-input digital multimeter, the voltage levels are in some cases more than 100 mV from supply voltage VDD and ground. (I should note that I have not noticed this problem at the output of a 74HC00 NAND gate that the µP drives, although there are other issues that the use of this gate does not address.) Let’s call these saturation errors. I can guess an explanation for this phenomenon, but for my purposes the reason is irrelevant—the solution I’ll propose eliminates the effects this error might otherwise have.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It’s been noted before that digital logics’ rise and fall times and delays contribute to a loss of accuracy in a PWM signal; let’s call these timing errors. However, it’s the difference between the rise and fall characteristics that matters; the type of error that one adds, the other subtracts. Of course, the errors are not identical. But it’s difficult to imagine that either approaches even 1/2 LSB. For that to occur, the voltage transition from the beginning to end of a single clock cycle would have to look something like a straight line between ground and VDD. And so, we should expect the total error from the rise and fall to be something less than ½ LSB, which I suggest should be tolerable. If further reduction is necessary, rather than incur the cost of measurement of each unit at production time and individually customizing compensation, I’d recommend periodic characterization of a group of samples and implementing a common across-the-board correction to all units.

There is a type of error discussed and addressed by Stephen Woodward [8]. This error results from the fact that the PWM output has different resistances (rlo and rhi) in the logic low and high modes of operation; let’s call this a resistance error. (I am indebted to Mr. Woodward for enlightening me in his DI’s comment section about certain aspects of this problem.) Woodward implements an innovative set of digital calculations to ameliorate these errors by pre-warping the PWM duty cycle in accordance with a measurement of the 50% duty cycle error magnitude, presumably at production time. (Note however that the pre-warping corrections near ground and VDD are reduced to zero, and so cannot compensate for, and if care is not taken could be confused with, saturation errors.)

The errors for all duty cycles and the exact pre-warping calculations necessary are disclosed in Woodward’s DI, but let’s consider the peak error magnitude only, which occurs at that 50% duty cycle. The PWM drives a filter presumed to consist of series resistors and shunt capacitors. At steady state, the capacitors have an average voltage eavg. Let’s assign the resistor connected to the PWM a value of R. Ignoring saturation and timing errors, it is easy to see that:

(VDD – eavg) / (R + rhi) = eavg / (R + rlo).

If rlo and rhi were the same:

eavg = VDD / 2.

Since they are different,

eavg = VDD × (R + rlo) / (R + rlo + R + rhi).

Subtracting the ideal from the actual, the error is:

VDD × Δr / (4 × R),

where Δr = rlo – rhi.

Of course, few if any digital logic devices will specify the on-resistance flatness Δr. For my µP at 85°C with a 3 V supply,

rhi = (3V – 2.3V) / 10mA = 70Ω maximum and,

rlo = .6V / 10mA = 60Ω maximum.

If we are to work from these specs, we would have to set Δr to 70Ω, even though this is almost certainly excessive. To keep the error for a b-bit PWM less than ½ LSB, we require that:

R > 2^(b+1) × Δr / 4.

When b = 12, R must exceed 143 kΩ. This presents a challenge, and an even greater one for PWMs with more bits; an op-amp must be interposed between the filter and even a mild DC load if load-induced errors are to be avoided. This incurs the errors of the op-amp’s input offset voltage and the voltage drops across R due to input bias currents.
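
A quick numeric check of that requirement, written as a sketch of the arithmetic above rather than production code:

#include <stdio.h>

/* Minimum filter resistance R that keeps a b-bit PWM's resistance error
   under 1/2 LSB, given worst-case on-resistance flatness delta_r (ohms):
   R > 2^(b+1) * delta_r / 4. */
static double min_filter_r(int bits, double delta_r)
{
    return (double)(1L << (bits + 1)) * delta_r / 4.0;
}

int main(void)
{
    /* b = 12, delta_r = 70 ohms: the uP's worst-case spec above */
    printf("R > %.1f kohm\n", min_filter_r(12, 70.0) / 1e3); /* ~143.4 */
    /* With the 7-ohm-flatness analog switch discussed below, the same
       call with delta_r = 7.0 yields ~14.3 kohm. */
    return 0;
}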

Of course, there are approaches which avoid filters altogether. Again, the prolific Stephen Woodward offers an innovative and effective solution [3, 8]. However, its accuracy is limited by the dual requirements of matching an analog time constant with a pulse width produced by a digital clock, and by a match between the values of two capacitors. Let’s call those to which this design is subject matching errors.

Amelioration

There is a means of implementing a PWM which precludes saturation and matching errors and mitigates resistance errors. The trick is to configure the µP to control a break-before-make analog switch whose input commutates between ground and a voltage reference of the designer’s choice. Otherwise, the circuit operates as a traditional, simple µP-based PWM requiring a filter. The TS5A63157 is a suitable choice for the switch. Its maximum turn-on and turn-off times with a 3 V supply over temperature for the switched inputs are each 7 ns. This is much less than the 50 ns period of the shortest PWM clock cycle of a typical modern high-speed 20 MHz µP. And buoyed by the symmetry of these numbers, we should expect a negligible impact on the already less than ½ LSB µP timing error. The switch has an improved on-resistance flatness of 7 Ω maximum with a 3 V supply over a -40°C to +85°C temperature range and 4 Ω with a 4.5 V supply.

The introduction of an analog switch precludes some errors and mitigates another found in PWM designs that lack such.

The maximum on-resistance flatness has been diminished by a factor of at least 10, reducing the resistance error by the same factor. The requirement for the value of R in the above 12-bit PWM example is now reduced to 14.3 kΩ. The analog switch has no saturation error, and there is no matching error with this approach since there is nothing that requires matching.

Let’s suppose the available power supply is 2.5 V. With that as our full-scale voltage, ½ LSB of a 12-bit PWM is 305 µV. To keep the op-amp-induced error to less than ½ LSB, the input bias current must be less than 305 µV / 14.3 kΩ = 21.3 nA.

For the op-amp, we can use an input/output rail-to-rail OPA376 (single), OPA2376 (dual), or an OPA4376 (quad). Their input offset voltage is 25 µV maximum at 25°C, limiting the value from -40°C to 85°C to 90 µV courtesy of the unit’s 1 µV/°C maximum temperature sensitivity. The input bias current is 0.2 pA typical and 10 pA maximum at 25°C, but there is no relevant spec for temperature sensitivity. However, the datasheet’s graph of typical current shows about 50 pA at 100°C. Applying the ratio of 50/0.2 to 10 pA yields 2.5 nA. There seems to be a good deal of margin available here, but Texas Instruments should be consulted for more information.

Future work

It’s a valid concern that no “rail-to-rail’ op-amp output swing can encompass its supply rail voltages. The next DI in this series will address this matter. Following that will be a discussion of PWM filters. After that, I’ll discuss a purely software means of reducing the PWM period while maintaining the same number of bits of resolution, placing less of a burden on the analog filters.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

References/Related Content

  1. Double up on and ease the filtering requirements for PWMs
  2. Optimizing a simple analog filter for any PWM
  3. Fast-settling synchronous-PWM-DAC filter has almost no ripple
  4. Cancel PWM DAC ripple and power supply noise
  5. Cancel PWM DAC ripple with analog subtraction
  6. Cancel PWM DAC ripple with analog subtraction—revisited
  7. Cancel PWM DAC ripple with analog subtraction but no inverter
  8. Fast PWM DAC has no ripple

The post Parsing PWM (DAC) performance: Part 1—Mitigating errors appeared first on EDN.

That thing is a 20 kW noise generator!

Mon, 01/22/2024 - 17:48

It’s the mid-1990s and I am a design engineer for a company that designs and manufactures custom measurement systems and high-power electronics. Our products range from DSP modules for fiber optic current sensors used on high voltage transmission lines to ground power units for military aircraft. For more Tales from the Cube from my time at this company, see my article “The start-up transistor works once!”

The local electric utility set up a research program to get better load impedance data for their distribution network. They contacted our company and explained their needs for a special test generator.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale

They have detailed 60 Hz impedance numbers, but they want wideband information. The measurement must be done live, i.e., with power applied. Without power, relays and contactors drop out and the loads are disconnected, making the measurements almost useless. Also, many loads are non-linear: the current consumed is not linearly related to the applied voltage, so the operating voltage must be present during the tests to get meaningful data. The distribution lines they want to analyze carry three-phase power at 14.4 kV line-to-neutral at 60 Hz.

To get the data they need, they cannot change the power frequency: this would demand too much power and upset proper equipment operation. They cannot add a test sine wave onto the nominal 60 Hz sine wave either; this would also require very high power and risk damaging or interfering with proper customer installations.

Instead, they chose a technique used in electroacoustics: Maximum-Length Sequence (MLS) [1] [2], also called pseudo-random sequence. This technique uses a number of flip-flops and a few exclusive OR gates to generate a series of bits. This test sequence is applied to the circuit and both current and voltage are measured. With appropriate data manipulation, the impedance can be found for a broad range of frequencies. The sequence of bits can also be generated with software.

Since the number of bits generated by the method is the longest that can be made with a set circuit, it is “maximum-length” (the ML in MLS). Once all the bits are generated, the sequence repeats exactly as before. In this case, instead of using the sequence as a test signal voltage, the sequence will turn a known load resistor on and off according to the bit value.

Since MLS is not well known, details are given in the sidebar below:

Inside MLS

In the time domain, the bit sequence exhibits pseudorandom characteristics. It is random if analyzed for series of bits shorter than its maximum of (2**n)-1 bits, where n is the number of flip-flops. It is pseudo because the sequence repeats exactly after generating the maximum number of bits and not all combinations of bits are generated. True randomness does not repeat.

The circuit uses a clock to trigger the flip-flops. It has a period of t seconds. The lowest frequency generated is fmin = 1 / (t × ((2**n)-1)), which is the inverse of the repeat period.

In the frequency domain, for an analysis bandwidth larger than fmin, the spectrum resembles white noise: equal energy per equal bandwidth. The spectrum is composed of a series of sine waves, all whole-number harmonics of fmin [3].

A pseudorandom generator circuit built with one exclusive OR gate and with 3 flip-flops clocked at 7 kHz generates a repeating sequence of seven bits. Its repeat period is 1 ms. Of the 8 possible combinations of three consecutive bits, only one is missing from that sequence. In that sequence there is energy at six discrete frequencies from 1 kHz to 6 kHz before the first null at 7 kHz. All the frequencies have significant energy content.
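
For reference, a generator like the one just described can be modeled in a few lines of C. This sketch uses taps from a primitive polynomial for three stages, so the sequence is maximal-length at (2**3)-1 = 7 bits; it models the logic only, and the 7 kHz clocking is left to the hardware:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t reg = 0x7; /* three flip-flop stages; any nonzero seed works */
    for (int i = 0; i < 7; i++) {
        uint8_t out = reg & 1;                   /* bit that switches the load */
        uint8_t fb  = (reg ^ (reg >> 1)) & 1;    /* the single XOR of two stages */
        reg = (uint8_t)((reg >> 1) | (fb << 2)); /* shift right, feed back */
        printf("%u", out);
    }
    printf("\n"); /* prints one 7-bit period, e.g., 1110010, then it repeats */
    return 0;
}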

The exact multiples of the clock frequency, 7 kHz, 14 kHz, 21 kHz…all have zero amplitude. This is caused by the ((sin x)/x)**2 factor for the power spectrum. There is energy at frequencies above the clock frequency but of lower amplitude, as given by the function.

A similar circuit with the same clock frequency and one more flip-flop will have its sequence of 15 bits repeat every 2.14 ms and generate 14 discrete frequencies from 467 Hz to 6.53 kHz, each about 3.7 dB lower in amplitude than in the previous case. For a fixed bit voltage, the power available at each harmonic is inversely proportional to the number of discrete frequencies produced. In this case, the number of frequencies between the nulls increases from 6 to 14, so the amplitude is lowered by 10 × log(6/14), or about 3.7 dB.

For a given output power or voltage and a given clock frequency, the finer the frequency resolution, the lower the power available at each frequency.

With a pseudorandom bit sequence as the input stimulation, it is possible to extract the system’s impulse response and, from that, the system’s frequency response. By using a known load that is switched on or off according to the maximum-length sequence’s bit values, the network impedance can be computed.

Since a single sequence is needed for the computations, there is no need to repeat the signal and the duration of interference with normal service is minimized. If desired, the test can be repeated to improve the measurement signal to noise ratio.
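
To sketch the data-manipulation step (a simplified illustration, not the utility’s actual processing software): with the MLS bits mapped to +1/-1, a circular cross-correlation of the measured response against the stimulus recovers an impulse-response estimate, thanks to the sequence’s nearly ideal autocorrelation. L_SEQ = 7 matches the 3-flip-flop example above:

#define L_SEQ 7

/* y[] is one period of the measured response, s[] the +1/-1 stimulus bits;
   h[] receives the impulse-response estimate. The division by (L_SEQ + 1)
   is the usual MLS normalization. */
void mls_impulse_response(const double y[L_SEQ], const int s[L_SEQ],
                          double h[L_SEQ])
{
    for (int k = 0; k < L_SEQ; k++) {
        double acc = 0.0;
        for (int n = 0; n < L_SEQ; n++)
            acc += (double)s[(n + k) % L_SEQ] * y[n]; /* circular shift */
        h[k] = acc / (double)(L_SEQ + 1);
    }
}

A discrete Fourier transform of h[] then yields the wideband frequency response, and from the voltage and current channels, the impedance.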

In this project, the electric utility will provide the main computer and processing software as well as the voltage and current transformers while we will provide the switchable resistor load bank. The test load will be operated at 600 V via a transformer.

They estimated that a 20 kW load would enable them to get reasonably accurate data. The line current at full load is nominally 11.1 A at 600 V.

The test system is to be easily transportable. Most of the equipment will be inside the electric utility truck. The three test transformers—one for power and one each for voltage and current measurement—will be installed onto the 600 V lines. Then the test resistor will be connected.

Due to the resistor size and dissipation, we decide to build it onto a two-wheel moving dolly also called a hand truck. This will make the test resistor easier to move about. A box will house the resistors, a fan, the power and control circuits, and the power supply for the electronics.

The electronics are made up of six IGBT modules, six isolated gate drivers, and some control, as well as the monitoring and interface circuit. The IGBTs will connect the load resistor to the line according to the sequence bit value. The gate drivers have under-voltage, over-voltage, and over-current fault detection. Each gate driver has two optoisolators: one for the on-off command input and one for the status (OK or fault) output. The data connection to and from the test truck is via fiber optic. To power the electronics and the fan motor, 120 V is taken from the utility truck’s 120 VAC utility outlet.

During the initial testing, I am asked to give a hand debugging the test resistor. Debugging is done after office hours so that we do not disturb the other tenants’ electrical equipment.

Tony, the design engineer, tested the electronics and the resistors at low power with a DC power supply. When I meet with him, he has just connected the oscilloscope power plug to an isolation transformer because our newest four-channel 200 MHz digital oscilloscope keeps resetting during tests. Powering the oscilloscope from the isolated output cures the resets. For safety reasons, the oscilloscope frame is still connected to the safety ground.

The test he is running is a low-power three-phase test with 120 V applied to the resistors instead of 600 V. This reduces power to about 800 W. After a few seconds something smells. We cut the power and investigate. We both conclude that what we smell is the odor of dust and oil that has accumulated on the power resistors and that we can safely continue testing.

A minute later a different odor appears. Again, we shut down the power. This time it is the varistor that protects the 120 V power to the electronics. It has overheated and is burnt. We decide to remove it and power the load resistor electronics from the same isolation transformer used for the oscilloscope. The 800 W test signal sent to the building’s electric system came back through the wall plugs and overloaded the varistor. Maybe we should have bought a UPS for this test.

We power up again. After a short time, the system stops because of a driver fault. The IGBT driver IC detects various faults, and all faults are reported on a single output pin. The six driver fault lines are sent through six individual optoisolators. All six optoisolator outputs are connected to one microcontroller input pin. We have no way to identify a specific IGBT.

Tony tells me he has tested the supplies for the drivers: they are safely within the voltage limits, they can drive the current needed to turn on the IGBTs, and there are no shorts in the power load assembly.

We continue testing and we get more faults, all the same as before. The faults take a few seconds to a minute or so to appear. It is the longest time the unit has run. Before this time, we had to shut it down because of other problems.

I ask Tony to show me the PCB artwork. He says he can do better, and he gives me an unassembled board. I look at the board layout and everything is good. I search for the gap between the low voltage electronics and the IGBT driver side where optoisolators sit and I can’t find it. It should be easy to spot 12 optoisolators. Finally, I spot it. I had missed it because there is a trace between the isolated sections! One long trace is set between the isolated areas of all the optoisolators. I ask Tony what signal that trace carries: he says it is the fault signal from the drivers.

I tell him I think capacitive coupling is causing the fake fault signals. I tell him we should cut the trace completely from the 6 fault pins and replace it temporarily with a wire placed far from the power section. We do that and test again. The unit runs for 20 minutes without a fault. This confirms it: the trace between the two isolated circuits picked up stray capacitive current from the high voltage side and when it is big enough it is detected as a fault. Removing the trace and replacing the missing connections with a temporary wire kept the unwanted transients from causing false error detection.

I tell Tony to permanently remove the copper trace so that the PCB will be able to carry the full voltage without arcing from the high voltage side to the trace and from the offending trace to the low voltage circuits.

The next day I ask the junior engineer assigned to the PCB design why he used the empty space to place that trace. He says he had one trace left to route and saw the empty space and used it. I tell him about optoisolators isolation, clearance and creepage distances requirements for PCB design [4] to make sure he does not repeat this error.

The unit was later delivered to the customer who used it to complete their project and had no problems with it.

The lessons learned are:

  1. Make sure the PCB designers know all the requirements for the PCB, including isolation, clearance, and creepage distances as related to PCBs (IEC or UL 60950-1, section 2.10).
  2. Implement rigorous design reviews not just of the schematic but also of the printed circuit board; use a list of items to check and always keep your list up to date; the list here is a good start [5].
  3. Think of the PCB as a circuit element.
  4. Make a test plan at the start and review it regularly; this will allow you to buy test equipment and design test procedures and set-ups before it is too late.
  5. Know what overheated electronic parts smell like.
  6. Have a clear overall view of the project and its impact on the electrical environment, clearly our test resistor is a 20 kW wideband electric noise generator!

Daniel Dufresne is a retired engineer and has designed digital integrated circuits, active loudspeakers, negative resistors and high power electronic converters. He also was a professor at Cegep de Saint-Laurent. He earned his bachelor’s degree from Ecole Polytechnique de Montreal. He lives in Montreal, Canada and still works on electronic projects and repairs electronic test equipment.

Related Content

References

  1. Article about maximum length sequences on Wikipedia. https://en.wikipedia.org/wiki/Maximum_length_sequence [Page retrieved 2024-01-15].
  2. Jessie MacWilliams and Neil J. A. Sloane. Pseudo-Random Sequences and Arrays. Proceedings of the IEEE, Vol. 64, No 12, December 1976. Pages 1715-1729. It details the mathematical aspects of pseudo-random sequences and arrays.
  3. Hewlett-Packard Journal, September 1967, Volume 19, number 1. Gives detailed information on the spectra of pseudo-binary sequences. http://hparchive.com/Journals/HPJ-1967-09.pdf [Page retrieved 2024-10-18]
  4. A short video about creepage and clearance: What are creepage and clearance? (On line) https://training.ti.com/ti-precision-labs-isolation-what-are-creepage-and-clearance?cu=1135015 [Page retrieved 2024-10-15].
  5. Wallace, Hank. Electronics Design Checklist, (On line) http://www.jldsystems.com/pdf/Electronics%20Design%20Checklist.pdf [Page retrieved 2024-10-15]. There should be items added in regards to isolation, clearance for optoisolators and trace size and current rating, see IPC-2221 and IPC-2152.

The post That thing is a 20 kW noise generator! appeared first on EDN.

Memory lane: Where SOT-MRAM technology stands in 2024

Mon, 01/22/2024 - 14:28

Spin-orbit torque magnetic random-access memory (SOT-MRAM) is becoming more visible in next-generation memory offerings for its faster write speeds and much longer endurance. Two recent announcements from prominent semiconductor industry players highlight this new memory technology’s cost and complexity challenges and how they are being addressed to facilitate SOT-MRAM’s mass deployment.

That includes material engineering efforts to further reduce the switching energy per bit and work on optimizing bit cell configuration to shrink the cell area further compared to SRAM. But before we delve into these design initiatives’ technical nitty-gritty, it’s important to understand the fundamentals of SOT-MRAM technology and how it differs from spin-transfer torque magnetic random-access memory (STT-MRAM).

Below are some basic facts about SOT-MRAM and competing STT-MRAM technology and how these non-volatile MRAMs aim to replace SRAMs in chip designs.


Why SOT-MRAM?

In an era of new non-volatile memory (NVM) technologies, MRAMs have stimulated considerable research interest due to their high area density and low leakage power with comparable speed. However, the operational speed of MRAM is lower than that of SRAMs. That has led to the assimilation of advanced switching mechanisms like spin-transfer torque (STT) and spin-orbit torque (SOT) into MRAM designs.

At the cell level, STT-MRAM takes 50% less area with a 74% reduction in leakage power dissipation compared to SOT-MRAM. However, SOT-MRAMs are 4x faster than STT-MRAMs. Moreover, at the architectural level, SOT-MRAM outperforms STT-MRAM in terms of read/write energy and latency at the cost of marginal chip area and leakage power.

That’s why SOT-MRAM is seen as a promising candidate for replacing SRAM as a last-level cache (LLC) memory in high-performance computing (HPC) applications. Like SRAMs, SOT-MRAM offers a high switching speed in the sub-ns regime and promises robust endurance.

Additionally, being non-volatile, SOT-MRAM bit cells achieve lower standby power than SRAMs at high cell density. Next, SOT-MRAM bit cells can potentially be made much smaller than SRAM cells, translating into a higher bit packing density.

Compared to STT-MRAMs, SOT-MRAMs are a superior candidate because they feature switching of the free magnetic layer by injecting an in-plane current into an adjacent SOT layer. In STT-MRAMs, on the other hand, the current is injected perpendicularly into the magnetic tunnel junction (MTJ), and the read and write operations are performed through the same path.

Figure 1 In SOT-MRAM, the write current passes parallel to, or across, the layers, so the current can be set arbitrarily high without worrying about wear-out. Source: ITRI

Showcased at the International Electron Devices Meeting (IEDM) held in December 2023, two SOT-MRAM design undertakings can be seen as critical steps toward employing SOT-MRAM for cache memory applications.

Two MRAM design initiatives

First, imec showcased extremely scaled SOT-MRAM devices that can achieve a switching energy below 100 femtojoules per bit and >10^15 endurance. “It’s a reduction of 63% compared to conventional designs, and that helps address a remaining challenge of SOT-MRAM, which traditionally requires a high current for the write operation,” said Sebastien Couet, program director for magnetics at imec.

Scaling SOT-MRAM devices to their extreme—with the SOT track and MTJ pillar having comparable dimensions—also improves the memory’s endurance because it reduces Joule heating inside the SOT layer. “With an endurance beyond 10^15 program/erase cycles, we have experimentally validated our assumption that SOT-MRAM cells can have unlimited endurance, an important requirement for cache memories,” Couet added.

Figure 2 The cross-sectional view of the extremely scaled SOT device shows that the SOT track has the same length as the MTJ cell, unlike conventional SOT-MRAM designs. Source: imec

Imec has been experimenting with the scaling potential and limitations of single perpendicular SOT-MRAM devices processed on 300-mm wafers. With its findings unveiled at the IEDM 2023, imec has demonstrated that scaling the SOT track not only reduces the footprint of the SOT-MRAM cell, but also largely improves the cell’s performance and reliability.

Also, at the IEDM 2023, Taiwan’s Industrial Technology Research Institute (ITRI) and TSMC provided details about their joint design undertaking on SOT-MRAM in a paper. They claimed that their SOT-MRAM array chip boasts a power consumption of merely 1% of an STT-MRAM device.

“We have further co-developed a SOT-MRAM unit cell that achieves simultaneous low power consumption and high-speed operation, reaching speeds as rapid as 10 ns,” said Dr. Shih-Chieh Chang, director general of Electronic and Optoelectronic System Research Laboratories at ITRI. “Its overall computing performance can be further enhanced when integrated with computing in memory circuit design.”

STT-MRAM in 2024

It’s important to note that STT-MRAM technology has made some headway in the commercial arena. These memory devices feature SRAM-like performance with low latency and an extended temperature range to cater to diverse environmental demands. That makes them suitable for low-power applications in industrial and Internet of Things (IoT) designs.

However, SOT-MRAM seems to offer more promise for large LLC memory devices in artificial intelligence (AI) and other HPC applications. That makes it an important embedded memory technology to watch in 2024 and beyond.

Related Content


The post Memory lane: Where SOT-MRAM technology stands in 2024 appeared first on EDN.

Using an inductor to improve an existing design

Fri, 01/19/2024 - 18:38

Figure 1 shows a circuit from a previous DI:

Figure 1 Simple local low-noise voltage converter for use when a low-voltage negative supply is required.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It is simple, and its efficiency can be improved with one quick change. If both R1 and R1′ are fixed in value (which also fixes the input power), the output voltage has an extremum on the curve Eo = Eo(R2). To make this extremum easier to reach, the circuit in Figure 1 can be modified as shown in Figure 2; here both R2 and R2′ can be adjusted together with a single potentiometer (R2).

Figure 2 Added inductance to the output of Figure 1 to improve converter efficiency.

But the main alteration is the addition of inductance L at the output. A rather low value (0.1…1.0 mH) is sufficient. (This low value may seem counter-intuitive given the multivibrator's low operating frequency of less than 1 kHz.)

The negative output voltage increases slowly with increasing inductance: from -0.36 V at L = 0.1 mH to -0.4 V at L = 1 mH.

The main advantage is this ~25% increase in output voltage (and thus output current). While the circuit in Figure 1 tops out at -0.31 V, the circuit in Figure 2 can provide more than -0.39 V into the same 910-Ω load.

As for why this increase occurs… we'll see the explanation in the comments…

The second improvement is in output noise: the same inductance L decreases it significantly. Although the output capacitor in Figure 2 has half the capacitance, the amplitude of the output noise is nevertheless halved.

The component values are: L = 0.1…1.0 mH, R1 = R1' = 5.6 k, R2 ≈ 22 k, and C1 = C1' = 0.1 µF. The output capacitors should be low-impedance types.

The circuit consumes less than 1.5 mA from +5 V and produces more than -0.39 V across a 910-Ω load. The very first circuit (“Photocell makes true-zero output of the op-amp”) delivers the same output current while consuming about 10 times more power, though its output noise is about 100 times lower.
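To put rough numbers on that comparison, using only the figures above: the input power here is about 5 V × 1.5 mA = 7.5 mW, while the output power is roughly (0.39 V)²/910 Ω ≈ 0.17 mW.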

One problem can arise with all these circuits, though: they produce a low voltage that is not critical to the host system, so if it somehow drops, the results will be distorted and the fault can go unnoticed.

To make sure any drop in this voltage is detected, the circuit in Figure 3 can be used. It is also useful for monitoring the coherence of the supplies in any dual-supply system.

Figure 3 Circuit that detects any drop in the voltage that could otherwise distort the results of the converters in Figure 1 and Figure 2.

The green LED indicates “Power Good” and can serve as a “Power On” indicator for the whole host system. Resistors R1 and R2 should be stable, with 1% tolerance at least. The LED should turn on when the output voltage rises to e = -20…-100 mV, depending on your buffer parameters.

To choose R1 and R2, let:

v1 = Vref + |e|
v2 = 2.5 + |e|

then:

R1 = R2 × ((v1 / v2) - 1)
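This formula can be derived under an assumption about the Figure 3 topology (not spelled out in the text): R1 runs from the reference output to the TL431's ref pin, and R2 from that pin to the negative output e. The divider current is then (Vref - e)/(R1 + R2), and the ref pin reaches its 2.5-V threshold when:

2.5 - e = R2 × (Vref - e)/(R1 + R2)

Substituting e = -|e| and rearranging gives R1/R2 = (v1/v2) - 1.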

For the given reference and |e| = 50 mV:

R1 = 0.63 × R2

For example, R1 = 38.8 k and R2 = 62 k. These values may call for some trimming. The divider's total resistance cannot be made too low, because the converter's output current should be used sparingly; yet when the current through the divider is very low, the input current of the TL431 has far more influence, so a trim is recommended in this case. Finally, any other reference with an output voltage above 2.5 V can be used, but the values of R1 and R2 should then be recalculated.
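As a quick check of this arithmetic, here is a minimal Python sketch of the divider calculation. The 4.096-V reference is an assumption inferred from the worked example (it reproduces R1 ≈ 0.63 × R2) rather than a value stated above; substitute the actual reference voltage used in Figure 3.

# Minimal sketch of the Figure 3 divider calculation.
# Assumption: Vref = 4.096 V, inferred from the R1 = 0.63 * R2 example;
# replace it with the actual reference voltage of your circuit.
Vref = 4.096   # reference output, V (assumed)
e = 0.050      # trip threshold |e|, V
R2 = 62e3      # chosen R2 value, ohms

v1 = Vref + e            # full divider voltage at the trip point
v2 = 2.5 + e             # voltage across R2 when the TL431 ref pin sits at 2.5 V
R1 = R2 * (v1 / v2 - 1)

print(f"R1 = {R1 / 1e3:.1f} k")  # prints R1 = 38.8 k, matching the example above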

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post Using an inductor to improve an existing design appeared first on EDN.
