EDN Network

Voice of the Engineer

Cadence debuts AI thermal design platform

Thu, 02/01/2024 - 19:45

Cadence Celsius Studio, an AI-enabled thermal design and analysis platform for electronic systems, aims to unify electrical and mechanical CAD. The system addresses thermal analysis and thermal stress for 2.5D and 3D ICs and IC packaging, as well as electronics cooling for PCBs and complete electronic assemblies.

With Celsius Studio, electrical and mechanical/thermal engineers can concurrently design, analyze, and optimize product performance within a single unified platform. This eliminates the need for geometry simplification, manipulation, and/or translation. Built-in AI technology enables fast and efficient exploration of the full design space to converge on the optimal design.

The multiphysics thermal platform can simulate large systems with detailed granularity for any object of interest, including chip, package, PCB, fan, or enclosure. It combines finite element method (FEM) and computational fluid dynamics (CFD) engines to achieve complete system-level thermal analysis. Celsius Studio supports all ECAD and MCAD file formats and seamlessly integrates with Cadence IC, packaging, PCB, and microwave design platforms.

Customers seeking to gain early access to Celsius Studio can contact Cadence using the product page link below.

Celsius Studio product page

Cadence Design Systems 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Cadence debuts AI thermal design platform appeared first on EDN.

“Sub-zero” op-amp regulates charge pump inverter

Thu, 02/01/2024 - 16:55

Avoiding op-amp output saturation error by keeping op-amp outputs “live” and below zero volts is a job where a few milliamps and volts (or even fractions of one volt) of regulated negative rail can be key to achieving accurate analog performance. The need for voltage regulation arises because the sum of positive and negative rail voltages mustn’t exceed the recommended limits of circuit components (e.g., only 5.5 V for the TLV9064 op-amp shown in Figure 1). Unregulated inverters may have the potential (no pun!) to overvoltage sensitive parts and therefore may not be suitable.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the circuit: A simple regulated charge pump inverter based on the venerable and versatile HC4053 triple SPDT CMOS switch and most any low power RRIO op-amp. It efficiently and accurately inverts a positive voltage rail, generating a programmable negative output that’s regulated to a constant fraction of the positive rail. With Vin = 5 V, its output is good for currents from zero up to nearly 20 mA, the upper limit depending on the Vee level chosen by the R1:R2 ratio. It’s also cheap with a cost that’s competitive with less versatile devices like the LM7705. It’s almost unique in being programmable for outputs as near zero as you like, simply set by the choice for R2.

But enough sales pitch.  Here’s how it works.

Figure 1 U1 makes an efficient charge pump voltage inverter with comparator op-amp A1 providing programmable regulation.

U1a and U1b act in combination with C2 to form an inverting flying-capacitor pump that transfers negative charge to filter capacitor C3 to maintain a constant Vee output controlled by A1. Charge pumping occurs in a cycle that begins with C2 being charged to V+ via U1a, then completes by partially discharging C2 into C3 via U1b. Pump frequency is roughly 100 kHz under control of the U1c Schmitt-trigger-style oscillator, so that a transfer occurs every 10 µs. Note the positive feedback around U1c via R3 and the negative feedback via R4, R5, and C1.
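The ~160 mV/mA output droop mentioned below falls naturally out of a standard switched-capacitor model: the effective source resistance is the 1/(f·C) charge-transfer term plus the switch resistances in the path. A rough sketch follows; the article gives only the ~100 kHz pump frequency, so the C2 and switch on-resistance values here are illustrative assumptions, not values from Figure 1:

```python
# Idealized flying-capacitor inverter model. C2 and the HC4053
# on-resistance are assumed values chosen for illustration.
F_PUMP = 100e3   # pump frequency (Hz), per the article
C_FLY = 1e-6     # assumed flying capacitor C2 (F)
R_ON = 75.0      # assumed per-switch on-resistance (ohms)

def output_resistance(f=F_PUMP, c_fly=C_FLY, r_on=R_ON):
    """Effective source resistance of a switched-capacitor inverter:
    the 1/(f*C) charge-transfer term plus two switches in series."""
    return 1.0 / (f * c_fly) + 2.0 * r_on

r_out = output_resistance()            # ~160 ohms with these assumptions
droop_mv_per_ma = r_out * 1e-3 * 1e3   # Vee droop in mV per mA of load
```

With these assumed values the model lands on roughly 160 Ω, i.e., about 160 mV of Vee decline per mA of load, consistent with the measured behavior described below.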

Figure 2 shows performance under load with the R2:R1 ratio shown.

Figure 2 Output voltage and current conversion efficiency vs output current for +Vin = 5 V.

No-load current draw is less than 1 mA, divided between U1 and A1, with A1 taking the lion’s share. If Vee is lightly loaded, it can approach -V+ until A1’s regulation setpoint (Vee = – R2/R1 * V+) kicks in. Under load, Max Vee will decline at ~160 mV/mA but Vee remains rock solid so long as the Vee setpoint is at least slightly less negative than Max Vee.
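Those numbers can be tied together in a quick model: the setpoint formula plus the ~160 mV/mA droop bounds the maximum load current for which regulation holds. The R1/R2 values below are hypothetical picks for a -0.5 V output, not component values from Figure 1:

```python
# Regulation setpoint and headroom check. R1/R2 are assumed values
# chosen to give a -0.5 V setpoint; only the ratio matters.
V_PLUS = 5.0
R1, R2 = 100e3, 10e3     # assumed divider: Vee = -(R2/R1) * V+ = -0.5 V
R_OUT = 160.0            # ~160 mV/mA droop from the measured data

def vee_setpoint(v_plus=V_PLUS, r1=R1, r2=R2):
    return -(r2 / r1) * v_plus

def max_regulated_current(v_plus=V_PLUS, r1=R1, r2=R2, r_out=R_OUT):
    """Largest load current for which the pump can still reach the
    setpoint: the achievable |Vee| = V+ - I*R_out must exceed it."""
    return (v_plus - abs(vee_setpoint(v_plus, r1, r2))) / r_out
```

For a -0.5 V setpoint this ideal bound works out to about 28 mA; the article’s "nearly 20 mA" figure is plausibly this limit minus real-world losses the model ignores.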

A word about “bootstrapping”: Switch U1b needs to handle negative voltages, but the HC4053 datasheet tells us this can’t work unless the chip is supplied with a negative rail at pin 7. So U1’s first task is to supply (bootstrap) a negative supply for itself via the connection of pin 7 to Vee.

“Sub-zero” comparator op-amp A1 maintains Vee = – R2/R1 * V+ via negative feedback through R6 to U1 pin 6 Enable. When Vee is more positive than the setpoint, A1 pulls pin 6 low, enabling the charge pump U1c oscillator and the charging of C3. Contrariwise, Vee at setpoint causes A1 to drive pin 6 high, disabling the pump. When pin 6 is high, all U1’s switches open, isolating C2 and conserving residual charge for subsequent pump cycles. R6 limits pin 6 current when Vee < -0.5 V.

Figure 3 shows how a -500-mV sub-zero negative rail can enable typical low-voltage op-amps (e.g., TLV900x) to avoid saturation at zero over the full span of rated operating temperature for output currents up to 10 mA and beyond. Less voltage or less current capability might compromise accurate analog performance.

Figure 3 Vee = -500 mV is ideal for avoiding amplifier saturation without overvoltaging LV op-amps.

U1’s switches are break-before-make, which helps both with pump efficiency and with minimizing Vee noise, but C3 should be a low ESR type to keep the 100 kHz ripple low (about 1 mVpp @ Iee = 10 mA). You can also add a low inductance ceramic in parallel with C3 if high frequency spikes are a concern.

Footnote: I’ve relied on the 4053 in scores of designs over more than a score of years, but this circuit is the very first time I ever found a practical use for pin 6 (-ENABLE). Learn something new every day!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post “Sub-zero” op-amp regulates charge pump inverter appeared first on EDN.

Prosumer and professional cameras: High video quality, but a connectivity vulnerability

Wed, 01/31/2024 - 18:14

As I’ve recently mentioned a few times, I’m ramping up my understanding and skill set on a couple of Blackmagic Design Pocket Cinema Cameras (BMPCCs), both 6K in maximum captured resolution: a first-generation model based on the BMPCC 4K and using Canon LP-E6 batteries:

and the second-generation successor with a redesigned body derived from the original BMPCC 6K Pro. It uses higher-capacity Sony NP-F570 batteries, has an integrated touchscreen LCD that’s position-adjustable, and is compatible with an optional electronic viewfinder (which I also own):

I’m really enjoying playing with them both so far, steep learning curve aside, but as I use them, I can’t shake the feeling that I’ve got ticking time bombs in my hands. As I’ve also mentioned recently, cameras like these are commonly used in conjunction with external “field” monitors, whether wirelessly- or (for purposes of this writeup’s topic) wired-connected to the camera:

And as I’ve also recently mentioned, it’s common to power cameras like these from a beefy external battery pack such as this 155 Wh one from SmallRig:

or a smaller-capacity sibling that’s airplane-travel amenable:

Such supplemental power sources commonly offer multiple outputs, directly and/or via a battery plate intermediary:

enabling you to fuel not only the camera but also the field monitor, a nearby illumination source, a standalone microphone preamp, an external high-performance SSD or hard drive, and the like. Therein lies the crux of the issue I’m alluding to. Check out, to start, this Reddit thread.

The gist of the issue, I’ve gathered (reader insights are also welcomed), is that if you “hot-socket” either the camera or the display (connecting either that device’s power or the common HDMI cable) while the other device is already powered up, there’s a finite chance that the power supply return current (specifically the startup spike) will route through the HDMI connection instead, frying the HDMI transceiver inside the camera and/or display (and maybe other circuitry as well). The issue seems most common, though not exclusively so, when both the camera and display are fed by the same power source without sharing a common ground, and when they’re running on different supply voltages.

I ran the situation by my technical contact at Blackmagic after stumbling across it online, and here’s what he had to say:

Our general recommendation is to…

  • Power down all the devices used if they have internal or built-in batteries
  • Connect the external power sources to all devices
  • Connect the HDMI/SDI cable between the devices
  • Power on the devices

Sounds reasonable at first glance, doesn’t it? But what if you’re a professional with clients that pay by the hour and want to keep their costs at a minimum, and you want to keep them happy, or you’re juggling multiple clients in a day? Or if you’re just an imperfectly multitasking prosumer (aka power user) like me? In the rush of the moment, you might forget to power the camera off before plugging in a field monitor, for example. And then…zap.

My initial brainstorm on a solution was to switch from conventional copper-based HDMI cables to optical ones. Two problems with this idea, though: they tend to be bulkier than their conventional counterparts, which is particularly problematic with the short cable runs used with cameras as well as a general desire for svelteness, both again exemplified by SmallRig products:

The other issue, of course, is that optical HDMI cables aren’t completely optical. Quoting from a CableMatters blog post on the topic:

A standard HDMI cable is made up of several twisted pairs of copper wiring, insulated and protected with shielding and silicon wraps. A fiber optic HDMI cable, on the other hand, does away with the central twisted copper pair, but still retains some [copper strands]. At its core are four glass filaments which are encased in a protective coating. Those glass strands transmit the data as pulses of light, instead of electricity. Surrounding those glass fibers are seven to nine twisted copper pairs that handle the power supply for the cable, one for Consumer Electronics Control (CEC), two for sound return (ARC and eARC), and one set for a Display Data Channel (DDC) signal.

My Blackmagic contact also wisely made the following observations, by the way:

It may not be fair to say that Blackmagic Pocket Cinema Cameras are especially susceptible to issues that could affect any camera. Any camera used in the same situation would be affected equally. (Hence the references to Arri camera white papers in the sources you quoted)

He’s spot-on. This isn’t a Blackmagic-specific issue. Nor is it an HDMI-specific issue, hence my earlier allusion to SDI (the Serial Digital Interface), which also comes in copper and fiber variants. Here’s a Wikipedia excerpt, for those not already familiar with the term (and the technology).

Serial digital interface (SDI) is a family of digital video interfaces first standardized by SMPTE (The Society of Motion Picture and Television Engineers) in 1989…These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio and time code) within television facilities; they can also be used for packetized data. SDI is used to connect together different pieces of equipment such as recorders, monitors, PCs and vision mixers.

In fact, a thorough and otherwise excellent white paper on the big-picture topic, which I commend to your attention, showcases SDI (vs HDMI) and Arri cameras (vs Blackmagic ones).

To wit, and exemplifying my longstanding theory that it’s possible to find and buy pretty much anything (legal, at least) on eBay, I recently stumbled across (and of course acted on and purchased, for less than $40 total including tax and shipping) a posting for the battery-acid-damaged motherboard of a Blackmagic Production Camera 4K, which dates from 2014. Here are some stock images of the camera standalone:

Rigged out:

And in action:

Now for our mini-teardown patient. I’ll start out with a side view, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Compare this to the earlier stock shot of the camera and you’ll quickly realize that the penny’s location corresponds to the top edge of the camera in its operating orientation. Right-to-left (or, if you prefer, top-to-bottom), the connections are (copy-and-pasting from the user manual, with additional editorializing by yours truly in brackets):

  • LANC [the Sony-championed Logic Application Control Bus System or Local Application Control Bus System] REMOTE: The 2.5mm stereo jack for LANC remote control supports record start and stop, and iris and focus control on [Canon] EF [lens] mount models.
  • HEADPHONES: 3.5 mm [1/8”] stereo headphone jack connection.
  • AUDIO IN: 2 x 1/4 inch [6.35 mm] balanced TRS phono jacks for mic or line level audio.
  • SDI OUT: SDI output for connecting to a switcher [field monitor] or to DaVinci Resolve via capture device for live grading.
  • THUNDERBOLT CONNECTION: Blackmagic Cinema Camera outputs 10-bit uncompressed 1080p HD. Production Camera 4K also outputs compressed Ultra HD 4K. Use the Thunderbolt connection for HD UltraScope waveform monitoring and streaming video to a Thunderbolt compatible computer.
  • POWER: 12 – 30V power input for power supply and battery charging.

Now for an overview shot of the front of the main system PCB I bought:

After taking this first set of photos, I realized that I’d oriented the PCB 180° from how it would be when installed in the camera in its operating orientation (remember, the power input is at the bottom). This explains why the U.S. penny is upside-down in the pictures; I re-rotated the images in more intuitive-to-you orientations before saving them!

Speaking of which, above and to the right of the U.S. penny is the battery acid damage I mentioned earlier; it would make sense to have the battery near the power input, after all. One unique thing about this camera versus all the ones I own is that the battery is embedded, not user removable (I wonder how much Blackmagic charged as a service fee to replace it after heavy use had led to the demise of the original?).

Another thing to keep in mind is that the not-shown image sensor is in front of this side of the PCB. Here’s another stock image which shows (among other things) the Super 35-sized image sensor peeking through the lens mount hole:

My guess would be that the long vertical connector on the left side of the PCB, to the right of the grey square thing I’ll get to shortly, mates to a daughter card containing the image sensor.

I bet that many of you had the same thought I did when I first reviewed this side of the PCB…holy cow, look at all those chips! Right? Let’s zoom in a bit for a closer inspection:

This is the left half. Again, note the vertical connector and the mysterious grey square to the left of it (keep holding that thought; I promise I’ll do a reveal shortly!). Both above and below it are Samsung K4B4G1646B-HCK0 4 Gbit (256 Mbit × 16) DDR3 SDRAMs, four total, for 2 GBytes of total system RAM. I’m betting that, among other things, the RAM array temporarily holds each video frame’s data streamed off the global shutter image sensor (FYI, I plan to publish an in-depth tutorial on global vs rolling shutter sensors, along with other differentiators, in EDN soon!) for in-camera processing purposes prior to SSD storage.
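As a sanity check on that frame-buffering theory, here’s a back-of-the-envelope calculation. The Ultra HD resolution and 12-bit raw depth are my assumptions for illustration, not anything read off the board:

```python
# Rough frame-buffering estimate for the 2 GB DRAM array.
# Resolution and raw bit depth below are assumptions.
chips, gbit_per_chip = 4, 4
ram_bytes = chips * gbit_per_chip * (2**30 // 8)   # 2 GiB total

width, height, bits_per_px = 3840, 2160, 12        # assumed UHD, 12-bit raw
frame_bytes = width * height * bits_per_px // 8    # ~11.9 MiB per frame

frames_buffered = ram_bytes // frame_bytes         # ~172 frames
seconds_at_30fps = frames_buffered / 30            # ~5.7 s of headroom
```

Under those assumptions the array could hold several seconds of raw frames, which is plenty of elasticity between the sensor readout and the in-camera compression/SSD-write pipeline.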

And here’s the right half:

Wow, look at all that acid damage! I’m guessing the battery either leaked due to old age or exploded due to excessive applied charging voltage. Other theories, readers?

I realize I’ve so far skipped over a bunch of potentially interesting ICs. And have I mentioned that mysterious grey square yet? Let’s return to the left side, this time zoomed in even more (and ditching the penny) and dividing the full sequence into thirds. That grey patch is thermal tape, and it peeled right off the IC below it (here’s its adhesive underside):

Exposing to view…a FPGA!

Specifically, it’s a Xilinx (now AMD) Kintex 7 XC7K160T. I’d long suspected Blackmagic based its cameras on programmable logic vs an ASIC-based SoC, considering their:

  • Modest production volumes versus consumer camcorders
  • High-performance requirements
  • High functionality, therefore elaborate connectivity requirements, and
  • Fairly short operating time between battery charges, implying high power consumption.

The only thing that surprised me was that Blackmagic had gone with a classic FPGA versus one with an embedded “hard” CPU core, such as Xilinx-now-AMD’s Arm-based Zynq-7000 family. That said, I’d be willing to bet that there’s still a MicroBlaze “soft” CPU core implemented inside.

Other ICs of note in this view include, at the bottom left corner, a Cypress Semiconductor (now Infineon) CY7C68013A USB 2.0 controller, to the right of and below a mini-USB connector which is exposed to the outside world via the SSD compartment and finds use for firmware updates:

In the lower right corner is the firmware chip, a Spansion (also now Infineon) S25FL256S 256 Mbit flash memory with a SPI interface. And along the right side, to the right of that long tall connector I’ve already mentioned, is another Cypress (now Infineon) chip, the CY24293 dual-output PCI Express clock generator. I’m guessing that’s a PCIe 1.0 connector, then?

Now for the middle segment:

Interesting (at least to me) components here that I haven’t already mentioned include the diminutive coin cell battery in the upper left, surrounded on three sides by LM3100 voltage regulators (I “think” originally from National Semiconductor, now owned by Texas Instruments…there are at least four more LM3100s, along with two LM3102s, that I can count in various locales on the board). Power generation and regulation is obviously a key focus of this segment of the circuitry. That all said, toward the center is another Xilinx-now-AMD programmable logic chip, this one a XC9572XL CPLD. Also note the four conductor strips at top, jointly labeled JT3 (and I’m guessing used for testing).

Finally, the right side:

Connectivity dominates the landscape here, along with acid damage (it gets uglier the closer you get to it, doesn’t it?). Note the speaker and microphone connectors at top. And toward the middle, alongside the dual balanced audio input plugs, are two Texas Instruments TLV320AIC3101 low-power stereo audio codecs; in-between them is a National Semiconductor-now-Texas Instruments L49743 audio op amp.

Last, but not least, let’s look at the other side of the PCB:

From an IC standpoint it’s comparatively unremarkable versus the other side, aside from the oddly unpopulated J14 and U19 sites at the top. What it lacks in chip excitement (unless you’re into surface-mount passives, I guess), it makes up for in connector curiosity.

On the left side (I’d oriented the PCB correctly straightaway this time, therefore the non-upside-down Abraham Lincoln on the penny):

there’s first a flex PCB connector up top (J12) which, perhaps obviously given its labeling, is intended for the LCD on the camera’s backside (but not its integrated touch interface…keep reading). In the middle is, I believe, the 2.5” SATA connector for the SSD. And on the bottom edge are, left to right, the connectors for the battery, the cable that runs to the electrical connectors on the lens mount (I’m guessing here based on the “EF POGO” phrase) and a Peltier cooler. Here’s a Wikipedia excerpt on the latter, for those not already familiar with the concept:

Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction of two different types of materials. A Peltier cooler, heater, or thermoelectric heat pump is a solid-state active heat pump which transfers heat from one side of the device to the other, with consumption of electrical energy, depending on the direction of the current. Such an instrument is also called a Peltier device, Peltier heat pump, solid state refrigerator, or thermoelectric cooler (TEC) and occasionally a thermoelectric battery.

Also note the two four-pad conductor clusters, one of them at the top; this time (versus the earlier-mentioned JT3) they’re unlabeled and on only one side of the board. And what’s under that tape? Glad you asked:

And now for the other (right) side:

Oodles o’passives under the FPGA, as previously noted, plus a few more connectors that I haven’t already mentioned. On the top edge are ones for the back panel touchscreen and the up-front record button, while along the bottom edge (again, left to right) are ones for the additional (back panel, this time) interface buttons and a fan. Yes, this camera contains both a Peltier cooler and a fan!

That’s “all” I’ve got for you today. I welcome any reader thoughts on the upfront HDMI/SDI connectivity issue, along with anything from the subsequent mini-teardown, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Prosumer and professional cameras: High video quality, but a connectivity vulnerability appeared first on EDN.

HBM memory chips: The unsung hero of the AI revolution

Wed, 01/31/2024 - 12:28

Memory chips like DRAMs, long subject to cyclical trends, are now eyeing a more stable and steady market: artificial intelligence (AI). Take the case of SK hynix, the world’s second-largest supplier of memory chips. According to its chief financial officer, Kim Woo-hyun, the company is ready to grow into a total AI memory provider by leading changes and presenting customized solutions.

The South Korean chipmaker has been successfully pairing its high-bandwidth memory (HBM) devices with Nvidia’s H100 graphics processing units (GPUs) and others for processing vast amounts of data in generative AI. Large language models (LLMs) like ChatGPT increasingly demand high-performance memory chips to enable generative AI models to store details from past conversations and user preferences to generate human-like responses.

Figure 1 SK hynix is consolidating its HBM capabilities to stay ahead of the curve in AI memory space.

In fact, AI companies are complaining that they can’t get enough memory chips. OpenAI CEO Sam Altman recently visited South Korea, where he met senior executives from SK hynix and Samsung, the world’s two largest memory chip suppliers, followed by Micron of the United States. OpenAI’s ChatGPT technology has been vital in spurring demand for processors and memory chips running AI applications.

SK hynix’s HBM edge

SK hynix’s lucky break in the AI realm came when it surpassed Samsung by launching the first HBM device in 2015 and gained a massive head start in serving GPUs for high-speed computing applications like gaming cards. HBM vertically interconnects multiple DRAM chips to dramatically increase data processing speed compared with earlier DRAM products.

Not surprisingly, therefore, these memory devices have been widely used to power generative AI devices operating on high-performance computing systems. Case in point: SK hynix’s sales of HBM3 chips increased by more than fivefold in 2023 compared to a year earlier. A Digitimes report claims that Nvidia has paid SK hynix and Micron advances ranging from $540 million to $770 million to secure the supply of HBM memory chips for its GPU offerings.

SK hynix plans to proceed with the mass production of the next version of these memory devices—HBM3E—while also carrying out the development of next-generation memory chips called HBM4. According to reports published in the Korean media, Nvidia plans to pair its H200 and B100 GPUs with six and eight HBM3E modules, respectively. HBM3E, which significantly improves speed compared to HBM3, can process data up to 1.15 terabytes per second.
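Those bandwidth figures are easy to sanity-check. Assuming the standard 1,024-bit HBM stack interface, the quoted 1.15 TB/s per stack implies roughly 9 Gb/s per data pin:

```python
# Sanity-checking the quoted HBM3E bandwidth. The 1,024-bit width is
# the standard per-stack HBM interface; the rest is arithmetic.
STACK_BW_BYTES = 1.15e12   # 1.15 TB/s per stack, from the article
BUS_WIDTH_BITS = 1024

per_pin_gbps = STACK_BW_BYTES * 8 / BUS_WIDTH_BITS / 1e9   # ~9.0 Gb/s/pin

# Aggregate for an H200 with six stacks, per the reports cited above:
h200_total_tb_s = 6 * STACK_BW_BYTES / 1e12                # ~6.9 TB/s
```

Six such stacks on an H200-class GPU would total nearly 7 TB/s of memory bandwidth, which is why GPU vendors are pre-paying to lock up supply.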

Figure 2 SK hynix is expected to begin mass production of HBM3E in the first half of 2024.

The Korean memory supplier calls HBM3E an AI memory product while claiming technological leadership in this space. While both Samsung and Micron are known to have their HBM3E devices ready and in the certification process at AI powerhouses like Nvidia, SK hynix seems to be a step ahead of its memory rivals. Take, for example, HBM4, currently under development at SK hynix; it’s expected to be ready for commercial launch in 2025.

What’s especially notable about HBM4 is its ability to stack memory directly on processors, eliminating interposers altogether. Currently, HBM stacks integrate 8, 12, or 16 memory devices placed next to CPUs or GPUs and connected to those processors through a silicon interposer. Integrating memory directly onto processors will change how chips are designed and fabricated.

An AI memory company

Industry analysts also see SK hynix as the key beneficiary of the AI-centric memory upcycle because it’s a pure-play memory company, unlike its archrival Samsung. It’s worth noting that Samsung is also heavily investing in AI research and development to bolster its memory offerings.

AI does require a lot of memory and it’s no surprise that South Korea, housing the top two memory suppliers, aspires to become an AI powerhouse. SK hynix, on its part, has already demonstrated its relevance in designs for AI servers and on-device AI adoption.

While talking about memory’s crucial role in generative AI at CES 2024 in Las Vegas, SK hynix CEO Kwak Noh-Jung vowed to double the company’s market cap in three years. That’s why the company is now seeking to become a total AI memory provider while pursuing a fast turnaround with high-value HBM products.

Related Content


The post HBM memory chips: The unsung hero of the AI revolution appeared first on EDN.

Power Tips #125: How an opto-emulator improves reliability and transient response for isolated DC/DC converters

Tue, 01/30/2024 - 17:40

In high-voltage power-supply designs, safety concerns require isolating the high-voltage input from the low-voltage output. Designers typically use magnetic isolation in a transformer for power transfer, while an optocoupler provides optical isolation for signal feedback.

One of the main drawbacks of optocouplers in isolated power supplies is their reliability. The use of an LED in traditional optocouplers to transmit signals across the isolation barrier leads to wide part-to-part variation in the current transfer ratio over temperature, forward current, and operating time. Optocouplers are also lacking in terms of isolation performance, since they often use weak insulation materials such as epoxies or sometimes just an air gap.

A purely silicon-based device that emulates the behavior of an optocoupler such as the Texas Instruments (TI) ISOM8110 remedies these issues since it removes the LED component, uses a resilient isolation material such as silicon dioxide, and is certified and tested under a much more stringent standard [International Electrotechnical Commission (IEC) 60747-17] compared to the IEC 60747-5-5 optocoupler standard (see this application note for more details).

An optocoupler’s lack of reliability over time and temperature has meant that many sectors, such as automotive and space, have had to rely on primary-side regulation or other means to regulate the output. An opto-emulator contributes to improved reliability and also provides substantial improvements in transient and loop response without increasing the output filter.

Typically, the limiting factor in the bandwidth of an isolated power supply is the bandwidth of the optocoupler. This bandwidth is limited by the optocoupler pole, formed from its intrinsic parasitic capacitance and the output bias resistor. Using an opto-emulator eliminates this pole, which leads to higher bandwidth for the entire system without any changes to the output filter. Figure 1 and Figure 2 show the frequency response of an isolated flyback design tested with an optocoupler and opto-emulator, respectively.
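To see why that pole matters, plug representative numbers into f_p = 1/(2πRC). The parasitic-capacitance and bias-resistor values below are illustrative assumptions, not measurements from the boards in question:

```python
import math

# The optocoupler pole that limits loop bandwidth: f_p = 1/(2*pi*R*C).
# Both component values here are illustrative assumptions.
C_PARASITIC = 2e-9   # assumed optocoupler parasitic capacitance (F)
R_BIAS = 10e3        # assumed output bias resistor (ohms)

def pole_hz(r=R_BIAS, c=C_PARASITIC):
    return 1.0 / (2.0 * math.pi * r * c)

f_pole = pole_hz()   # ~8 kHz with these assumed values
```

With these assumed values the pole lands near 8 kHz, the same order as the 8.6 kHz loop bandwidth measured for the optocoupler design below; removing the pole is what frees the loop to be compensated for much higher crossover.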

Figure 1 Total bandwidth of an isolated power supply using the TCMT1107 optocoupler. Source: Texas Instruments

Figure 2 Total bandwidth of an isolated power supply using the ISOM8110 opto-emulator. Source: Texas Instruments

The target for both designs was to increase the overall bandwidth while still maintaining 60 degrees of phase margin and 10 dB of gain margin. Table 1 lists the side-by-side results.

 

                          Optocoupler   Opto-emulator
  Bandwidth (kHz)             8.6           38.2
  Phase margin (degrees)     60.2           67.4
  Gain margin (dB)           18.7           11.62

Table 1 Optocoupler versus opto-emulator frequency response results.

The increased bandwidth of the opto-emulator helps achieve more than a fourfold increase in the overall bandwidth of the design while maintaining phase and gain margins. Figure 3 highlights the changes made to the compensation network of the opto-emulator board versus the optocoupler board. As you can see, these changes are minimal and only require changing a total of three passive components. Another benefit of the opto-emulator is that it is pin-for-pin compatible with most optocouplers, so it doesn’t require a new layout for existing designs.

Figure 3 Schematic changes made to the compensation network of the opto-emulator board versus the optocoupler board. Source: Texas Instruments

Only the compensation components around the TL431 shunt voltage regulator were modified from one design to the other. Other than C19, C22 and R20, the rest of the design was identical, including the power-stage components, which include the output capacitance.

Because of the quadruple increase in the bandwidth, we were able to improve transient response significantly as well, without adding any more capacitance to the output. Figure 4 and Figure 5 show the transient response of the optocoupler and opto-emulator designs, respectively.

Figure 4 The transient response for the optocoupler design. Source: Texas Instruments

Figure 5 The transient response for the opto-emulator design showing a greater than 50% reduction in overall transient response. Source: Texas Instruments

The load step and the slew rate were the same in both tests. The load-step response went from –1.04 V in the optocoupler to –360 mV in the opto-emulator, and the load-dump response decreased from 840 mV to 260 mV. This is a > 50% reduction in the overall transient response, without adding more output capacitors.
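A quick check of those measured numbers confirms the claim:

```python
# Verifying the ">50% reduction" claim from the measured transients.
opto = {"load_step_mV": 1040, "load_dump_mV": 840}   # optocoupler design
emul = {"load_step_mV": 360,  "load_dump_mV": 260}   # opto-emulator design

step_reduction = 1 - emul["load_step_mV"] / opto["load_step_mV"]   # ~65%
dump_reduction = 1 - emul["load_dump_mV"] / opto["load_dump_mV"]   # ~69%
```

Both deviations shrink by roughly two-thirds, comfortably beyond the 50% figure, with no change to the output capacitance.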

Opto-emulator benefits

Because of the significant bandwidth improvement that an opto-emulator provides over an optocoupler, designers can reduce the size of their output capacitor without sacrificing transient performance in isolated designs that are cost- and size-sensitive.

An opto-emulator also provides more reliability than an optocoupler by enabling secondary-side regulation in applications that could not use optocouplers before, such as automotive and space. With the increase in bandwidth, an opto-emulator can provide higher bandwidth for the overall loop of the power supply, leading to significantly better transient response without increasing the output capacitance. For existing designs, an opto-emulator’s pin-for-pin compatibility with most optocouplers allows for drop-in replacements, with only minor tweaks to the compensation network.

Sarmad Abedin has been a systems engineer at Texas Instruments since 2011. He works for the Power Design Services team in Dallas, TX. He has been designing custom power supplies for over 10 years, specializing in low-power AC/DC applications. He graduated from RIT in 2010 with a BS in Electrical Engineering.

Related Content


The post Power Tips #125: How an opto-emulator improves reliability and transient response for isolated DC/DC converters appeared first on EDN.

Conquer design challenges: Skills for power supplies

Mon, 01/29/2024 - 19:10

In the dynamic world of engineering design, the escalating requirements placed on power systems frequently give rise to intricate design challenges. The evolving landscape of DC power systems introduces complexities that can become stumbling blocks in the design process. Fundamental power supply skills play a crucial role in mitigating these challenges.

Today’s advanced DC power systems are not immune to design problems, and a solid foundation in power supply knowledge can prove instrumental in navigating and overcoming hurdles. Whether it’s discerning the intricacies of device under test (DUT) voltage or current, addressing unforeseen temperature fluctuations, or managing noise sensitivity, a fundamental understanding of power supplies empowers designers to identify and tackle the nuanced issues embedded within a power system.

Understanding constant voltage and constant current

One of the most important concepts for anyone using power supplies is understanding constant voltage (CV) and constant current (CC) operation. Engineers getting started with power supplies must know the basic rules governing CV and CC. The output of a power supply operates in either CV or CC mode depending on the voltage setting, the current limit setting, and the load resistance.

In scenarios where the load current remains low and the drawn current falls below the preset current limit, the power supply seamlessly transitions into CV mode. This mode is characterized by the power supply regulating the output voltage to maintain a constant value. In essence, the voltage becomes the focal point of control, ensuring stability, while the current dynamically adjusts based on the load requirements. This operational behavior is particularly advantageous when dealing with varying loads, as it allows the power supply to cater to diverse current demands while steadfastly maintaining a consistent voltage output.

In instances where the load current surges to higher levels, surpassing the predefined current setting, the power supply shifts into CC mode. This response involves the power supply imposing a cap on the current, restricting it to the pre-set value. Consequently, the power supply functions as a guardian, preventing the load from drawing excessive current.

In CC mode, the primary focus of regulation shifts to the current, ensuring it remains consistent and in line with the predetermined setting. Meanwhile, the voltage dynamically adjusts in response to the load’s requirements. This operational behavior is particularly crucial in scenarios where the load’s demands fluctuate, as it ensures a stable and controlled current output, preventing potential damage to both the power supply and the connected components. Understanding this interplay between voltage and current dynamics is essential for engineers and users to harness the full potential of power supplies, especially in applications with varying load conditions.

Most power supplies are optimized for CV operation. This means the power supply honors the voltage setting first and adjusts all other secondary variables to achieve the programmed voltage. For a visual representation of the operating locus of a CC/CV power supply, see Figure 1.

Figure 1 The operating locus of a CC/CV power supply. Source: Keysight
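The CV/CC behavior described above can be sketched as a simple model of an ideal supply driving a resistive load: the mode follows from comparing the current the load would draw at the programmed voltage against the current limit. The function and numbers below are illustrative, not from any particular instrument:

```python
def supply_output(v_set: float, i_limit: float, r_load: float):
    """Return (mode, v_out, i_out) for an ideal CV/CC supply driving a
    resistive load. CV mode: voltage is regulated and current follows the
    load; CC mode: current is capped and voltage follows the load."""
    i_demand = v_set / r_load  # current the load would draw at the full programmed voltage
    if i_demand <= i_limit:
        return ("CV", v_set, i_demand)
    return ("CC", i_limit * r_load, i_limit)

# 12 V setting with a 1 A current limit:
print(supply_output(12.0, 1.0, 24.0))  # light load -> ('CV', 12.0, 0.5)
print(supply_output(12.0, 1.0, 6.0))   # heavy load -> ('CC', 6.0, 1.0)
```

The crossover resistance where the supply transitions between modes is simply V_set/I_limit (12 Ω in this example), which is the corner of the operating locus in Figure 1.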

Boosting voltage or current

In instances where the demands of an application exceed the capabilities of a single power supply, a practical solution is to combine two or more power supplies strategically. This can be particularly useful when users need more voltage or current than a single power supply unit can deliver.

For scenarios demanding higher voltage, connect the outputs of the power supplies in series. This arrangement adds the individual voltage outputs, resulting in an aggregate voltage that meets the requirement. When higher current is needed, connecting the power supply outputs in parallel combines the current outputs, providing a cumulative output that satisfies the application’s demands.

To achieve optimal results, it is crucial to set each power supply output independently. This ensures that the voltages or currents align harmoniously, summing up to the total desired value. By following these simple yet effective steps, users can harness the collective power of multiple power supplies, tailoring their outputs to meet the specific voltage and current requirements of the application.

For higher voltage, first set each output to the maximum current limit the load can safely handle. Then distribute the total desired voltage equally across the outputs; for example, with three outputs, set each to one third of the total desired voltage. Observe these rules:

  1. Never exceed the floating voltage rating (output terminal isolation) of any of the outputs.
  2. Never subject any of the power supply outputs to a reverse voltage.
  3. Only connect outputs that have identical voltage and current ratings in series.

For higher current, equally distribute the total desired current limit to each power supply:

  1. One output must operate in constant voltage (CV) mode and the other(s) in constant current (CC) mode.
  2. The output load must draw enough current to keep the CC output(s) in CC mode.
  3. Only connect outputs that have identical voltage and current ratings in parallel.

See Figure 2 for a visual representation of a series connection with remote sense to a load.

Figure 2 Series connection to a load with remote sense. Source: Keysight

In the parallel setup, the CV output determines the voltage at the load and across the CC outputs (Figure 3). The CV unit will only supply enough current to fulfill the total load demand.

Figure 3 Parallel connection to the load with remote sense; the CV output determines the voltage at the load and across the CC outputs. Source: Keysight
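The set-point bookkeeping for both configurations can be sketched as follows, assuming identical outputs as the rules above require (the function names and values are illustrative):

```python
def series_settings(v_total: float, i_max_load: float, n_outputs: int):
    """Series stack for higher voltage: each output carries the full load
    current, so every current limit is set to the load's safe maximum,
    and the total voltage is split equally among the outputs."""
    return [(v_total / n_outputs, i_max_load)] * n_outputs

def parallel_settings(v_out: float, i_total: float, n_outputs: int):
    """Parallel connection for higher current: every output shares the same
    voltage setting, and the total current limit is split equally."""
    return [(v_out, i_total / n_outputs)] * n_outputs

# (voltage setting, current limit) per output:
print(series_settings(48.0, 2.0, 3))    # 48 V total -> three outputs at 16 V, 2 A each
print(parallel_settings(12.0, 9.0, 3))  # 9 A total -> three outputs at 12 V, 3 A each
```

In the parallel case, slight set-point mismatch means one output lands in CV mode and the others in CC mode, which is exactly the condition the rules above call for.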

Dealing with unexpected temperature effects

Temperature fluctuations not only impact the behavior of DUTs but also exert a significant influence on the precision of measurement instruments. For example, during a chilly winter day, an examination of lithium-ion batteries at room temperature yielded unexpected results. Contrary to the user’s anticipation of a decrease, the voltage of the cells drifted upward over time.

This phenomenon was attributed to the nighttime drop in room temperature, which paradoxically led to an increase in cell voltage. This effect proved more pronounced than the anticipated decrease resulting from cell self-discharge during the day. It’s worth noting that the power supplies responsible for delivering power to these cells are also susceptible to temperature variations.

To accurately characterize the output voltage down to microvolts, it becomes imperative to account for temperature coefficients in the application of power. This adjustment ensures a more precise understanding of the voltage dynamics, accounting for the impact of temperature on both the DUTs and the measurement instruments.

The following is an example using a power supply precision module that features a low-voltage range. The test instrumentation specification table documents the valid temperature range at 23°C ±5°C after a 30-minute warm-up.

  1. To apply a temperature coefficient, engineers must treat it like an error term. Assume an operating temperature of 33°C, or 10°C above the calibration temperature of 23°C, and a voltage output of 5.0000 V.

Voltage programming temperature coefficient = ± (40 ppm + 70 μV) per °C

  2. To correct for the 10°C difference from the calibration temperature, engineers need to account for the difference between the operating temperature and the voltage range specification. The low-voltage range spec is valid at 23°C ±5°C, or up to 28°C. Apply the temperature coefficient over the 5°C difference between the operating temperature (33°C) and the edge of the low-voltage range spec (28°C):

± (40 ppm * 5 V + 70 μV) * 5°C = 40 ppm * 5 V * 5°C + 70 μV * 5°C = 1.35 mV

  3. The temperature-induced error must be added to the programming error for the low-voltage range provided in the N6761A specification table:

± (0.016% * 5 V + 1.5 mV) = 2.3 mV

  4. Therefore, the total error, programming plus temperature, will be:

± (1.35 mV + 2.3 mV) = ±3.65 mV

  5. That means the output voltage will be somewhere between 4.99635 V and 5.00365 V when attempting to set the voltage to 5.0000 V at an ambient temperature of 33°C. Since the 1.35 mV portion of this error is temperature-induced, as the temperature changes this component of the error will change, and the output of the power supply will drift. The drift with temperature can be calculated using the supply’s temperature coefficient.
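The five-step walkthrough above can be reproduced numerically. The coefficients are the ones quoted in the text for the N6761A low-voltage range; the helper function is a sketch, not a vendor API:

```python
def temp_induced_error(v_set: float, ppm_per_c: float, uv_per_c: float,
                       t_op: float, t_cal_max: float) -> float:
    """Temperature-induced error in volts: the temperature coefficient is
    applied only over the degrees beyond the specified calibration band."""
    dt = max(0.0, t_op - t_cal_max)
    return (ppm_per_c * 1e-6 * v_set + uv_per_c * 1e-6) * dt

v_set = 5.0
# Specs quoted in the text: +/-(40 ppm + 70 uV) per degC, band valid up to 28 degC
temp_err = temp_induced_error(v_set, 40, 70, t_op=33, t_cal_max=28)
prog_err = 0.016e-2 * v_set + 1.5e-3        # +/-(0.016% of 5 V + 1.5 mV)
total = temp_err + prog_err
print(f"temperature-induced: {temp_err*1e3:.2f} mV")  # 1.35 mV
print(f"programming:         {prog_err*1e3:.2f} mV")  # 2.30 mV
print(f"total:               {total*1e3:.2f} mV")     # 3.65 mV
print(f"output window: {v_set - total:.5f} V to {v_set + total:.5f} V")
```

This reproduces the ±3.65 mV total error and the 4.99635 V to 5.00365 V window derived in the steps above.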

Dealing with noise sensitive DUTs

If the DUT is sensitive to noise, engineers will want to do everything they can to minimize noise on the DC power input. The easiest thing users can do is use a low-noise power supply. But if one is not available, there are a couple of other things engineers can do.

The links between the power supply and the DUT are vulnerable to interference, particularly noise stemming from inductive or capacitive coupling. Numerous methods exist to mitigate this interference, but employing shielded two-wire cables for both load and sense connections stands out as the most effective solution. It is essential, however, to pay careful attention to the connection details.

For optimal noise reduction, connect the shield of these cables to earth ground at only one end, as illustrated in Figure 4.

Figure 4 To reduce noise, connect the shield to earth ground at only one end of the cable. Source: Keysight

Avoid the temptation to ground the shield at both ends, as this practice can lead to the formation of ground loops, as depicted in Figure 5. These loops result from the disparity in potential between the supply ground and DUT ground.

Figure 5 Diagram where the shield is connected to ground at both ends resulting in a ground loop. Source: Keysight

The presence of a ground loop current can induce voltage on the cabling, manifesting as unwanted noise at the DUT. Grounding the shield at a single end effectively minimizes the risk of ground loops and ensures a cleaner, more interference-resistant connection between the power supply and the DUT.

Also, common-mode noise is generated when common-mode current flows from inside a power supply to earth ground and produces voltage on impedances to ground, including cable impedance. To minimize the effect of common-mode current, equalize the impedance to ground from the plus and minus output terminals on the power supply. Engineers should also equalize the impedance from the DUT plus and minus input terminals to ground. Use a common-mode choke in series with the output leads and a shunt capacitor from each lead to ground to accomplish this task.

Choosing the right power supply

Navigating the selection process for a power supply demands careful consideration of the specific requirements. Whether engineers need a basic power supply or one with advanced features tailored to specific applications, choosing a power supply with excessive power capacity can result in numerous challenges.

Common issues associated with opting for a power supply with too much power include increased output noise, difficulties in setting accurate current limits, and a compromise in meter accuracy. These challenges can be particularly daunting, but developing basic skills related to power supplies can significantly aid in overcoming these design obstacles.

By cultivating a foundational understanding of power supply principles, such as the nuances of CV and CC modes, engineers can effectively address issues related to noise, current limits, and meter accuracy. This underscores the importance of not only selecting an appropriate power supply but also ensuring that users possess the essential skills to troubleshoot and optimize the performance of the chosen power supply in their specific applications. Striking a balance between power capacity and application needs, while honing basic skills, is key to achieving a harmonious and effective power supply setup.

Andrew Herrera is an experienced product marketer in radio frequency and Internet of Things solutions. Andrew is the product marketing manager for RF test software at Keysight Technologies, leading Keysight’s PathWave 89600 vector signal analyzer, signal generation, and X-Series signal analyzer measurement applications. Andrew also leads the automation test solutions such as Keysight PathWave Measurements and PathWave Instrument Robotic Process Automation (RPA) software.

Related Content


The post Conquer design challenges: Skills for power supplies appeared first on EDN.

The Intel-UMC fab partnership: What you need to know

Mon, 01/29/2024 - 13:35

Intel has joined hands with Taiwanese fab United Microelectronics Corp. (UMC) in a new twist in the continuously evolving and realigning semiconductor foundry business. What does Intel’s strategic collaboration with UMC mean, and what will these two companies gain from the tie-up? Before we delve into the motives of this semiconductor manufacturing partnership, below are the basic details of the foundry deal.

Intel and UMC will jointly develop a 12-nm process for high-growth markets such as mobile, communication infrastructure, and networking, and production of chips at this 12-nm node will begin at Intel’s Fabs 12, 22, and 32 in Arizona in 2027. The 12-nm process node will be built on Intel’s FinFET transistor design, and the two companies will share the investment.

Figure 1 UMC’s co-president Jason Wang calls this tie-up with Intel a step toward adding a Western footprint. Source: Intel

Besides fabrication technology, the two companies will jointly offer EDA tools, IP offerings, and process design kits (PDKs) to simplify the 12-nm deployment for chip vendors. It’s worth mentioning here that Intel’s three fabs in Arizona—already producing chips on 10-nm and 14-nm nodes—aim to leverage many of the tools for the planned 12-nm fabrication.

Intel claims that 90% of the tools are transferable between its 10-nm and 14-nm nodes; if Intel can employ these same tools in 12-nm chip fabrication, it will reduce additional CapEx and maximize profits.

While the mutual gains these two companies will accomplish from this collaboration are somewhat apparent, it’s still important to understand how this partnership will benefit Intel and UMC, respectively. Let’s begin with Intel, which has been plotting to establish Intel Foundry Services (IFS) as a major manufacturing operation for fabless semiconductor companies.

What Intel wants

It’s important to note that in Intel’s chip manufacturing labyrinth, IFS has access to three process nodes. First, the Intel-16 node facilitates 16-nm chip manufacturing for cost-conscious chip vendors designing inexpensive low-power products. Second is Intel 3, which can produce cutting-edge nodes using extreme ultraviolet (EUV) lithography but sticks to tried-and-tested FinFET transistors.

Third, Intel 18A is the cutting-edge process node that focuses on performance and transistor density while employing gate-all-around (GAA) RibbonFET transistors and PowerVia backside power delivery technology. Beyond these three fab offerings, IFS needs to expand its portfolio to serve a variety of applications. On the other hand, its parent company, Intel, will have a lot of free capacity while it moves its in-house CPUs to advanced process nodes.

So, while Intel moves the production of its cutting-edge process nodes like 20A and 18A to other fabs, Fabs 12, 22, and 32 in Arizona will be free to produce chips on a variety of legacy and low-cost nodes. Fabs 12, 22, and 32 are currently producing chips on Intel’s 7-nm, 10-nm, 14-nm, and 22-nm nodes.

Figure 2 IFS chief Stuart Pann calls strategic collaboration with UMC an important step toward its goal of becoming the world’s second-largest foundry by 2030. Source: Intel

More importantly, IFS will get access to UMC’s large customer base and can utilize its manufacturing expertise in areas like RF and wireless at Intel’s depreciated and underutilized fabs. Here, it’s worth mentioning Intel’s similar arrangement with Tower Semiconductor; IFS will gain from Tower’s fab relationships and generate revenues while fabricating 65-nm chips for Tower at its fully depreciated Fab 11X.

IFS is competing head-to-head with established players such as TSMC and Samsung for cutting-edge smaller nodes. Now, such tie-ups with entrenched fab players like UMC and Tower enable IFS to cater to mature fabrication nodes, something Intel hasn’t done while building CPUs on the latest manufacturing processes. Moreover, fabricating chips at mature nodes will allow IFS to open a new front against GlobalFoundries.

UMC’s takeaway

UMC, which together with TSMC formed the pure-play fab duo that spearheaded the fabless movement in the 1990s, steadily passed the cutting-edge process node baton to TSMC, eventually resorting to mature fabrication processes. It now counts more than 400 semiconductor firms as customers.

Figure 3 The partnership allows UMC to expand capacity and market reach without making heavy capital investments. Source: Intel

The strategic collaboration with IFS will allow the Hsinchu, Taiwan-based fab to enhance its existing relationships with fabless clients in the United States and better compete with TSMC in mature nodes. Beyond its Hsinchu neighbor TSMC, this hook-up with Intel will also enable UMC to better compete amid China’s rapid fab capacity buildup.

It will also give UMC access to 12-nm process technology without building a new manufacturing site or procuring advanced tools, although UMC plans to install some of its specialized tools at Fabs 12, 22, and 32 in Arizona. The most advanced node currently in UMC’s arsenal is 14 nm; by jointly developing a 12-nm node with Intel, UMC will expand its know-how in smaller chip fabrication processes. It will also open the door for the Taiwanese fab to nodes below 12 nm in the future.

The new fab order

The semiconductor manufacturing business has continuously evolved since 1987, when former TI executive Morris Chang founded the first pure-play foundry with major funding from Philips Electronics. UMC soon joined the fray, and TSMC and UMC became synonymous with the fabless semiconductor model.

Then, Intel, producing its CPUs at the latest process nodes and quickly moving to new chip manufacturing technologies, decided to claim its share in the fab business when it launched IFS in 2021. The technology and business merits of IFS aside, one thing is clear. The fab business has been constantly realigning since then.

That’s partly because Intel is the largest IDM in the semiconductor world. However, strategic deals with Tower and UMC also turn it into an astute fab player. The arrangement with UMC is a case in point. It will allow Intel to better utilize its large chip fabrication capacity in the United States while creating a regionally diversified and resilient supply chain.

More importantly, Intel will be doing it without making heavy capital investments. The same is true for UMC, which will gain much-needed expertise in FinFET manufacturing technologies as well as strategic access to semiconductor clients in North America.

Related Content


The post The Intel-UMC fab partnership: What you need to know appeared first on EDN.

RFICs improve in-vehicle communications

Thu, 01/25/2024 - 21:50

Two linear power amplifiers (PAs) and two low-noise amplifiers (LNAs) from Guerrilla RF serve as signal boosters to enhance in-cabin cellular signals. Qualified to AEC-Q100 Grade 2 standards, the GRF5507W and GRF5517W PAs and the GRF2106W and GRF2133W LNAs operate over a temperature range of -40°C to +105°C.

The GRF5507W power amp has a tuning range of 0.7 GHz to 0.8 GHz, while the GRF5517W spans 1.7 GHz to 1.8 GHz. Each device delivers up to 23 dBm of output power with adjacent channel leakage ratio (ACLR) performance of better than -45 dBc. Further, this ACLR figure is achieved without the aid of supplemental linearization schemes, like digital pre-distortion (DPD). According to the manufacturer, the ability to beat the -45-dBc ACLR metric without DPD helps meet the stringent size, cost, and power dissipation requirements of cellular compensator applications.

The GRF2106W low-noise amplifier covers a tuning range of 0.1 GHz to 4.2 GHz, and the GRF2133W spans 0.1 GHz to 2.7 GHz. At 2.45 GHz (3.3 V, 15 mA), the GRF2106W provides a nominal gain of 21.5 dB and a noise figure of 0.8 dB. A higher gain level of 28 dB is available with the GRF2133W, along with an even lower noise figure of 0.6 dB at 1.95 GHz (5 V, 60 mA).

Prices for the GRF5507W and GRF5517W PAs in 16-pin QFN packages start at $1.54 in lots of 10,000 units. Prices for the GRF2106W and GRF2133W LNAs in 6-pin DFN packages start at $0.62 and $0.83, respectively, in lots of 10,000 units. Samples and evaluation boards are available for all four components.

GRF5507W product page

GRF5517W product page

GRF2106W product page

GRF2133W product page

Guerrilla RF 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post RFICs improve in-vehicle communications appeared first on EDN.

Bluetooth LE SoC slashes power consumption

Thu, 01/25/2024 - 21:49

Renesas offers the DA14592 Bluetooth LE SoC, which the company says is its lowest power and smallest multicore Bluetooth LE device in its class. The device balances tradeoffs between on-chip memory and SoC die size to accommodate a broad range of applications, including crowd-sourced locationing, connected medical devices, metering systems, and human interface devices.

Along with an Arm Cortex-M33 CPU, the DA14592 features a software-configurable Bluetooth LE MAC engine based on an Arm Cortex-M0+. A new low-power mode enables a radio transmit current of 2.3 mA at 0 dBm and a radio receive current of 1.2 mA.

The DA14592 also supports a hibernation current of just 90 nA, which helps to extend the shelf-life of end products shipped with the battery connected. For products requiring extensive processing, the device provides an ultra-low active current of 34 µA/MHz.

Requiring only six external components, the DA14592 lowers the engineering BOM. Packaging options for the device include WLCSP (3.32×2.48 mm) and FCQFN (5.1×4.3 mm). The SoC’s reduced BOM, coupled with its small package size, helps designers minimize product footprint. Other SoC features include a sigma-delta ADC, up to 32 GPIOs, and a QSPI supporting external flash or RAM.

The DA14592 Bluetooth LE SoC is currently in mass production. Renesas also offers the DA14592MOD, which integrates all of the external components required to implement a Bluetooth LE radio into a compact module. The DA14592MOD module is targeted for world-wide regulatory certification in 2Q 2024.

DA14592 product page 

Renesas Electronics  



The post Bluetooth LE SoC slashes power consumption appeared first on EDN.

Smart sensor monitors in-meter water pressure

Thu, 01/25/2024 - 21:49

The 129CP digital water pressure sensor from Sensata allows remote monitoring by utilities to identify distribution issues, leaks, and other non-revenue water events. Integrated into residential and commercial water meters, the 129CP improves metering efficiency and reliability.

Water utilities manage extensive infrastructure to pump and deliver water to residential houses and commercial buildings. According to the International Water Association, 30% of water produced worldwide is wasted due to leaks within the network, metering inaccuracies, unauthorized consumption, or other issues.

By combining precision pressure monitoring with digital I2C communication, the 129CP sensor delivers granular insights into water usage. It monitors pressure from 0 to 232 psi (sealed gauge), while consuming less than 2 µA at a 1-Hz measurement rate.

Rugged construction enables the 129CP to survive 10 to 15 years in challenging high-moisture, high-shock environments. The device operates from a supply voltage of 1.7 V to 3.6 V over a temperature range of +2°C to +85°C. 

129CP product page

Sensata Technologies 



The post Smart sensor monitors in-meter water pressure appeared first on EDN.

MCUs integrate configurable logic block

Thu, 01/25/2024 - 21:49

Microchip’s PIC16F13145 MCUs enable the creation of hardware-based, custom combinational logic functions directly within the MCU. The integration of a configurable logic block (CLB) module in the microcontroller allows designers to optimize the speed and response time of embedded control systems.  It also helps eliminate the need for external logic components.

CLB operation is not dependent on CPU clock speed. The CLB module can be used to make logical decisions while the CPU is in sleep mode, reducing both power consumption and software reliance. According to Microchip, the CLB is easily configured using the GUI-based tool offered as part of MPLAB Code Configurator.

The PIC16F13145 family of CLB-enabled MCUs is intended for applications that use custom protocols, task sequencing, or I/O control to manage real-time control systems in industrial and automotive sectors. Devices include a 10-bit ADC with built-in computation, an 8-bit DAC, and comparators with a 50-ns response time.

Prices for the PIC16F131xx microcontrollers start at $0.47 each in lots of 10,000 units.

PIC16F13145 product page

Microchip Technology 



The post MCUs integrate configurable logic block appeared first on EDN.

100-V MOSFET employs double-sided cooling

Thu, 01/25/2024 - 21:48

Alpha & Omega’s AONA66916 100-V N-channel MOSFET comes in a DFN 5×6-mm package designed for top- and bottom-side cooling. In addition to improved thermal performance, the device offers a low on-resistance of 3.4 mΩ at 10 VGS and a wide safe operating area, making it well-suited for telecom, solar, and DC/DC applications.

When using a standard DFN 5×6-mm package, heat dissipation is primarily through the bottom contact, transferring most of the power MOSFET’s heat to the PCB. Alpha & Omega’s latest DFN package enhances heat transfer by maximizing the surface contact area between the exposed top contact and the heat sink.

The AONA66916 MOSFET provides low thermal resistance, with a top-side RthJC of 0.5°C/W maximum and a bottom-side RthJC of 0.55°C/W maximum. The top-exposed DFN 5×6-mm package of the AONA66916 shares the same footprint as the company’s standard DFN 5×6-mm package, eliminating the need to modify existing PCB layouts.

The AONA66916 MOSFET costs $1.85 each in lots of 1000 units. It is available now in production quantities, with a lead time of 14 to 16 weeks.

AONA66916 datasheet

Alpha & Omega Semiconductor 



The post 100-V MOSFET employs double-sided cooling appeared first on EDN.
